LLM code assistance#

We will set up LLM code assistance in our editor so that the LLM can interact with our code directly. You are probably already using an LLM agent; integrating it into the editor will ease the interaction and save you copy-and-paste time.

Rider features its own AI assistant, JetBrains AI, which is as well integrated as GitHub Copilot is in VS Community. As of this writing, it requires a credit card for registration. Since not every student may have a credit card, I did not try it.

Another option is GitHub Copilot, for which an extension exists in Rider. GitHub Copilot can be used for education for free after you apply for a license in GitHub Education.

In this tutorial, I used a plugin called Continue. It lets you connect to many different LLM providers, which makes the setup more open and flexible. The disadvantage is that it may not be as reliable as JetBrains AI, the official AI assistant.

Tip

If you don’t like the user experience of Continue, then try the GitHub Copilot extension. I have not spent enough time to compare Copilot and Continue.

Let us first get an API key from Groq and then use it in the Continue extension.

Creating a Groq API key#

  1. Go to https://console.groq.com.

  2. Create an account. You will be logged in to the dashboard after creating an account.

  3. Click on Create API key. You will immediately be shown an API key.

  4. Copy the API key.
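If you want to check that your key works before wiring it into the editor, you can talk to Groq's OpenAI-compatible chat endpoint directly. The sketch below, using only the Python standard library, assembles such a request; the model name is the one we select later in this tutorial, and `dummy-key` is a placeholder for your real key:

```python
import json
import urllib.request

# Groq exposes an OpenAI-compatible chat-completions endpoint (see Groq's docs).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) an authenticated chat request."""
    payload = {
        # The model we select later in this tutorial; other Groq-hosted models work too.
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# With a real key, e.g. stored in an environment variable, you could send it:
#   import os
#   with urllib.request.urlopen(build_chat_request(os.environ["GROQ_API_KEY"], "Hi")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
req = build_chat_request("dummy-key", "Say hello")
print(req.get_header("Authorization"))  # → Bearer dummy-key
```

The same key goes into the Continue configuration below; the plugin builds requests like this one for you.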

Installation of Continue#

  1. Click on .

  2. Click on Plugins. The Settings window should pop up.

  3. Click on the Marketplace tab.

  4. In the search bar, type continue.

  5. Install the Continue plugin by accepting the terms.

  6. Close the Settings window by clicking on Save.

    You should now see the Continue icon on the right sidebar, the file continue_tutorial.py will open, and you will get a notification that Full Line Code Completion is disabled. You can dismiss these.

Configuration of Continue#

We will now configure the plugin.

(Optional) If you want to disable sending usage data to Continue:

  1. On the rightmost bar, click on . The Continue chat settings will show up.

  2. Go to Allow Anonymous Telemetry and opt out of telemetry if you desire.

  3. Close the settings, for example by clicking on the Continue icon again.

Now we will configure which LLM we want to use:

  1. In the Continue window, click on Or, configure your own models. A configuration window will open.

  2. Right below the Connect button, click on Click here to view more providers. Add Chat Model window will pop up.

  3. Select the provider Groq.

  4. In the model drop-down menu, select Autodetect.

  5. Paste the API key you copied earlier into the corresponding field.

  6. Click Connect. The tutorial file continue_tutorial.py and the text-based configuration file config.yaml will open up in new tabs. You can close these two files.

    Additionally, the Models window will be open. Select openai/gpt-oss-120b for both Chat and Apply.

    Click on Models again to close the model settings.
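For reference, the generated config.yaml should look roughly like the sketch below. The exact fields depend on your Continue version, and the key shown is a placeholder, not a real value:

```yaml
# Sketch of a Continue config.yaml for Groq; yours may differ by plugin version.
name: My Assistant
version: 1.0.0
models:
  - name: Groq gpt-oss-120b
    provider: groq
    model: openai/gpt-oss-120b
    apiKey: YOUR_GROQ_API_KEY   # placeholder, replace with your own key
    roles:
      - chat
      - apply
```

If something goes wrong in the graphical configuration, you can always edit this file directly instead.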

Usage example: explanation#

  1. We are going to try the agent on our previous code. Click back to the tab with the source file you wrote earlier – the DKK EUR converter.

  2. On the Continue tab, if a subwindow called Create Your Own Assistant is open, you can close it.
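If you no longer have that file at hand, a minimal stand-in along these lines will do. This is a sketch in Python, on the assumption that your converter was a small console program; the function name and the fixed, illustrative exchange rate are my own choices, not a live quote:

```python
# A minimal DKK -> EUR converter, usable as a stand-in for the earlier exercise.
# The exchange rate is a fixed, illustrative value, not live market data.
RATE_DKK_PER_EUR = 7.46

def dkk_to_eur(dkk: float) -> float:
    """Convert an amount in Danish kroner to euros at the fixed rate above."""
    return dkk / RATE_DKK_PER_EUR

print(f"100.00 DKK = {dkk_to_eur(100):.2f} EUR")  # → 100.00 DKK = 13.40 EUR
```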

We will ask the agent to explain the code for us. Before you do, take a minute to go through the code line by line and try to guess how the program creates its output. It is completely acceptable if you don’t understand most of the lines; you will gradually improve. Continue after your attempt.

  1. Type explain line by line in the prompt field and press Alt+Enter. Without Alt, Continue does not send your code to the agent.

    Compare your explanation with the agent’s.

LLM-based inline code suggestions can affect your learning negatively#

One of the learning goals of this course is to identify and explain core programming concepts without an LLM, so that you can criticize the output of an LLM later, when you become more proficient. To learn programming concepts, you have to face some difficulty instead of flying on autopilot. My recommendations for reaching the learning goals of this course are:

  1. First write the programs yourself; only as a second step, ask the LLM for feedback, improvements, or explanations.

  2. If the extension you use provides code suggestions while you write code – so-called inline code suggestions – deactivate this feature. Compared to the IDE’s typical code completion, LLM-based suggestions are long and detailed. This can be too much help while you are learning the basics. Once you are confident, i.e., able to write programs yourself, you can carefully activate the feature again.

From what I have heard, GitHub Copilot shows inline code suggestions as ghost text by default. Search for copilot disable inline suggestions to find out how to disable them.

Continue’s inline suggestions work only with particular LLMs; in our configuration, they are not activated. Continue suggests code changes only after you ask in the chat window, and the changes must be applied with mouse clicks. First try to understand these changes, and then type them in manually until you become more confident.