AI code assistants now offer direct integration into popular Linux editors such as Visual Studio Code and GNOME Builder, providing real-time code suggestions, chat-based support, and automated code review. These tools reduce repetitive coding tasks, help identify bugs, and speed up project delivery by leveraging models such as Google Gemini alongside open-source alternatives run locally through tools like Ollama. Below, you'll find detailed instructions for setting up and using the most effective AI code assistants in VS Code and, where available, GNOME Builder on Linux.

Using Gemini Code Assist in Visual Studio Code on Linux

Gemini Code Assist, developed by Google, provides a free and robust AI coding assistant directly within Visual Studio Code. It supports code completion, chat-based assistance, and context-aware code modifications. The extension is available for Linux, making it accessible to a broad range of developers.

Step 1: Install Visual Studio Code on your Linux system if it's not already present. On Debian/Ubuntu, the code package comes from Microsoft's repository rather than the stock distribution repositories, so either add that repository first or download the package from the official website.

sudo apt update
sudo apt install code
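
If the code package isn't available through your configured repositories, one alternative is to download the .deb from the official website and install it directly; the filename below is a placeholder for whatever version you downloaded:

sudo apt install ./code_*_amd64.deb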

Step 2: Open Visual Studio Code and navigate to the Extensions tab on the left sidebar. In the search box, type gemini code assist and select the official Gemini Code Assist extension. Click Install to add it to your editor.

Step 3: Once installed, a Gemini tab appears in the left panel. Click the tab, then sign in with your Google account to activate the assistant. This step is required for authentication and access to the AI features.

Step 4: To test the assistant, open or create a new project folder. You can interact with Gemini Code Assist via the chat interface or by selecting code in the editor and choosing suggested actions such as Explain this or Generate unit tests. For chat-based prompts, use the Gemini sidebar to ask for code generation, bug fixes, or explanations.

Step 5: Accept or modify code suggestions as needed. Gemini Code Assist offers diffs for proposed changes, letting you review and selectively apply updates. Use the "Diff with Open File" option to see side-by-side comparisons before merging changes.

Step 6: For code completion, simply start writing code in the editor. Gemini Code Assist will provide context-aware suggestions that you can accept by pressing the Tab key.

Gemini Code Assist also supports targeting specific code by selecting it and then choosing or typing a prompt. This is useful for code reviews, refactoring, or generating documentation.


Running Open-Source AI Code Assistants Locally with Ollama and Continue

For those who want to run AI models entirely on their own hardware, Ollama and Continue offer a powerful combination. Ollama lets you download and run open-source large language models (LLMs) locally, while Continue integrates these models into VS Code and JetBrains editors. This approach keeps your code and data private and avoids cloud dependencies.

Step 1: Download and install Ollama for Linux. Follow the provided instructions to complete the installation.
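
At the time of writing, Ollama's official Linux installer can be run with a single command (review the script first if you'd rather not pipe it straight to a shell):

curl -fsSL https://ollama.com/install.sh | sh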

Step 2: Install the Continue extension for Visual Studio Code. Go to the Extensions Marketplace, search for Continue, and click Install.

Step 3: Launch Ollama and download your preferred AI model. For example, to use Mistral’s Codestral model for code completion and chat, run:

ollama run codestral
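
To confirm the model downloaded and the local Ollama server is reachable (it listens on port 11434 by default; adjust if you've changed OLLAMA_HOST), you can run:

ollama list
curl http://localhost:11434/api/tags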

Step 4: Configure Continue to use the Ollama model. Click the gear icon in the bottom right corner of Continue in VS Code to open config.json. Add the following configuration, adjusting the model names as needed:

{
  "models": [
    {
      "title": "Codestral",
      "provider": "ollama",
      "model": "codestral"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "ollama",
    "model": "codestral"
  }
}

Step 5: To use multiple models (e.g., DeepSeek Coder for autocomplete and Llama 3 for chat), pull each model with ollama pull and reference them in config.json accordingly, as sketched below. The Ollama server loads the requested model on demand, so you don't need a separate terminal session per model.
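
A minimal sketch of that split configuration, using the same schema as above; the tags llama3 and deepseek-coder are example model names and should match what you actually pulled:

{
  "models": [
    {
      "title": "Llama 3",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "DeepSeek Coder",
    "provider": "ollama",
    "model": "deepseek-coder"
  }
}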

Step 6: Continue also supports using local embeddings for codebase-aware context. Download the nomic-embed-text model with:

ollama pull nomic-embed-text

Then merge the following key into your existing config.json, alongside the models and tabAutocompleteModel entries rather than replacing them:

{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}

With this setup, Continue can retrieve relevant code snippets from your codebase and provide smarter, context-aware answers.


Building a Personal AI Coding Assistant with Gemma (Advanced, Local Deployment)

For advanced users who want full control and the ability to customize AI models, Google's Gemma models can be downloaded and run locally as a web service, then connected to a custom VS Code extension. This approach is ideal for those with strict data privacy requirements or who want to fine-tune models for specific codebases.

Step 1: Install required dependencies. On your Linux system, install Python 3 (with pip and venv), Node.js, npm, and git:

sudo apt update
sudo apt install git python3 python3-pip python3-venv nodejs npm

Step 2: Clone the Gemma Cookbook repository:

git clone https://github.com/google-gemini/gemma-cookbook.git

Step 3: Navigate to the Demos/personal-code-assistant/ directory. Optionally, use sparse checkout to only download relevant files.
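
If you'd prefer to avoid downloading the full repository in Step 2, one option is a sparse clone (requires Git 2.25 or newer):

git clone --filter=blob:none --sparse https://github.com/google-gemini/gemma-cookbook.git
cd gemma-cookbook
git sparse-checkout set Demos/personal-code-assistant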

Step 4: Set up the Gemma web service. Enter the gemma-web-service directory, create and activate a Python virtual environment, and install dependencies:

cd Demos/personal-code-assistant/gemma-web-service/
python3 -m venv venv
source venv/bin/activate
chmod +x setup_python.sh
./setup_python.sh

Step 5: Obtain Kaggle credentials and access to the Gemma models. Create a .env file in the web service directory with your Kaggle username and API key:

KAGGLE_USERNAME=your_username
KAGGLE_KEY=your_api_key
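
Because this file contains credentials, it's sensible to restrict its permissions:

chmod 600 .env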

Step 6: Run the web service with:

./run_service.sh

The service will be accessible at http://localhost:8000/ by default.
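
As a quick sanity check that the service is up (the exact response body depends on the service), query it from another terminal:

curl http://localhost:8000/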

Step 7: Set up the custom VS Code extension (Pipet Code Agent) by installing dependencies with npm in the pipet-code-agent-2 directory:

cd Demos/personal-code-assistant/pipet-code-agent-2/
npm install

Step 8: Open the extension project in VS Code, run it in Extension Development Host mode, and configure the extension to connect to your Gemma web service by setting the gemma.service.host setting to localhost or your server address.
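
In VS Code's user or workspace settings.json, that configuration might look like the sketch below; whether a port belongs in the host value depends on the extension's settings schema, so treat this as an assumption to verify:

{
  "gemma.service.host": "localhost"
}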

Step 9: Use the Pipet commands in the Command Palette to send code, comments, or prompts to your local Gemma instance for review, generation, or explanation. You can modify or add new commands by editing the extension's TypeScript files.
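
As an illustration of how a new command could be wired up, here is a hypothetical sketch using the standard VS Code extension API. The command ID pipet.summarizeSelection and the message shown are invented for this example; the actual request to the Gemma web service would follow the patterns already used by the extension's existing commands.

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // Hypothetical extra command; register it alongside the existing Pipet commands.
  context.subscriptions.push(
    vscode.commands.registerCommand('pipet.summarizeSelection', async () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        return;
      }
      const selected = editor.document.getText(editor.selection);
      // Here you would POST `selected` to the local Gemma web service,
      // using the same request format as the extension's existing commands.
      vscode.window.showInformationMessage(`Selected ${selected.length} characters for Gemma.`);
    })
  );
}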

This method provides full control and privacy, but requires more setup and hardware resources. It's suitable for users needing custom AI workflows or offline capability.


Other Free AI Code Assistants for VS Code

Several other free AI-powered extensions are available in the VS Code Marketplace, offering a range of features from code completion to chat-based help:

  • Codeium: Real-time code completion and limited chat support. Install via the Extensions Marketplace, register an account, and activate within VS Code.
  • Bito: Coding chatbot for answering questions and assisting with code generation. Requires Gmail-based registration.
  • Hugging Face Code Auto Complete: Multiline code completion using Hugging Face models. Requires a Hugging Face account and access token.

To install these, search for their names in the VS Code Extensions tab, follow the account setup instructions, and input any required API keys or tokens as prompted.


AI Code Assistants in GNOME Builder

While Visual Studio Code currently offers the broadest selection of AI code assistants on Linux, GNOME Builder is actively developing support for AI integrations. Some open-source projects and plugins may enable basic code completion or chat features, typically by connecting to external LLMs or using language server protocols. For the most up-to-date options, check the GNOME Builder plugin directory or community forums. Installation steps and capabilities may vary depending on the chosen plugin or backend model.


AI code assistants now bring context-aware suggestions, chat-driven support, and automated code review directly into Linux editors. By following the steps above, you can accelerate development, reduce manual coding, and focus more on creative problem-solving, whether you prefer cloud-based, local, or fully open-source solutions.