Installation Instructions for Local Developer AI Support

Installing AI locally (Ollama)

You can set up Ollama either by installing it directly on your host or by running the official Docker image.

Install on the local host

This installation option has the advantage of out-of-the-box GPU support. For Windows, only a preview version is currently available.

Visit https://ollama.com/download for installation instructions.
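
On Linux, the download page offers a one-line install script; a minimal sketch, assuming the script URL shown on the official site:

curl -fsSL https://ollama.com/install.sh | sh

On macOS and Windows, download the installer package from the same page instead.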

Configure Ollama to run the desired large language model:

ollama run mistral
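
If you prefer to download the model without opening an interactive chat, you can pull it first and confirm it is available; for example:

ollama pull mistral
ollama list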

Run Ollama with Docker

If Docker is already set up, this is probably the fastest option, but it might require some additional steps to enable GPU support.

Use the Docker image provided by Ollama. Instructions can be found at: https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image

Set up the language model provider:
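
Start the Ollama container first. A typical invocation from the linked instructions mounts a named volume for model storage and exposes the default API port 11434; the exact flags depend on your setup (add --gpus=all only if the NVIDIA Container Toolkit is installed):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama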

Then configure Ollama to run the desired large language model:

docker exec -it ollama ollama run mistral
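
To verify that the model responds before wiring up the IDE, you can query Ollama's HTTP API directly; a quick check, assuming the default port 11434 and an example prompt:

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Say hello"}'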

Install continue.dev

Follow https://continue.dev/docs/quickstart to install it in your IDE.

Configure continue.dev to point to your running Ollama instance and define the large language model (e.g. mistral): open c:\Users\username\.continue\config.json (on Linux/macOS the file is at ~/.continue/config.json) and add the following under models:

"models": [
  {
    "title": "Mistral@LocalOllama",
    "provider": "ollama",
    "model": "mistral",
    "apiBase": "http://localhost:11434/"
  }
],
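
Before restarting the IDE, it can help to confirm that the apiBase above actually reaches your Ollama instance, for example by listing the locally available models:

curl http://localhost:11434/api/tags

The response should include mistral (or whichever model you pulled).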

Detailed instructions can be found at: https://continue.dev/docs/model-setup/select-provider

To explore the available development-support options in continue.dev, see https://continue.dev/docs/how-to-use-continue