Using Copilot with Locally and Privately Deployed Models via Ollama
Recently, Mistral AI released Codestral, a highly efficient code-generation model. At only 22B parameters, my MacBook handles it easily, using around 15GB of memory during operation. I wanted to integrate it with VSCode as a replacement for GitHub Copilot, so that my code never leaves my machine.

First, make sure Ollama and the codestral model are installed; following the official links makes this straightforward. To get a Copilot-like experience in VSCode, you install both codestral and starcoder2 via Ollama (codestral typically handles chat and editing, while the smaller starcoder2 serves fast tab autocompletion), then install the Continue plugin in VSCode and point it at Ollama, as sketched below.
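As a minimal sketch, pulling the two models with the Ollama CLI looks like this. The `starcoder2:3b` tag is an assumption on my part; the Ollama library offers several sizes, so pick whichever fits your machine:

```sh
# Pull Codestral (22B) for chat/editing tasks.
ollama pull codestral

# Pull a small StarCoder2 for low-latency tab autocompletion
# (the 3b tag is one option; larger tags also exist).
ollama pull starcoder2:3b

# Verify that both models are available locally.
ollama list
```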
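With the models pulled and the Continue extension installed, you connect Continue to Ollama through its config file. Below is a minimal sketch of `~/.continue/config.json`, assuming Continue's classic JSON config format and Ollama running on its default local port; the `title` values are arbitrary display labels, and newer Continue releases may use a YAML config instead:

```json
{
  "models": [
    {
      "title": "Codestral (local)",
      "provider": "ollama",
      "model": "codestral"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2 (local)",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

After saving the config, reload VSCode: Continue's chat panel should now answer from Codestral, and inline tab completions should come from StarCoder2, with everything served locally by Ollama.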