Build Your Own AI Coding Assistant in JupyterLab with Ollama and Hugging Face

Jupyter AI brings generative AI capabilities straight into the JupyterLab interface. Having a local AI assistant guarantees privacy, reduces latency, and works offline, which makes it a powerful tool for engineers. In this article, we will learn how to set up a local AI coding assistant in JupyterLab using Jupyter AI, Ollama, and Hugging Face. By the end of this article, you will have a fully functional coding assistant in JupyterLab that can autocomplete code, fix errors, create new notebooks from scratch, and more, as shown in the screenshot below.
Jupyter AI is under active development, so some features may break. As of this writing, I have tested the setup to ensure it works, but expect potential changes as the project evolves. Also, the performance of the assistant depends on the model you choose, so make sure you select one that suits your use case.
First things first: what is Jupyter AI? As the name suggests, Jupyter AI is a JupyterLab extension for generative AI. This powerful tool turns your standard Jupyter Notebook or JupyterLab environment into a generative AI playground. The best part? It also works seamlessly in environments like Google Colaboratory and Visual Studio Code. The extension does all the heavy lifting, providing access to various model providers (both open and closed source) right inside your Jupyter environment.

Setting up the environment involves three main components, plus one optional one:
- JupyterLab
- The Jupyter AI extension
- Ollama (to run models locally)
- [Optional] Hugging Face (for GGUF models)
Honestly, getting the assistant to help with code is the simple part. The tricky part is making sure all of the installation is done correctly, so it is important to follow the steps in order.
1. Installing the Jupyter AI extension
It is recommended to create a new environment specifically for Jupyter AI to keep your existing environment clean and organized. Once done, follow the steps below. Jupyter AI requires JupyterLab 4.x or Jupyter Notebook 7+, so make sure you have the latest version of JupyterLab installed. You can install JupyterLab with pip or conda:
# Install JupyterLab 4 using pip
pip install jupyterlab~=4.0
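If you prefer conda, an equivalent installation from the conda-forge channel looks like this:
# Install JupyterLab 4 using conda (conda-forge channel)
conda install -c conda-forge jupyterlab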
Next, install the Jupyter AI extension as follows.
pip install "jupyter-ai[all]"
This installation method includes every optional dependency (so it supports Hugging Face, Ollama, etc., out of the box). As of this writing, Jupyter AI supports the following model providers:

If you experience errors during the Jupyter AI installation, install Jupyter AI manually using pip without the [all] optional dependency group. This way you can control which models are available in your Jupyter AI environment. For example, to install Jupyter AI with additional support for Ollama models, use the following:
pip install jupyter-ai langchain-ollama
The dependencies you need vary by model provider (see the table above). Next, restart your JupyterLab instance. If you see a chat icon in the left sidebar, this means everything is installed correctly. With Jupyter AI, you can chat with models or use inline magic commands directly within your notebooks.

2. Setting up Ollama for local models
Now that Jupyter AI is ready, we need to set up a model. While Jupyter AI integrates directly with Hugging Face models, some models may not work properly. Instead, Ollama provides a more reliable way to load models locally.
Ollama is a handy tool for running large language models locally. It allows you to download pre-built AI models from its library. Ollama supports all the major platforms (macOS, Windows, Linux), so pick the option for your OS and download and install it from the official website. After installation, verify that it is set up correctly by running:
Ollama --version
------------------------------
ollama version is 0.6.2
Also, make sure your Ollama server is running. You can check by calling ollama serve in your terminal:
$ ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
If the server is already running, you will see an error like the one above, which confirms that Ollama is installed and running.
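If you want an additional sanity check, Ollama listens on port 11434 by default, and querying the root endpoint returns a short status message:
# Quick health check against Ollama's default local endpoint
curl http://127.0.0.1:11434
A response of "Ollama is running" confirms the server is up.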
Option 1: Using a pre-built model
Ollama provides a library of models that are ready to download and run locally. To start using a model, download it with the pull command. For example, to use qwen2.5-coder:1.5b, run:
ollama pull qwen2.5-coder:1.5b
This will download the model locally. To verify that the model was downloaded, run:
ollama list
This will list all the models you have downloaded and stored on your system using Ollama.
Option 2: Loading a custom model
If the model you need is not available in Ollama's library, you can load a custom model by creating a Modelfile that specifies the model's source. For detailed instructions on this process, refer to the Ollama documentation.
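As a minimal sketch, a Modelfile that points Ollama at a locally downloaded GGUF file (the file name here is a placeholder) can be as short as a single line:
# Modelfile: point Ollama at a local GGUF file (path is hypothetical)
FROM ./my-model.gguf
You would then register and run the model with:
ollama create my-model -f Modelfile
ollama run my-model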
Option 3: GGUF models directly from Hugging Face
Ollama now supports GGUF models directly from Hugging Face, including community and private models. This means that if you want to use a GGUF model straight from the Hugging Face Hub, you can do so without creating a custom Modelfile as described in Option 2 above.
For example, to load a 4-bit quantized Qwen2.5-Coder-1.5B-Instruct model from Hugging Face:
1. First, enable Ollama under your Hugging Face Local Apps settings.

2. On the model page, select Ollama from the Use this model dropdown, as shown below.
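Selecting Ollama there gives you a ready-made command to paste into your terminal. For the model above, it looks roughly like the following (the exact repository path and quantization tag are illustrative, so copy the command from the model page itself):
# Pull and run a GGUF model straight from the Hugging Face Hub
ollama run hf.co/Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF:Q4_K_M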

That's about it. In JupyterLab, open the Jupyter AI chat interface from the sidebar. At the top of the chat panel, or in its settings (gear icon), there is a dropdown or field for selecting the model provider and model ID. Choose Ollama as the provider, and enter the model name exactly as shown by ollama list in the terminal (e.g., qwen2.5-coder:1.5b). Jupyter AI will connect to the local Ollama server and load the requested model. No API keys are required since everything runs locally.
- Set the language model, embedding model, and inline completion model based on the models you selected.
- Save the settings and return to the chat view.

This configuration connects Jupyter AI to the Ollama server running locally on your machine. While inline completion should be enabled by this process, if it is not, you can enable it by clicking the Jupyternaut icon, found in the bottom status bar of the JupyterLab window to the left of the mode indicator (e.g., Mode: Command). This opens a dropdown menu where you can select Enable completions by Jupyternaut to activate the feature.

Once set up, you can use the AI coding assistant for a variety of tasks such as code autocompletion, debugging help, and generating new code from scratch. It is important to note that you can interact with the assistant either through the chat sidebar or directly in notebook cells using %%ai magic commands. Let's look at both ways.
Coding assistant via the chat interface
This is straightforward. You can simply chat with the model to perform an action. For example, here is how we can ask the model to explain an error in the code and fix it, by selecting the relevant code in the notebook.

You can also ask the AI to generate code from scratch, simply by describing what you need in natural language. Here is a Python function that returns all prime numbers up to a given positive integer n, generated by Jupyternaut.
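For reference, a sieve-based implementation along these lines might look like the sketch below (this exact code is illustrative, not the assistant's verbatim output):

def primes_up_to(n):
    """Return a list of all prime numbers up to and including n."""
    if n < 2:
        return []
    # Sieve of Eratosthenes: mark multiples of each prime as composite
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]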

Coding help within notebook cells or the IPython shell:
You can also interact with the models directly inside a Jupyter notebook. First, load the IPython extension:
%load_ext jupyter_ai_magics
Now, you can use the %%ai cell magic to interact with your chosen language model using a specified prompt. Let's recreate the example above, but this time inside the notebook cells.
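Here is a minimal sketch of that request as a cell magic; the ollama:qwen2.5-coder:1.5b string assumes the model pulled earlier, so adjust it to whatever ollama list shows on your machine:

%%ai ollama:qwen2.5-coder:1.5b -f code
Write a Python function that returns all prime numbers up to a given positive integer n.

The -f code flag asks Jupyter AI to render the response as a code cell rather than markdown.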

For more information and options, you can refer to the official documentation.
As you can see from this article, Jupyter AI makes it easy to set up a coding assistant, as long as you have the right installations and configuration in place. I used a relatively small model, but you can choose from a wide range of models supported by Ollama or Hugging Face. The key point is that using a local model provides major benefits: it preserves privacy, reduces latency, and removes the dependency on proprietary model providers. However, running large models locally with Ollama can be resource-intensive, so make sure you have enough RAM. With the rapid pace at which open-source models are improving, you can achieve comparable performance even with these alternatives.