Run Qwen3.5 on an Old Laptop: A Lightweight Local AI Setup Guide

Photo by the Author
# Introduction
Running a high-performance AI model locally no longer requires a high-end workstation or an expensive cloud setup. With lightweight tools and small open-source models, you can now turn an old laptop into a capable local AI environment for coding, testing, and agent-style workflows.
In this tutorial, you will learn how to run Qwen3.5 using Ollama and connect it to OpenCode to create a simple local agent setup. The goal is to keep everything straightforward, accessible, and beginner-friendly, so you can get a working local AI assistant without dealing with a complex stack.
# Installing Ollama
The first step is to install Ollama, which makes it easy to use large language models locally on your machine.
If you use Windows, you can download Ollama directly from the official Download Ollama for Windows page and install it like any other application, or run the following command in PowerShell:
irm | iex

The Ollama download page also includes installation instructions for Linux and macOS, so you can follow the steps there if you are using a different operating system.
Once the installation is complete, you will be ready to launch Ollama and download your first local model.
# Starting Ollama
In most cases, Ollama starts automatically after installation, especially when you launch it for the first time. That means you may not need to do anything else before using the model in the environment.
If the Ollama server is not running, you can start it manually with the `ollama serve` command in your terminal.
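By default, the Ollama server listens on localhost port 11434, so a quick way to confirm it is up is to hit that endpoint. Here is a minimal sketch (the helper name is my own):

```python
import urllib.request
import urllib.error

def is_ollama_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url.

    Ollama's root endpoint replies when the server is running;
    any connection error means it is not reachable.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example:
# print("Ollama running:", is_ollama_up())
```

If this returns False, run `ollama serve` and check again.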
# Running Qwen3.5 Locally
Once Ollama is up and running, the next step is to download and run Qwen3.5 on your machine.
If you visit the Qwen3.5 model page on Ollama, you will see variants in many sizes, from large versions down to smaller, lighter options.
In this tutorial, we will use the 4B version because it offers a good balance between performance and hardware requirements. It is a viable option for older laptops and typically requires around 3.5 GB of RAM.
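That ~3.5 GB figure lines up with a rough back-of-envelope estimate, assuming 4-bit quantized weights plus some runtime overhead (the quantization level and overhead here are my assumptions; exact usage varies by build):

```python
def model_memory_gb(n_params_billions: float, bits_per_weight: int, overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate: quantized weight size plus runtime/KV-cache overhead."""
    weights_gb = n_params_billions * bits_per_weight / 8  # billions of params -> GB
    return weights_gb + overhead_gb

# A 4B model at 4-bit: 4 * 4 / 8 = 2 GB of weights, plus ~1.5 GB overhead.
print(model_memory_gb(4, 4))  # 3.5
```

The same arithmetic explains why larger variants quickly outgrow an old laptop's RAM.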

To download and run the model, use the following command in your terminal:
ollama run qwen3.5:4b
When you first run this command, Ollama will download the model files to your machine. Depending on your internet speed, this may take a few minutes.

After the download completes, Ollama may take a moment to load the model and set up everything needed to run it locally. When it's ready, you'll see a terminal chat interface where you can start prompting the model directly.

At this point, you can already run Qwen3.5 in the terminal for quick local chats, simple tests, and light coding help before connecting it to OpenCode for more advanced workflows.
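Beyond the interactive terminal, Ollama also exposes a local REST API on port 11434, which is how external tools talk to it. A minimal sketch of calling the pulled model from Python (the helper names are mine):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
# print(ask("qwen3.5:4b", "Explain list comprehensions in one sentence."))
```

This is handy for scripting quick tests against the same model you just pulled.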
# Installing OpenCode
After setting up Ollama and Qwen3.5, the next step is to install OpenCode, a local code agent that can work with models running on your machine.
You can visit the OpenCode website to explore the available installation options and learn more about how it works. In this tutorial, we'll use the quick install method because it's the easiest way to get started.

Run the following command in your terminal:
curl -fsSL | bash
The installer walks you through the setup process and installs the necessary dependencies, including Node.js when needed, so you don't have to configure everything manually.

# Using OpenCode with Qwen3.5
Now that both Ollama and OpenCode are installed, you can connect OpenCode to your Qwen3.5 model and start using it as a lightweight coding agent.
If you look at the Qwen3.5 page on Ollama, you will see that Ollama now supports easy integration with external AI tools and coding agents. This makes it much easier to use local models in agent-style workflows instead of only interacting with them in the terminal.

To launch OpenCode with the Qwen3.5 4B model, run the following command:
ollama launch opencode --model qwen3.5:4b
This command tells Ollama to start OpenCode backed by your local Qwen3.5 model. Once it launches, you will be taken to the OpenCode interface with Qwen3.5 4B already connected and ready to use.

# Building a Simple Python Project with Qwen3.5
Once OpenCode is working with Qwen3.5, you can start giving it simple prompts to build software directly from your terminal.
In this tutorial, we asked it to build a small Python game project from scratch using the following prompt:
Create a new Python project and build a modern Guess the Word game with clean code, simple gameplay, score tracking, and an easy-to-use interface.

After a few minutes, OpenCode generated the project structure, wrote the code, and managed the setup needed to get the game running.
We also asked it to install any necessary dependencies and test the project, which made the workflow feel much closer to working with a lightweight local code agent than a simple chatbot.

The end result was a fully functional Python game that ran smoothly on the laptop. The game was simple, the code structure was clean, and the score tracking worked as expected.

For example, if you enter a correct letter, the game immediately reveals that letter in the hidden word, which shows that the game logic works well out of the box.
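The core of such a Guess the Word game, including the letter-reveal behavior just described, fits in a few lines. This is my own minimal sketch, not the code OpenCode generated:

```python
def reveal(word: str, guesses: set[str]) -> str:
    """Show guessed letters and mask the rest with underscores."""
    return " ".join(ch if ch.lower() in guesses else "_" for ch in word)

def play(word: str = "python", max_wrong: int = 6) -> None:
    """Run one interactive round: guess letters until the word is revealed."""
    guesses: set[str] = set()
    wrong = 0
    while wrong < max_wrong:
        print(reveal(word, guesses))
        if all(ch.lower() in guesses for ch in word):
            print("You win!")
            return
        letter = input("Guess a letter: ").strip().lower()
        if not letter or letter in guesses:
            continue  # ignore empty or repeated guesses
        guesses.add(letter)
        if letter not in word.lower():
            wrong += 1
            print(f"Wrong ({wrong}/{max_wrong})")
    print(f"Out of tries! The word was {word!r}.")

# play()  # start an interactive round
```

A correct guess is reflected immediately on the next `reveal` call, which is exactly the behavior observed in the generated game.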

# Final Thoughts
I was really impressed with how easy it was to get a local agent setup working on an old laptop with Ollama, Qwen3.5, and OpenCode. With a simple, low-cost setup, it works surprisingly well and makes local AI feel more usable than most people expect.
That said, it's not all smooth sailing.
Because this setup relies on a small, scaled-down model, the results are not always robust enough for complex coding tasks. In my experience, it can handle simple projects, basic writing, research assistance, and general-purpose tasks well, but it starts to struggle when a software engineering task becomes more complex or multi-step.
Another issue I ran into repeatedly was that the model would sometimes stall mid-task. When that happened, I had to manually type "go ahead" to get it to continue and finish the task. That's manageable for testing, but it makes the workflow less reliable if you want consistent output for large coding tasks.
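One way to soften the stalling issue is a thin wrapper that automatically re-prompts with "go ahead" when the model returns an empty reply. A hypothetical sketch with the model call stubbed out (`ask` stands in for whatever client function you use):

```python
from typing import Callable

def run_with_nudges(ask: Callable[[str], str], prompt: str, max_nudges: int = 3) -> str:
    """Call the model, re-prompting with 'go ahead' while it returns empty replies."""
    reply = ask(prompt)
    nudges = 0
    while not reply.strip() and nudges < max_nudges:
        reply = ask("go ahead")  # same nudge you would type manually
        nudges += 1
    return reply
```

This keeps the manual "go ahead" habit, but automates it up to a bounded number of retries so a stalled run does not silently hang.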
Abid Ali Awan (@1abidiawan) is a data science professional with a passion for building machine learning models. Currently, he focuses on content creation and technical blogging about machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.



