
5 Useful Docker Containers for Agentic Developers


Photo by the Author

# Introduction

The rise of frameworks such as LangChain and CrewAI has made building AI agents easier than ever. However, developing these agents often means hitting API rate limits, handling high-dimensional data, or exposing local servers to the internet.

Instead of paying for cloud services during the prototyping phase or polluting your host machine with dependencies, you can use Docker. With a single command, you can spin up the infrastructure that makes your agents smarter.

Here are 5 essential Docker containers that every AI agent developer should have in their toolkit.

# 1. Ollama: Run Local Language Models

Ollama dashboard

When building agents, sending every call to a cloud provider like OpenAI can be expensive and slow. Sometimes you need a fast, small model for specific tasks, like grammar correction or classification.

Ollama lets you run large open-source language models (LLMs), such as Llama 3 and Mistral, directly on your local machine. By running it in a container, you keep your system clean and can easily switch between models without setting up a complex Python environment.

Privacy and cost are two of the biggest concerns when building agents. The Ollama Docker image makes it easy to serve models like Llama 3 or Mistral behind a REST API.

## Why It Matters to Agentic Developers

Instead of sending sensitive data to external APIs like OpenAI, you can give your agent a “brain” that lives inside your own infrastructure. This is important for enterprise agents that handle proprietary data. By running docker run ollama/ollama, you immediately have a local inference server that your agent code can call to generate text or reason about tasks.

## Quick Start

To download and run the Mistral model with the Ollama container, run the following command. This maps the port and keeps the models persistent on your local drive.

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Once the container is running, pull the model by executing a command inside it:

docker exec -it ollama ollama run mistral

## Why It's Useful for Agentic Developers

Now you can point your agent's LLM client at http://localhost:11434. This gives you a local, API-compatible endpoint for rapid prototyping and ensures that your data never leaves your machine.
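As a minimal sketch, assuming the requests package is installed and the mistral model has been pulled as above, calling the endpoint from agent code looks like this:

```python
import requests

# Ask the local Ollama server for a completion. /api/generate is
# Ollama's standard REST endpoint for one-off text generation.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",  # the model pulled in the step above
        "prompt": "Classify this ticket as bug or feature: the app crashes on login.",
        "stream": False,     # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(response.json()["response"])
```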

## Key Benefits

  • Data Privacy: Your prompts and data never leave your machine
  • Cost: No per-request API fees to worry about
  • Latency: Faster responses when using local GPUs

Read more: Ollama Docker Hub

# 2. Qdrant: A Vector Database for Agent Memory

Qdrant dashboard

Agents need memory to remember past conversations and background information. To give an agent long-term memory, you need a vector database. These databases store numerical representations (embeddings) of text, allowing your agent to search for similar information later.

Qdrant is a high-performance, open-source vector database built in Rust. It's fast, reliable, and offers both gRPC and REST APIs. Running it in Docker gives you a production-grade memory system for your agents instantly.

## Why It Matters to Agentic Developers

To build a retrieval-augmented generation (RAG) agent, you need to store document embeddings and retrieve them quickly. Qdrant acts as the agent's long-term memory. When a user asks a question, the agent converts it to a vector, searches Qdrant for similar vectors representing relevant information, and uses that context to generate an answer. Running it in Docker keeps this memory layer isolated from your application code, making it more robust.

## Quick Start

You can start Qdrant with a single command. This exposes the API and dashboard on port 6333 and the gRPC interface on port 6334.

docker run -d -p 6333:6333 -p 6334:6334 qdrant/qdrant

Once it's running, connect your agent to localhost:6333. Whenever the agent learns something new, embed it and store it in Qdrant. The next time a user asks a question, the agent can search this database for relevant “memories” to include in the prompt, making it truly conversational.
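Here is a minimal sketch using the official qdrant-client package; the collection name and the 384-dimension toy vectors are placeholders, since real vectors would come from your embedding model:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(host="localhost", port=6333)

# Create a collection for the agent's memories. The vector size must
# match your embedding model's output dimension (384 is a placeholder).
client.recreate_collection(
    collection_name="agent_memory",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Store a memory. In practice, `vector` is the embedding of the text.
client.upsert(
    collection_name="agent_memory",
    points=[
        PointStruct(
            id=1,
            vector=[0.05] * 384,
            payload={"text": "User prefers email over Slack."},
        )
    ],
)

# Later: embed the user's question and search for relevant memories.
hits = client.search(
    collection_name="agent_memory",
    query_vector=[0.05] * 384,
    limit=3,
)
for hit in hits:
    print(hit.payload["text"], hit.score)
```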

# 3. n8n: Glue Workflows Together

n8n dashboard

Agent workflows rarely exist in a vacuum. Sometimes you need your agent to check your email, update a row in a Google Sheet, or send a Slack message. While you can write those API calls by hand, the process is often tedious.

n8n is a fair-code workflow automation tool. It allows you to connect various services using a visual UI. By running it locally, you can create complex workflows — like “When an agent gets a sales lead, add them to HubSpot and send a Slack alert” — without writing a single line of integration code.

## Quick Start

To persist your workflows, you need to mount a volume. The following command sets up n8n with SQLite as its database.

docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n

## Why It's Useful for Agentic Developers

You can design your agent to call an n8n webhook URL. The agent simply sends the data, and n8n handles the messy logic of talking to third-party APIs. This separates the “brain” (the LLM) from the “hands” (the integrations).

Access the editor at http://localhost:5678 and start building workflows.
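For example, suppose you have created a workflow whose Webhook trigger node listens on the path new-lead (a hypothetical name you configure yourself); the agent then hands off a sales lead with a single POST:

```python
import requests

# Hypothetical webhook path configured on an n8n Webhook trigger node.
webhook_url = "http://localhost:5678/webhook/new-lead"

# The agent sends structured data; the n8n workflow takes it from here
# (e.g., creating a HubSpot contact and posting a Slack alert).
lead = {"name": "Ada Lovelace", "email": "ada@example.com", "source": "chat"}
requests.post(webhook_url, json=lead, timeout=30)
```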

Read more: n8n Docker Hub

# 4. Firecrawl: Convert Websites into LLM-Ready Data

Firecrawl dashboard

One of the most common tasks for agents is research. However, agents struggle with raw HTML and JavaScript-rendered websites. They need clean, well-formatted text.

Firecrawl is an API service that takes a URL, crawls the site, and converts the content into clean Markdown or structured data. It handles JavaScript rendering and strips boilerplate, like ads and navigation bars, automatically. Running it locally bypasses the usage limits of the cloud version.

## Quick Start

Firecrawl ships with a docker-compose.yml file because it is made up of several services, including the API application, Redis, and Playwright. Clone the repository and bring it up.

git clone https://github.com/mendableai/firecrawl.git
cd firecrawl
docker compose up

## Why It's Useful for Agentic Developers

Give your agent the ability to ingest live web data. When you build a research agent, it can call your local Firecrawl instance to fetch a web page, convert it to clean text, and save it to your Qdrant instance automatically, as sketched below.
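A rough sketch of that call, assuming the self-hosted API listens on port 3002 (the default in Firecrawl's docker-compose setup) and exposes its documented /v1/scrape endpoint; verify both against your local configuration:

```python
import requests

# Scrape a page through the local Firecrawl instance and request
# clean Markdown output suitable for an LLM context window.
response = requests.post(
    "http://localhost:3002/v1/scrape",
    json={"url": "https://example.com", "formats": ["markdown"]},
    timeout=60,
)
markdown = response.json()["data"]["markdown"]
print(markdown[:500])  # pass this text to your agent or embed it into Qdrant
```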

# 5. PostgreSQL with pgvector: Relational and Vector Memory

PostgreSQL dashboard

Sometimes, vector search alone is not enough. You may need a database that can handle structured data, such as user profiles or activity logs, and vector embeddings at the same time. PostgreSQL, with the pgvector extension, lets you do just that.

Instead of running a separate vector database and a separate SQL database, you get the best of both worlds. You can store a user's name and age in one column of a table, store the embedding of their chats in another, and run hybrid queries (e.g., “find chats from users in New York about refunds”).

## Quick Start

The official PostgreSQL image does not include pgvector by default. You need to use a specific image, like the one from the pgvector community.

docker run -d --name postgres-pgvector -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword pgvector/pgvector:pg16

## Why It's Useful for Agentic Developers

This is the end game for stateful agents. Your agent can write its memories and internal state to the same database where your application data lives, ensuring consistency and simplifying your architecture.
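A minimal sketch with psycopg2, using toy 3-dimensional vectors in place of real embeddings, shows a hybrid query in action:

```python
import psycopg2

# Connect to the container started above (POSTGRES_PASSWORD from the run command).
conn = psycopg2.connect(
    host="localhost", port=5432,
    user="postgres", password="mysecretpassword", dbname="postgres",
)
cur = conn.cursor()

# Enable pgvector and mix relational columns with an embedding column.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS chat_memory (
        id SERIAL PRIMARY KEY,
        user_city TEXT,
        message TEXT,
        embedding vector(3)  -- 3 dimensions only for illustration
    );
""")

# In practice the embedding comes from your model; a toy vector is used here.
cur.execute(
    "INSERT INTO chat_memory (user_city, message, embedding) VALUES (%s, %s, %s::vector)",
    ("New York", "I want a refund for my order.", "[0.1,0.9,0.2]"),
)
conn.commit()

# Hybrid query: a SQL filter plus vector-similarity ordering (<-> is L2 distance).
cur.execute(
    """
    SELECT message FROM chat_memory
    WHERE user_city = %s
    ORDER BY embedding <-> %s::vector
    LIMIT 3;
    """,
    ("New York", "[0.1,0.8,0.3]"),
)
print(cur.fetchall())
```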

# Wrapping up

You don't need a huge cloud budget to build great AI agents. The Docker ecosystem offers production-grade alternatives that work well on a developer's laptop.

By adding these five components to your workflow, you arm yourself with:

  • Brain: Ollama for local inference
  • Memory: Qdrant for vector search
  • Hands: n8n for workflow automation
  • Eyes: Firecrawl for web browsing
  • Storage: PostgreSQL with pgvector for structured data

Start your containers, point your LangChain or CrewAI code at localhost, and watch your agents come to life.
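For instance, with the langchain-community integration installed, pointing LangChain at the Ollama container from section 1 takes only a few lines (a sketch, assuming the mistral model is pulled):

```python
from langchain_community.llms import Ollama

# Point LangChain at the local Ollama container instead of a cloud API.
llm = Ollama(base_url="http://localhost:11434", model="mistral")
print(llm.invoke("Summarize why local containers help agent development."))
```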


Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.
