
The Future of AI Agent Communication with ACP

It is exciting to see the GenAI industry starting to move toward standardization. We may be witnessing something similar to the early days of the Internet, when Tim Berners-Lee developed HTTP in 1990, transforming the Internet from a specialized research network into the World Wide Web. In 1993, web browsers such as Mosaic made HTTP so popular that web traffic quickly surpassed other protocols.

One promising step in this direction is MCP (Model Context Protocol), developed by Anthropic. MCP has gained popularity for standardizing how LLMs connect to external tools and data sources. Recently (the first commit dates back to April 2025), a new protocol called ACP (Agent Communication Protocol) appeared. It complements MCP by defining how agents can communicate with each other.

Image by the author

In this article, I would like to discuss what ACP is, why it matters, and how to use it. We will build a multi-agent AI system that can talk to data.

ACP overview

Before jumping into practice, let's take a moment to understand the idea behind ACP and how it works under the hood.

ACP (Agent Communication Protocol) is an open protocol designed to address the growing challenge of connecting AI agents, applications, and humans. The current GenAI landscape is heavily fragmented, with different teams building agents using various frameworks, often tightly coupled to specific technologies. This fragmentation slows innovation and makes it difficult for agents to work together effectively.

To address this challenge, ACP aims to standardize communication between agents through RESTful APIs. The protocol is framework- and technology-agnostic, which means it can be used with any agentic framework, such as LangChain, CrewAI, smolagents, or others. This flexibility makes it easier to build systems where agents can collaborate, regardless of how they were originally developed.
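Because ACP exposes agents over plain REST, invoking an agent is ultimately just an HTTP request carrying a small JSON document. The sketch below is only an illustration of that idea; the exact field names (agent_name, parts, content_type) are assumptions modeled on the ACP SDK usage shown later in this article, not an authoritative spec.

```python
import json

# Hypothetical shape of an ACP run request: the agent to invoke
# plus the input messages, serialized as plain JSON over HTTP.
run_request = {
    "agent_name": "sql_agent",
    "input": [
        {
            "parts": [
                {
                    "content": "How many customers did we have in May 2024?",
                    "content_type": "text/plain",
                }
            ]
        }
    ],
}

body = json.dumps(run_request)
print(body)
```

Any HTTP client in any language can produce such a request, which is exactly what makes the protocol framework-agnostic.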

The protocol is being developed as an open standard under the Linux Foundation, alongside BeeAI, its reference implementation. One of the key points the team emphasizes is that ACP is openly governed and shaped by the community rather than by a single vendor.

What benefits can ACP bring?

  • Agents become easily replaceable. At the current pace of innovation in the GenAI space, new cutting-edge technologies appear all the time. ACP enables agents to be swapped in production seamlessly, reducing maintenance costs and making it easy to adopt the most advanced tools as they become available.
  • It enables collaboration between agents built on different frameworks. As we know from human organizations, specialization often leads to better results. The same applies to agentic systems. A team of agents, each focused on a specific task (such as writing Python code or doing web research), usually outperforms a single agent trying to do everything. ACP makes it possible for specialized agents to communicate and work together, even if they are built using different frameworks or technologies.
  • New opportunities for collaboration. A standardized way for agents to interact makes it easier for agents from different teams or even companies to work together, opening up new kinds of partnerships. Imagine a scenario where your smart home agent spots an anomaly, determines that the heating system has failed, and contacts your utility provider's agent to check whether there is an outage. Finally, it messages the technician's agent, coordinating with your Google Calendar agent to make sure you are at home when they arrive. It may sound futuristic, but with ACP, it may be closer than you think.

We covered what ACP is and why it matters. The protocol looks really promising. So let's try it out and see how it works in practice.

ACP in action

Let's try ACP with the classic “talk to your data” use case. To showcase ACP's framework-agnostic design, we will create ACP agents using different frameworks:

  • A SQL agent built with CrewAI to write SQL queries
  • A DB agent built with Hugging Face smolagents to execute those queries.

I won't go into the details of each framework here, but if you're curious, I've written in-depth articles about both of them:
– “Multi AI Agent Systems 101” about CrewAI,
– “Code Agents: The Future of Agentic AI” about smolagents.

Creating the DB agent

Let's start with the DB agent. As I mentioned before, ACP can work alongside MCP. Therefore, I will reuse the tools from my analyst toolkit MCP server. You can find the implementation of the MCP server on GitHub. For a deeper dive with step-by-step instructions, check out my previous article, where I cover MCP in detail.

The code itself is straightforward: we start an ACP server and use the @server.agent() decorator to define our agent's function. This function expects a list of messages as input and returns a generator.

from collections.abc import AsyncGenerator
from acp_sdk.models import Message, MessagePart
from acp_sdk.server import Context, RunYield, RunYieldResume, Server
from smolagents import LiteLLMModel, ToolCallingAgent, ToolCollection
import logging 
from dotenv import load_dotenv
from mcp import StdioServerParameters

load_dotenv() 

# initialise ACP server
server = Server()

# initialise LLM
model = LiteLLMModel(
  model_id="openai/gpt-4o-mini",  
  max_tokens=2048
)

# define config for MCP server to connect
server_parameters = StdioServerParameters(
  command="uv",
  args=[
      "--directory",
      "/Users/marie/Documents/github/mcp-analyst-toolkit/src/mcp_server",
      "run",
      "server.py"
  ],
  env=None
)

@server.agent()
async def db_agent(input: list[Message], context: Context) -> AsyncGenerator[RunYield, RunYieldResume]:
  "This agent can execute SQL queries against the ClickHouse database."
  with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
    agent = ToolCallingAgent(tools=[*tool_collection.tools], model=model)
    question = input[0].parts[0].content
    response = agent.run(question)

  yield Message(parts=[MessagePart(content=str(response))])

if __name__ == "__main__":
  server.run(port=8001)

We also need to set up the Python environment. I will be using the uv package manager for this.

uv init --name acp-sql-agent
uv venv
source .venv/bin/activate
uv add acp-sdk "smolagents[litellm]" python-dotenv mcp "smolagents[mcp]" ipykernel

Then, we can run the agent using the following command.

uv run db_agent.py

If everything is set up correctly, you will see the server running on port 8001. To verify that it works as expected, we will need an ACP client. Bear with me, we will test it shortly.

Creating the SQL agent

Next, let's build the SQL agent that will formulate queries. We will use the CrewAI framework for this. Our agent will consult a knowledge base of query examples to generate answers, so we will equip it with a RAG (Retrieval-Augmented Generation) tool.
First, we will initialize the RAG tool and load the reference file clickhouse_queries.txt. Next, we will build a CrewAI agent, defining its role, goal, and backstory. Finally, we will create a task and wrap everything together in a crew.
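As a side note, chunk_size and chunk_overlap control how the reference file is split into overlapping windows before embedding. The helper below is a simplified stdlib sketch of that idea, not RagTool's actual implementation:

```python
def split_into_chunks(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split text into windows of chunk_size characters; each window
    starts chunk_overlap characters before the previous one ends."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

# 3000 characters with chunk_size=1200 and chunk_overlap=200
chunks = split_into_chunks("a" * 3000, chunk_size=1200, chunk_overlap=200)
print(len(chunks))  # 3
```

The overlap ensures that a sentence falling on a chunk boundary is still fully contained in at least one chunk, which generally improves retrieval quality.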

from crewai import Crew, Task, Agent, LLM
from crewai.tools import BaseTool
from crewai_tools import RagTool
from collections.abc import AsyncGenerator
from acp_sdk.models import Message, MessagePart
from acp_sdk.server import RunYield, RunYieldResume, Server
import json
import os
from datetime import datetime
from typing import Type
from pydantic import BaseModel, Field

import nest_asyncio
nest_asyncio.apply()

# config for RAG tool
config = {
  "llm": {
    "provider": "openai",
    "config": {
      "model": "gpt-4o-mini",
    }
  },
  "embedding_model": {
    "provider": "openai",
    "config": {
      "model": "text-embedding-ada-002"
    }
  }
}

# initialise tool
rag_tool = RagTool(
  config=config,  
  chunk_size=1200,       
  chunk_overlap=200)

rag_tool.add("clickhouse_queries.txt")

# initialise ACP server
server = Server()

# initialise LLM
llm = LLM(model="openai/gpt-4o-mini", max_tokens=2048)

@server.agent()
async def sql_agent(input: list[Message]) -> AsyncGenerator[RunYield, RunYieldResume]:
  "This agent knows the database schema and can return SQL queries to answer questions about the data."

  # create agent
  sql_agent = Agent(
    role="Senior SQL analyst", 
    goal="Write SQL queries to answer questions about the e-commerce analytics database.",
    backstory="""
You are an expert in ClickHouse SQL queries with over 10 years of experience. You are familiar with the e-commerce analytics database schema and can write optimized queries to extract insights.
        
## Database Schema

You are working with an e-commerce analytics database containing the following tables:

### Table: ecommerce.users 
**Description:** Customer information for the online shop
**Primary Key:** user_id
**Fields:** 
- user_id (Int64) - Unique customer identifier (e.g., 1000004, 3000004)
- country (String) - Customer's country of residence (e.g., "Netherlands", "United Kingdom")
- is_active (Int8) - Customer status: 1 = active, 0 = inactive
- age (Int32) - Customer age in full years (e.g., 31, 72)

### Table: ecommerce.sessions 
**Description:** User session data and transaction records
**Primary Key:** session_id
**Foreign Key:** user_id (references ecommerce.users.user_id)
**Fields:** 
- user_id (Int64) - Customer identifier linking to users table (e.g., 1000004, 3000004)
- session_id (Int64) - Unique session identifier (e.g., 106, 1023)
- action_date (Date) - Session start date (e.g., "2021-01-03", "2024-12-02")
- session_duration (Int32) - Session duration in seconds (e.g., 125, 49)
- os (String) - Operating system used (e.g., "Windows", "Android", "iOS", "MacOS")
- browser (String) - Browser used (e.g., "Chrome", "Safari", "Firefox", "Edge")
- is_fraud (Int8) - Fraud indicator: 1 = fraudulent session, 0 = legitimate
- revenue (Float64) - Purchase amount in USD (0.0 for non-purchase sessions, >0 for purchases)

## ClickHouse-Specific Guidelines

1. **Use ClickHouse-optimized functions:**
   - uniqExact() for precise unique counts
   - uniqExactIf() for conditional unique counts
   - quantile() functions for percentiles
   - Date functions: toStartOfMonth(), toStartOfYear(), today()

2. **Query formatting requirements:**
   - Always end queries with "format TabSeparatedWithNames"
   - Use meaningful column aliases
   - Use proper JOIN syntax when combining tables
   - Wrap date literals in quotes (e.g., '2024-01-01')

3. **Performance considerations:**
   - Use appropriate WHERE clauses to filter data
   - Consider using HAVING for post-aggregation filtering
   - Use LIMIT when finding top/bottom results

4. **Data interpretation:**
   - revenue > 0 indicates a purchase session
   - revenue = 0 indicates a browsing session without purchase
   - is_fraud = 1 sessions should typically be excluded from business metrics unless specifically analyzing fraud

## Response Format
Provide only the SQL query as your answer. Include brief reasoning in comments if the query logic is complex. 
        """,

    verbose=True,
    allow_delegation=False,
    llm=llm,
    tools=[rag_tool], 
    max_retry_limit=5
  )
    
  # create task
  task1 = Task(
    description=input[0].parts[0].content,
    expected_output = "Reliable SQL query that answers the question based on the e-commerce analytics database schema.",
    agent=sql_agent
  )

  # create crew
  crew = Crew(agents=[sql_agent], tasks=[task1], verbose=True)
  
  # execute agent
  task_output = await crew.kickoff_async()
  yield Message(parts=[MessagePart(content=str(task_output))])

if __name__ == "__main__":
  server.run(port=8002)

We also need to add the missing packages to uv before starting the server.

uv add crewai crewai_tools nest-asyncio
uv run sql_agent.py

Now the second agent is running on port 8002. With both servers up and running, it's time to check whether they work properly.

Running ACP agents with a client

Now that our agents are ready for testing, we will use an ACP client to run them. To do that, we need to initialize the client with the server URL and use its run_sync function, specifying the agent name and input.

import os
import nest_asyncio
nest_asyncio.apply()
from acp_sdk.client import Client
import asyncio

# Set your OpenAI API key here (or use environment variable)
# os.environ["OPENAI_API_KEY"] = "your-api-key-here"

async def example() -> None:
  async with Client(base_url="http://localhost:8001") as client1:
    run1 = await client1.run_sync(
      agent="db_agent", input="select 1 as test"
    )
    print(' DB agent response:')
    print(run1.output[0].parts[0].content)

  async with Client(base_url="http://localhost:8002") as client2:
    run2 = await client2.run_sync(
      agent="sql_agent", input="How many customers did we have in May 2024?" 
    )
    print(' SQL agent response:')
    print(run2.output[0].parts[0].content)

if __name__ == "__main__":
  asyncio.run(example())

#  DB agent response:
# 1
#  SQL agent response:
# ```
# SELECT COUNT(DISTINCT user_id) AS total_customers
# FROM ecommerce.users
# WHERE is_active = 1
# AND user_id IN (
#     SELECT DISTINCT user_id
#     FROM ecommerce.sessions
#     WHERE action_date >= '2024-05-01' AND action_date < '2024-06-01'
# ) 
# format TabSeparatedWithNames

We've received the expected results from both servers, so it looks like everything works as intended.

💡 Tip: You can see the full logs in the terminal where each server is running.

Chaining agents sequentially

To answer real customer questions, we need both agents to work together. Let's chain them: we will first call the SQL agent and then pass the generated SQL query to the DB agent for execution.

Image by the author

Here is the client code for chaining the agents. It is similar to what we used to test each server. The main difference is that we now pass the output from the SQL agent directly into the DB agent.

async def example() -> None: 
  async with Client(base_url="http://localhost:8001") as db_agent, Client(base_url="http://localhost:8002") as sql_agent:
    question = 'How many customers did we have in May 2024?'
    sql_query = await sql_agent.run_sync(
      agent="sql_agent", input=question
    )
    print('SQL query generated by SQL agent:')
    print(sql_query.output[0].parts[0].content)
    
    answer = await db_agent.run_sync(
      agent="db_agent", input=sql_query.output[0].parts[0].content
    )
    print('Answer from DB agent:')
    print(answer.output[0].parts[0].content)

asyncio.run(example())

Everything worked well, and we got the expected result.

SQL query generated by SQL agent:
Thought: I need to craft a SQL query to count the number of unique customers 
who were active in May 2024 based on their sessions.

```sql
SELECT COUNT(DISTINCT u.user_id) AS active_customers
FROM ecommerce.users AS u
JOIN ecommerce.sessions AS s ON u.user_id = s.user_id
WHERE u.is_active = 1
AND s.action_date >= '2024-05-01' 
AND s.action_date < '2024-06-01'
FORMAT TabSeparatedWithNames
```
Answer from DB agent:
234544

Router pattern

In some use cases, the workflow is static and well-defined, and we can chain the agents directly as we did above. However, we often expect LLM agents to reason independently and decide which tools or agents to use to achieve their goal. To handle such cases, we will implement the router pattern with ACP: we will build a new agent (the orchestrator) that can delegate tasks to the DB and SQL agents.

Image by the author
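Stripped of frameworks, the router pattern boils down to a dispatcher that decides which agent to call for a given task. In the minimal stdlib sketch below, the agents are just placeholder async functions and a hard-coded rule stands in for the LLM's routing decision:

```python
import asyncio

# Stand-ins for our ACP agents (illustrative only).
async def sql_agent(question: str) -> str:
    return f"SELECT ... -- query for: {question}"

async def db_agent(query: str) -> str:
    return "234544"

async def orchestrator(task: str) -> str:
    # A real orchestrator would let an LLM pick the tool;
    # here a trivial rule plays that role.
    if task.strip().upper().startswith("SELECT"):
        return await db_agent(task)  # already SQL, execute directly
    query = await sql_agent(task)    # otherwise, generate SQL first
    return await db_agent(query)

result = asyncio.run(orchestrator("How many customers did we have in May 2024?"))
print(result)  # 234544
```

The BeeAI-based implementation below follows the same shape, with the routing decision made by a ReAct agent instead of an if-statement.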

We will start by adding beeai_framework, the reference implementation, to our package manager.

uv add beeai_framework

To enable our orchestrator to call the SQL and DB agents, we will wrap them as tools. This way, the orchestrator can treat them like any other tool and invoke them when needed.

Let's start with the SQL agent. It's mostly boilerplate code: we describe the input and output fields using Pydantic and call the agent in the _run function.

from pydantic import BaseModel, Field

from acp_sdk import Message
from acp_sdk.client import Client
from acp_sdk.models import MessagePart
from beeai_framework.tools.tool import Tool
from beeai_framework.tools.types import ToolRunOptions
from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
from beeai_framework.tools import ToolOutput
from beeai_framework.utils.strings import to_json

# helper function
async def run_agent(agent: str, input: str) -> list[Message]:
  async with Client(base_url="http://localhost:8002") as client:
    run = await client.run_sync(
      agent=agent, input=[Message(parts=[MessagePart(content=input, content_type="text/plain")])]
    )

  return run.output

class SqlQueryToolInput(BaseModel):
  question: str = Field(description="The question to answer using SQL queries against the e-commerce analytics database")

class SqlQueryToolResult(BaseModel):
  sql_query: str = Field(description="The SQL query that answers the question")

class SqlQueryToolOutput(ToolOutput):
  result: SqlQueryToolResult = Field(description="SQL query result")

  def get_text_content(self) -> str:
    return to_json(self.result)

  def is_empty(self) -> bool:
    return self.result.sql_query.strip() == ""

  def __init__(self, result: SqlQueryToolResult) -> None:
    super().__init__()
    self.result = result

class SqlQueryTool(Tool[SqlQueryToolInput, ToolRunOptions, SqlQueryToolOutput]):
  name = "SQL Query Generator"
  description = "Generate SQL queries to answer questions about the e-commerce analytics database"
  input_schema = SqlQueryToolInput

  def _create_emitter(self) -> Emitter:
    return Emitter.root().child(
        namespace=["tool", "sql_query"],
        creator=self,
    )

  async def _run(self, input: SqlQueryToolInput, options: ToolRunOptions | None, context: RunContext) -> SqlQueryToolOutput:
    result = await run_agent("sql_agent", input.question)
    return SqlQueryToolOutput(result=SqlQueryToolResult(sql_query=str(result[0])))

Let's follow the same approach for the DB agent.

from pydantic import BaseModel, Field

from acp_sdk import Message
from acp_sdk.client import Client
from acp_sdk.models import MessagePart
from beeai_framework.tools.tool import Tool
from beeai_framework.tools.types import ToolRunOptions
from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
from beeai_framework.tools import ToolOutput
from beeai_framework.utils.strings import to_json

async def run_agent(agent: str, input: str) -> list[Message]:
  async with Client(base_url="http://localhost:8001") as client:
    run = await client.run_sync(
      agent=agent, input=[Message(parts=[MessagePart(content=input, content_type="text/plain")])]
    )

  return run.output

class DatabaseQueryToolInput(BaseModel):
  query: str = Field(description="The SQL query or question to execute against the ClickHouse database")

class DatabaseQueryToolResult(BaseModel):
  result: str = Field(description="The result of the database query execution")

class DatabaseQueryToolOutput(ToolOutput):
  result: DatabaseQueryToolResult = Field(description="Database query execution result")

  def get_text_content(self) -> str:
    return to_json(self.result)

  def is_empty(self) -> bool:
    return self.result.result.strip() == ""

  def __init__(self, result: DatabaseQueryToolResult) -> None:
    super().__init__()
    self.result = result

class DatabaseQueryTool(Tool[DatabaseQueryToolInput, ToolRunOptions, DatabaseQueryToolOutput]):
  name = "Database Query Executor"
  description = "Execute SQL queries and questions against the ClickHouse database"
  input_schema = DatabaseQueryToolInput

  def _create_emitter(self) -> Emitter:
    return Emitter.root().child(
      namespace=["tool", "database_query"],
      creator=self,
    )

  async def _run(self, input: DatabaseQueryToolInput, options: ToolRunOptions | None, context: RunContext) -> DatabaseQueryToolOutput:
    result = await run_agent("db_agent", input.query)
    return DatabaseQueryToolOutput(result=DatabaseQueryToolResult(result=str(result[0])))

Now let's create the main agent that will invoke the others as tools. We will use the ReAct agent implementation from the BeeAI framework for the orchestrator. I've also added extra logging to the tool wrappers around our DB and SQL agents, so we can see the full details of all calls.

from collections.abc import AsyncGenerator

from acp_sdk import Message
from acp_sdk.models import MessagePart
from acp_sdk.server import Context, Server
from beeai_framework.backend.chat import ChatModel
from beeai_framework.agents.react import ReActAgent
from beeai_framework.memory import TokenMemory
from beeai_framework.utils.dicts import exclude_none
from sql_tool import SqlQueryTool
from db_tool import DatabaseQueryTool
import os
import logging

# Configure logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Only add handler if it doesn't already exist
if not logger.handlers:
  handler = logging.StreamHandler()
  handler.setLevel(logging.INFO)
  formatter = logging.Formatter('ORCHESTRATOR - %(levelname)s - %(message)s')
  handler.setFormatter(formatter)
  logger.addHandler(handler)

# Prevent propagation to avoid duplicate messages
logger.propagate = False

# Wrap our tools with additional logging for traceability
class LoggingSqlQueryTool(SqlQueryTool):
  async def _run(self, input, options, context):
    logger.info(f"🔍 SQL Tool Request: {input.question}")
    result = await super()._run(input, options, context)
    logger.info(f"📝 SQL Tool Response: {result.result.sql_query}")
    return result

class LoggingDatabaseQueryTool(DatabaseQueryTool):
  async def _run(self, input, options, context):
    logger.info(f"🗄️ Database Tool Request: {input.query}")
    result = await super()._run(input, options, context)
    logger.info(f"📊 Database Tool Response: {result.result.result}...")  
    return result

server = Server()

@server.agent(name="orchestrator")
async def orchestrator(input: list[Message], context: Context) -> AsyncGenerator:
  logger.info(f"🚀 Orchestrator started with input: {input[0].parts[0].content}")
    
  llm = ChatModel.from_name("openai:gpt-4o-mini")

  agent = ReActAgent(
    llm=llm,
    tools=[LoggingSqlQueryTool(), LoggingDatabaseQueryTool()],
    templates={
        "system": lambda template: template.update(
          defaults=exclude_none({
            "instructions": """
                You are an expert data analyst assistant that helps users analyze e-commerce data.
                
                You have access to two tools:
                1. SqlQueryTool - Use this to generate SQL queries from natural language questions about the e-commerce database
                2. DatabaseQueryTool - Use this to execute SQL queries directly against the ClickHouse database
                
                The database contains two main tables:
                - ecommerce.users (customer information)
                - ecommerce.sessions (user sessions and transactions)
                
                When a user asks a question:
                1. First, use SqlQueryTool to generate the appropriate SQL query
                2. Then, use DatabaseQueryTool to execute that query and get the results
                3. Present the results in a clear, understandable format
                
                Always provide context about what the data shows and any insights you can derive.
            """,
            "role": "system"
        })
      )
    }, memory=TokenMemory(llm))

  prompt = (str(input[0]))
  logger.info(f"🤖 Running ReAct agent with prompt: {prompt}")
  
  response = await agent.run(prompt)
  
  logger.info(f"✅ Orchestrator completed. Response length: {len(response.result.text)} characters")
  logger.info(f"📤 Final response: {response.result.text}...")  

  yield Message(parts=[MessagePart(content=response.result.text)])

if __name__ == "__main__":
  server.run(port=8003)

Now, as before, we can run the orchestrator agent using the ACP client and see the result.

async def router_example() -> None:
  async with Client(base_url="http://localhost:8003") as orchestrator_client:
    question = 'How many customers did we have in May 2024?'
    response = await orchestrator_client.run_sync(
      agent="orchestrator", input=question
    )
    print('Orchestrator response:')
    
    # Debug: Print the response structure
    print(f"Response type: {type(response)}")
    print(f"Response output length: {len(response.output) if hasattr(response, 'output') else 'No output attribute'}")
    
    if response.output and len(response.output) > 0:
      print(response.output[0].parts[0].content)
    else:
      print("No response received from orchestrator")
      print(f"Full response: {response}")

asyncio.run(router_example())
# In May 2024, we had 234,544 unique active customers.

Our system worked well, and we got the expected result. Great job!

Let's see what happens under the hood by looking at the logs from the orchestrator server. The ReAct agent first invoked the SQL agent via the SQL tool. It then used the generated query to call the DB agent. Finally, it produced the final answer.

ORCHESTRATOR - INFO - 🚀 Orchestrator started with input: How many customers did we have in May 2024?
ORCHESTRATOR - INFO - 🤖 Running ReAct agent with prompt: How many customers did we have in May 2024?
ORCHESTRATOR - INFO - 🔍 SQL Tool Request: How many customers did we have in May 2024?

ORCHESTRATOR - INFO - 📝 SQL Tool Response: 
SELECT COUNT(uniqExact(u.user_id)) AS active_customers
FROM ecommerce.users AS u
JOIN ecommerce.sessions AS s ON u.user_id = s.user_id
WHERE u.is_active = 1 
  AND s.action_date >= '2024-05-01' 
  AND s.action_date < '2024-06-01'
FORMAT TabSeparatedWithNames

ORCHESTRATOR - INFO - 🗄️ Database Tool Request: 
SELECT COUNT(uniqExact(u.user_id)) AS active_customers
FROM ecommerce.users AS u
JOIN ecommerce.sessions AS s ON u.user_id = s.user_id
WHERE u.is_active = 1 
  AND s.action_date >= '2024-05-01' 
  AND s.action_date < '2024-06-01'
FORMAT TabSeparatedWithNames

ORCHESTRATOR - INFO - 📊 Database Tool Response: 234544...
ORCHESTRATOR - INFO - ✅ Orchestrator completed. Response length: 52 characters
ORCHESTRATOR - INFO - 📤 Final response: In May 2024, we had 234,544 unique active customers....

Thanks to the additional logging we added, we can now trace all the calls made by the orchestrator.
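The wrap-and-log approach used for the tools is not BeeAI-specific. Here is a minimal stdlib sketch (all names are illustrative) of wrapping any async tool call with request and response logging:

```python
import asyncio
import functools
import logging

logging.basicConfig(level=logging.INFO, format="ORCHESTRATOR - %(levelname)s - %(message)s")
logger = logging.getLogger("orchestrator")

def log_tool_calls(name: str):
    """Decorator that logs the request and response of an async tool."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            logger.info("%s request: %s", name, args)
            result = await fn(*args, **kwargs)
            logger.info("%s response: %s", name, result)
            return result
        return wrapper
    return decorator

@log_tool_calls("SQL Tool")
async def generate_query(question: str) -> str:
    # Placeholder for the real agent call.
    return "SELECT 1"

print(asyncio.run(generate_query("test")))  # SELECT 1
```

Keeping the logging in a thin wrapper, rather than inside the tools themselves, means the same instrumentation can be reused for any agent we add later.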

You can find the complete code on GitHub.

Summary

In this article, we explored the ACP protocol and what it can do. Here's a quick recap of the main points:

  • ACP (Agent Communication Protocol) is an open protocol that aims to standardize communication between agents. It complements MCP, which facilitates interactions between agents and external tools and data sources.
  • ACP follows a client-server architecture and uses RESTful APIs.
  • The protocol is framework- and technology-agnostic, allowing you to swap agents seamlessly and create new interactions between them.
  • With ACP, you can implement a wide range of agent interactions, from simple sequential chaining to the router pattern, where an orchestrator delegates tasks to other agents.

Thank you for reading. I hope this article was insightful. Remember Einstein's advice: “The important thing is not to stop questioning. Curiosity has its own reason for existing.” May your curiosity lead you to your next great insight.

Reference

This article was inspired by “ACP: Agent Communication Protocol,” a short course from DeepLearning.AI.
