Generative AI

How I built intelligent systems with AutoGen, LangChain, and Hugging Face to demonstrate an effective agentic AI workflow

In this tutorial, we dive into the core of agentic AI by combining LangChain, AutoGen, and Hugging Face in a single, fully functional setup that works without paid APIs. We start by configuring an open-source pipeline, then progress through structured prompting, multi-step workflows, and collaborative agent communication. As we move from LangChain chains to agents, we show how to think, plan, and execute seamlessly, creating autonomous behavior entirely within our local environment.

import warnings
warnings.filterwarnings('ignore')


from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json


print("šŸš€ Loading models...\n")


pipe = pipeline(
   "text2text-generation",
   model="google/flan-t5-base",
   max_length=200,
   temperature=0.7
)


llm = HuggingFacePipeline(pipeline=pipe)
print("āœ“ Models loaded!\n")

We start by setting up our environment and importing all the required libraries. We load Hugging Face's Flan-T5 as our local language model, ensuring it can produce coherent text without any API key. We confirm everything loads successfully, setting the stage for the agentic demos that follow.

def demo_langchain_basics():
   print("="*70)
   print("DEMO 1: LangChain - Intelligent Prompt Chains")
   print("="*70 + "\n")
   prompt = PromptTemplate(
       input_variables=["task"],
       template="Task: {task}\n\nProvide a detailed step-by-step solution:"
   )
   chain = LLMChain(llm=llm, prompt=prompt)
   task = "Create a Python function to calculate fibonacci sequence"
   print(f"Task: {task}\n")
   result = chain.run(task=task)
   print(f"LangChain Response:\n{result}\n")
   print("āœ“ LangChain demo complete\n")


def demo_langchain_multi_step():
   print("="*70)
   print("DEMO 2: LangChain - Multi-Step Reasoning")
   print("="*70 + "\n")
   planner = PromptTemplate(
       input_variables=["goal"],
       template="Break down this goal into 3 steps: {goal}"
   )
   executor = PromptTemplate(
       input_variables=["step"],
       template="Explain how to execute this step: {step}"
   )
   plan_chain = LLMChain(llm=llm, prompt=planner)
   exec_chain = LLMChain(llm=llm, prompt=executor)
   goal = "Build a machine learning model"
   print(f"Goal: {goal}\n")
   plan = plan_chain.run(goal=goal)
   print(f"Plan:\n{plan}\n")
   print("Executing first step...")
   execution = exec_chain.run(step="Collect and prepare data")
   print(f"Execution:\n{execution}\n")
   print("āœ“ Multi-step reasoning complete\n")

We test LangChain's capabilities by building prompt templates that guide our model through tasks. We create both a simple single-step chain and a multi-step planner/executor flow that breaks a complex goal into clear subtasks. We see how LangChain enables systematic reasoning, turning concise instructions into detailed, actionable answers.
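The planner/executor pattern above generalizes to any number of steps: feed the plan back into the executor one step at a time. Here is a minimal sketch of that loop, with a hypothetical `stub_llm` standing in for the `plan_chain.run` / `exec_chain.run` calls so the control flow is visible without loading a model:

```python
# Plan -> execute loop. `stub_llm` is a stand-in for the real
# LLMChain calls; swap in plan_chain.run and exec_chain.run.
def stub_llm(prompt: str) -> str:
    if prompt.startswith("Plan:"):
        return "1. Collect data\n2. Train model\n3. Evaluate results"
    return f"Executed -> {prompt.removeprefix('Do: ')}"

def run_goal(goal: str) -> list:
    plan = stub_llm(f"Plan: {goal}")
    results = []
    for line in plan.splitlines():
        step = line.split(". ", 1)[1]  # strip the "1. " numbering
        results.append(stub_llm(f"Do: {step}"))
    return results

results = run_goal("Build a machine learning model")
```

With a real model the plan lines come back from Flan-T5, so the parsing step would need to be more defensive than this sketch.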

class SimpleAgent:
   def __init__(self, name: str, role: str, llm_pipeline):
       self.name = name
       self.role = role
       self.pipe = llm_pipeline
       self.memory = []
   def process(self, message: str) -> str:
       prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
       response = self.pipe(prompt, max_length=150)[0]['generated_text']
       self.memory.append({"user": message, "agent": response})
       return response
   def __repr__(self):
       return f"Agent({self.name}, role={self.role})"


def demo_simple_agents():
   print("="*70)
   print("DEMO 3: Simple Multi-Agent System")
   print("="*70 + "\n")
   researcher = SimpleAgent("Researcher", "research specialist", pipe)
   coder = SimpleAgent("Coder", "Python developer", pipe)
   reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
   print("Agents created:", researcher, coder, reviewer, "\n")
   task = "Create a function to sort a list"
   print(f"Task: {task}\n")
   print(f"[{researcher.name}] Researching...")
   research = researcher.process(f"What's the best approach to: {task}")
   print(f"Research: {research[:100]}...\n")
   print(f"[{coder.name}] Coding...")
   code = coder.process(f"Write Python code to: {task}")
   print(f"Code: {code[:100]}...\n")
   print(f"[{reviewer.name}] Reviewing...")
   review = reviewer.process(f"Review this approach: {code[:50]}")
   print(f"Review: {review[:100]}...\n")
   print("āœ“ Multi-agent workflow complete\n")

We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging knowledge and building on each other's results. We demonstrate how a coordinated multi-agent workflow can simulate collaboration and division of labor entirely on local hardware.
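The researcher → coder → reviewer hand-off above is, at its core, a sequential pipeline over a shared message. A compact sketch of that pattern, using a hypothetical canned backend in place of the Flan-T5 pipeline so it runs anywhere:

```python
class TinyAgent:
    # Same shape as SimpleAgent, but with a pluggable backend callable.
    def __init__(self, name, role, backend):
        self.name, self.role, self.backend = name, role, backend
        self.memory = []

    def process(self, message):
        reply = self.backend(f"You are a {self.role}. {message}")
        self.memory.append({"user": message, "agent": reply})
        return reply

# Hypothetical stand-in for the HF pipeline; echoes a truncated prompt.
backend = lambda prompt: f"[reply to: {prompt[:30]}...]"

pipeline_agents = [
    TinyAgent("Researcher", "research specialist", backend),
    TinyAgent("Coder", "Python developer", backend),
    TinyAgent("Reviewer", "code reviewer", backend),
]

# Each agent receives the previous agent's output as its input.
msg = "Create a function to sort a list"
for agent in pipeline_agents:
    msg = agent.process(msg)
```

Swapping `backend` for a real model call turns this skeleton into the demo above; the per-agent `memory` list is what lets later steps inspect the conversation.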

def demo_autogen_conceptual():
   print("="*70)
   print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
   print("="*70 + "\n")
   agent_config = {
       "agents": [
           {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
           {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
           {"name": "Executor", "type": "executor", "role": "Runs code"}
       ],
       "workflow": [
           "1. UserProxy receives task",
           "2. Assistant generates solution",
           "3. Executor tests solution",
           "4. Feedback loop until complete"
       ]
   }
   print(json.dumps(agent_config, indent=2))
   print("\nšŸ“ AutoGen Key Features:")
   print("  • Automated agent chat conversations")
   print("  • Code execution capabilities")
   print("  • Human-in-the-loop support")
   print("  • Multi-agent collaboration")
   print("  • Tool/function calling\n")
   print("āœ“ AutoGen concepts explained\n")


class MockLLM:
   def __init__(self):
       self.responses = {
           "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
           "explain": "This is a recursive implementation of the Fibonacci sequence.",
           "review": "The code is correct but could be optimized with memoization.",
           "default": "I understand. Let me help with that task."
       }
   def generate(self, prompt: str) -> str:
       prompt_lower = prompt.lower()
       if "code" in prompt_lower or "function" in prompt_lower:
           return self.responses["code"]
       elif "explain" in prompt_lower:
           return self.responses["explain"]
       elif "review" in prompt_lower:
           return self.responses["review"]
       return self.responses["default"]


def demo_autogen_with_mock():
   print("="*70)
   print("DEMO 5: AutoGen with Custom LLM Backend")
   print("="*70 + "\n")
   mock_llm = MockLLM()
   conversation = [
       ("User", "Create a fibonacci function"),
       ("CodeAgent", mock_llm.generate("write code for fibonacci")),
       ("ReviewAgent", mock_llm.generate("review this code")),
   ]
   print("Simulated AutoGen Multi-Agent Conversation:\n")
   for speaker, message in conversation:
       print(f"[{speaker}]")
       print(f"{message}\n")
   print("āœ“ AutoGen simulation complete\n")

We illustrate AutoGen's core concepts by defining a conceptual configuration of agents and workflows. We then simulate an AutoGen-style conversation using a custom mock LLM that generates plausible canned responses. We see how this pattern allows multiple agents to propose, test, and review ideas collaboratively without relying on any external APIs.
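The workflow listed in Demo 4 (Assistant generates, Executor tests, feedback loop until complete) can be sketched as a plain Python loop. Here both the generator and the test are hypothetical stand-ins in the spirit of `MockLLM`, so the retry mechanics are concrete and runnable:

```python
def assistant(task, feedback):
    # Stand-in for an LLM: returns buggy code first, fixed code after feedback.
    if feedback is None:
        return "def double(x): return x + x + x"   # wrong on purpose
    return "def double(x): return x + x"

def executor(code):
    # Run the candidate and return feedback, or None if the check passes.
    scope = {}
    exec(code, scope)
    return None if scope["double"](2) == 4 else "double(2) should return 4"

feedback, attempts = None, 0
while attempts < 5:
    code = assistant("write double()", feedback)
    feedback = executor(code)
    attempts += 1
    if feedback is None:
        break   # Executor accepted the solution
```

In real AutoGen the same loop is driven by agent-to-agent chat messages rather than return values, but the generate/test/retry shape is the same.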

def demo_hybrid_system():
   print("="*70)
   print("DEMO 6: Hybrid LangChain + Multi-Agent System")
   print("="*70 + "\n")
   reasoning_prompt = PromptTemplate(
       input_variables=["problem"],
       template="Analyze this problem: {problem}\nWhat are the key steps?"
   )
   reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
   planner = SimpleAgent("Planner", "strategic planner", pipe)
   executor = SimpleAgent("Executor", "task executor", pipe)
   problem = "Optimize a slow database query"
   print(f"Problem: {problem}\n")
   print("[LangChain] Analyzing problem...")
   analysis = reasoning_chain.run(problem=problem)
   print(f"Analysis: {analysis[:120]}...\n")
   print(f"[{planner.name}] Creating plan...")
   plan = planner.process(f"Plan how to: {problem}")
   print(f"Plan: {plan[:120]}...\n")
   print(f"[{executor.name}] Executing...")
   result = executor.process(f"Execute: Add database indexes")
   print(f"Result: {result[:120]}...\n")
   print("āœ“ Hybrid system complete\n")


if __name__ == "__main__":
   print("="*70)
   print("šŸ¤– ADVANCED AGENTIC AI TUTORIAL")
   print("AutoGen + LangChain + HuggingFace")
   print("="*70 + "\n")
   demo_langchain_basics()
   demo_langchain_multi_step()
   demo_simple_agents()
   demo_autogen_conceptual()
   demo_autogen_with_mock()
   demo_hybrid_system()
   print("="*70)
   print("šŸŽ‰ TUTORIAL COMPLETE!")
   print("="*70)
   print("\nšŸ“š What You Learned:")
   print("  āœ“ LangChain prompt engineering and chains")
   print("  āœ“ Multi-step reasoning with LangChain")
   print("  āœ“ Building custom multi-agent systems")
   print("  āœ“ AutoGen architecture and concepts")
   print("  āœ“ Combining LangChain + agents")
   print("  āœ“ Using HuggingFace models (no API needed!)")
   print("\nšŸ’” Key Takeaway:")
   print("  You can build powerful agentic AI systems without expensive APIs!")
   print("  Combine LangChain's chains with multi-agent architectures for")
   print("  intelligent, autonomous AI systems.")
   print("="*70 + "\n")

We combine LangChain's reasoning with our simple agentic system to create a hybrid framework. We let LangChain analyze the problem while agents plan and execute the corresponding actions in sequence. We conclude by running all the modules together, showing how open-source tools can be integrated to create adaptive, autonomous AI systems.
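The hybrid flow above reduces to three staged callables: analyze with a chain, plan with one agent, execute with another. A sketch with hypothetical stubs for each stage:

```python
# Each lambda stands in for a stage from demo_hybrid_system:
# the reasoning_chain, the planner agent, and the executor agent.
stages = {
    "analyze": lambda p: f"Key steps for: {p}",
    "plan":    lambda a: f"Plan based on ({a})",
    "execute": lambda pl: f"Done: {pl}",
}

def hybrid(problem):
    # Thread each stage's output into the next, keeping a full trace.
    trace = {"problem": problem}
    trace["analysis"] = stages["analyze"](problem)
    trace["plan"] = stages["plan"](trace["analysis"])
    trace["result"] = stages["execute"](trace["plan"])
    return trace

trace = hybrid("Optimize a slow database query")
```

Keeping the trace dictionary around makes it easy to log or replay each stage, which is handy when swapping the stubs for real chains and agents.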

In conclusion, we see how agentic AI moves from concept to reality in a simple, reproducible way. We combine the depth of LangChain's reasoning with the collaborative power of agents that plan and act independently. The result is a clear demonstration that capable, autonomous AI systems can be built without expensive infrastructure, using only open-source tools, thoughtful design, and local compute.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is technically sound yet easily understandable to a wide audience. The platform draws more than two million monthly views, illustrating its popularity among readers.
