    How I Built an Intelligent Multi-Agent System with AutoGen, LangChain, and Hugging Face to Demonstrate Practical Agentic AI Workflows

    By Naveed Ahmad · 22/10/2025 · 7 Mins Read

    In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, fully functional framework that runs without paid APIs. We begin by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we experience how reasoning, planning, and execution can seamlessly blend to form autonomous, intelligent behavior, entirely within our control and environment. Check out the FULL CODES here.

    import warnings
    warnings.filterwarnings('ignore')
    
    
    from typing import List, Dict
    import autogen
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain
    from langchain_community.llms import HuggingFacePipeline
    from transformers import pipeline
    import json
    
    
    print("🚀 Loading models...\n")
    
    
    # Local FLAN-T5 model served through a Hugging Face text2text pipeline
    pipe = pipeline(
       "text2text-generation",
       model="google/flan-t5-base",
       max_length=200,
       temperature=0.7
    )
    
    
    llm = HuggingFacePipeline(pipeline=pipe)
    print("✓ Models loaded!\n")

    We start by setting up our environment and bringing in all the necessary libraries. We initialize a Hugging Face FLAN-T5 pipeline as our local language model, ensuring it can generate coherent, contextually rich text. We confirm that everything loads successfully, laying the groundwork for the agentic experiments that follow.
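    If downloading google/flan-t5-base is not an option (offline machine, restricted network), a stand-in object with the same call signature lets the rest of the tutorial run end-to-end. The FakePipeline class below is a hypothetical helper, not part of transformers; it only mimics the pipeline's list-of-dicts return shape:

```python
class FakePipeline:
    """Offline stand-in mimicking a transformers text2text pipeline call."""
    def __call__(self, prompt, max_length=200, **kwargs):
        # Echo a canned completion; a real pipeline would generate text here.
        return [{"generated_text": f"(stub) response to: {prompt[:40]}"}]

offline_pipe = FakePipeline()
print(offline_pipe("Task: sort a list")[0]["generated_text"])
# → (stub) response to: Task: sort a list
```

    Any object that is callable and returns `[{"generated_text": ...}]` can be passed wherever `pipe` is used below.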

    def demo_langchain_basics():
       print("="*70)
       print("DEMO 1: LangChain - Intelligent Prompt Chains")
       print("="*70 + "\n")
       prompt = PromptTemplate(
           input_variables=["task"],
           template="Task: {task}\n\nProvide a detailed step-by-step solution:"
       )
       chain = LLMChain(llm=llm, prompt=prompt)
       task = "Create a Python function to calculate fibonacci sequence"
       print(f"Task: {task}\n")
       result = chain.run(task=task)
       print(f"LangChain Response:\n{result}\n")
       print("✓ LangChain demo complete\n")
    
    
    def demo_langchain_multi_step():
       print("="*70)
       print("DEMO 2: LangChain - Multi-Step Reasoning")
       print("="*70 + "\n")
       planner = PromptTemplate(
           input_variables=["goal"],
           template="Break down this goal into 3 steps: {goal}"
       )
       executor = PromptTemplate(
           input_variables=["step"],
           template="Explain how to execute this step: {step}"
       )
       plan_chain = LLMChain(llm=llm, prompt=planner)
       exec_chain = LLMChain(llm=llm, prompt=executor)
       goal = "Build a machine learning model"
       print(f"Goal: {goal}\n")
       plan = plan_chain.run(goal=goal)
       print(f"Plan:\n{plan}\n")
       print("Executing first step...")
       execution = exec_chain.run(step="Collect and prepare data")
       print(f"Execution:\n{execution}\n")
       print("✓ Multi-step reasoning complete\n")

    We explore LangChain's capabilities by setting up intelligent prompt templates that allow our model to reason through tasks. We build both a simple one-step chain and a multi-step reasoning flow that breaks complex goals into clear subtasks. We observe how LangChain enables structured thinking and turns plain instructions into detailed, actionable responses.
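    The plan-then-execute pattern above does not depend on LangChain itself; the following framework-free sketch shows the same control flow with Python's string.Template, using a stub model whose canned replies are illustrative (the stub and its outputs are assumptions, not from the tutorial code):

```python
from string import Template

def stub_model(prompt: str) -> str:
    # Stand-in for the FLAN-T5 pipeline: returns a canned plan or explanation.
    if prompt.startswith("Break down"):
        return "1. Collect data  2. Train model  3. Evaluate results"
    return "Gather a labeled dataset and clean it before training."

# Two prompt templates, mirroring the planner/executor chains above
planner = Template("Break down this goal into 3 steps: $goal")
executor = Template("Explain how to execute this step: $step")

plan = stub_model(planner.substitute(goal="Build a machine learning model"))
execution = stub_model(executor.substitute(step=plan.split("2.")[0]))
print(plan)
print(execution)
```

    The point of the sketch is that a "chain" is just prompt formatting plus a model call; LangChain adds convenience, not magic.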

    class SimpleAgent:
       def __init__(self, name: str, role: str, llm_pipeline):
           self.name = name
           self.role = role
           self.pipe = llm_pipeline
           self.memory = []
       def process(self, message: str) -> str:
           prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
           response = self.pipe(prompt, max_length=150)[0]['generated_text']
           self.memory.append({"user": message, "agent": response})
           return response
       def __repr__(self):
           return f"Agent({self.name}, role={self.role})"
    
    
    def demo_simple_agents():
       print("="*70)
       print("DEMO 3: Simple Multi-Agent System")
       print("="*70 + "\n")
       researcher = SimpleAgent("Researcher", "research specialist", pipe)
       coder = SimpleAgent("Coder", "Python developer", pipe)
       reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
       print("Agents created:", researcher, coder, reviewer, "\n")
       task = "Create a function to sort a list"
       print(f"Task: {task}\n")
       print(f"[{researcher.name}] Researching...")
       research = researcher.process(f"What is the best approach to: {task}")
       print(f"Research: {research[:100]}...\n")
       print(f"[{coder.name}] Coding...")
       code = coder.process(f"Write Python code to: {task}")
       print(f"Code: {code[:100]}...\n")
       print(f"[{reviewer.name}] Reviewing...")
       review = reviewer.process(f"Review this approach: {code[:50]}")
       print(f"Review: {review[:100]}...\n")
       print("✓ Multi-agent workflow complete\n")

    We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging information and building upon one another's outputs. We witness how a coordinated multi-agent workflow can emulate teamwork, creativity, and self-organization in an automated setting.
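    The hand-off between agents can be made explicit with a small coordinator that threads one task through an ordered list of agents, feeding each agent the previous agent's output. Everything here is a hypothetical sketch: StubAgent mirrors the shape of SimpleAgent, stub_pipe and run_pipeline stand in for the Hugging Face pipeline and the manual wiring in the demo:

```python
def stub_pipe(prompt, max_length=150):
    # Stand-in for the shared FLAN-T5 pipeline, same return shape.
    return [{"generated_text": f"handled: {prompt[:60]}"}]

class StubAgent:
    """Trimmed copy of the tutorial's agent shape, for the sketch only."""
    def __init__(self, name, role, llm_pipeline):
        self.name, self.role, self.pipe = name, role, llm_pipeline
    def process(self, message):
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        return self.pipe(prompt)[0]["generated_text"]

def run_pipeline(agents, task):
    # Thread one task through the agents: each receives the previous output.
    message, transcript = task, []
    for agent in agents:
        message = agent.process(message)
        transcript.append((agent.name, message))
    return transcript

team = [StubAgent(n, r, stub_pipe) for n, r in
        [("Researcher", "research specialist"),
         ("Coder", "Python developer"),
         ("Reviewer", "code reviewer")]]
transcript = run_pipeline(team, "Create a function to sort a list")
for name, msg in transcript:
    print(f"[{name}] {msg}")
```

    Passing the real `pipe` instead of `stub_pipe` would reproduce the demo's researcher → coder → reviewer relay in a single loop.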

    def demo_autogen_conceptual():
       print("="*70)
       print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
       print("="*70 + "\n")
       agent_config = {
           "agents": [
               {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
               {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
               {"name": "Executor", "type": "executor", "role": "Runs code"}
           ],
           "workflow": [
               "1. UserProxy receives task",
               "2. Assistant generates solution",
               "3. Executor tests solution",
               "4. Feedback loop until complete"
           ]
       }
       print(json.dumps(agent_config, indent=2))
       print("\n📝 AutoGen Key Features:")
       print("  • Automated agent chat conversations")
       print("  • Code execution capabilities")
       print("  • Human-in-the-loop support")
       print("  • Multi-agent collaboration")
       print("  • Tool/function calling\n")
       print("✓ AutoGen concepts explained\n")
    
    
    class MockLLM:
       def __init__(self):
           self.responses = {
               "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
               "explain": "This is a recursive implementation of the Fibonacci sequence.",
               "review": "The code is correct but could be optimized with memoization.",
               "default": "I understand. Let me help with that task."
           }
       def generate(self, prompt: str) -> str:
           prompt_lower = prompt.lower()
           if "code" in prompt_lower or "function" in prompt_lower:
               return self.responses["code"]
           elif "explain" in prompt_lower:
               return self.responses["explain"]
           elif "review" in prompt_lower:
               return self.responses["review"]
           return self.responses["default"]
    
    
    def demo_autogen_with_mock():
       print("="*70)
       print("DEMO 5: AutoGen with Custom LLM Backend")
       print("="*70 + "\n")
       mock_llm = MockLLM()
       conversation = [
           ("User", "Create a fibonacci function"),
           ("CodeAgent", mock_llm.generate("write code for fibonacci")),
           ("ReviewAgent", mock_llm.generate("review this code")),
       ]
       print("Simulated AutoGen Multi-Agent Conversation:\n")
       for speaker, message in conversation:
           print(f"[{speaker}]")
           print(f"{message}\n")
       print("✓ AutoGen simulation complete\n")

    We illustrate AutoGen's core idea by defining a conceptual configuration of agents and their workflow. We then simulate an AutoGen-style conversation using a custom mock LLM that generates realistic yet controllable responses. We note how this framework allows multiple agents to reason, test, and refine ideas collaboratively without relying on any external APIs.
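    The "feedback loop until complete" step from the conceptual workflow can be simulated the same way. This sketch (review_loop and the canned lambdas are hypothetical, in the spirit of MockLLM) alternates a code agent and a review agent until the reviewer approves or a turn limit is reached:

```python
def review_loop(generate_code, review_code, max_turns=3):
    # Alternate code generation and review until approval or turn limit,
    # mirroring AutoGen's assistant/executor feedback cycle.
    transcript = []
    for turn in range(max_turns):
        code = generate_code(turn)
        transcript.append(("CodeAgent", code))
        verdict = review_code(code, turn)
        transcript.append(("ReviewAgent", verdict))
        if "approved" in verdict.lower():
            break
    return transcript

# Canned behaviors: reject the first draft, approve the revision.
gen = lambda turn: f"def solve():  # draft {turn}\n    pass"
rev = lambda code, turn: "Needs tests." if turn == 0 else "Approved."

for speaker, message in review_loop(gen, rev):
    print(f"[{speaker}] {message.splitlines()[0]}")
```

    Swapping the lambdas for calls into MockLLM (or a real model) turns this into a working iterative-refinement loop.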

    def demo_hybrid_system():
       print("="*70)
       print("DEMO 6: Hybrid LangChain + Multi-Agent System")
       print("="*70 + "\n")
       reasoning_prompt = PromptTemplate(
           input_variables=["problem"],
           template="Analyze this problem: {problem}\nWhat are the key steps?"
       )
       reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
       planner = SimpleAgent("Planner", "strategic planner", pipe)
       executor = SimpleAgent("Executor", "task executor", pipe)
       problem = "Optimize a slow database query"
       print(f"Problem: {problem}\n")
       print("[LangChain] Analyzing problem...")
       analysis = reasoning_chain.run(problem=problem)
       print(f"Analysis: {analysis[:120]}...\n")
       print(f"[{planner.name}] Creating plan...")
       plan = planner.process(f"Plan how to: {problem}")
       print(f"Plan: {plan[:120]}...\n")
       print(f"[{executor.name}] Executing...")
       result = executor.process(f"Execute: Add database indexes")
       print(f"Result: {result[:120]}...\n")
       print("✓ Hybrid system complete\n")
    
    
    if __name__ == "__main__":
       print("="*70)
       print("🤖 ADVANCED AGENTIC AI TUTORIAL")
       print("AutoGen + LangChain + HuggingFace")
       print("="*70 + "\n")
       demo_langchain_basics()
       demo_langchain_multi_step()
       demo_simple_agents()
       demo_autogen_conceptual()
       demo_autogen_with_mock()
       demo_hybrid_system()
       print("="*70)
       print("🎉 TUTORIAL COMPLETE!")
       print("="*70)
       print("\n📚 What You Learned:")
       print("  ✓ LangChain prompt engineering and chains")
       print("  ✓ Multi-step reasoning with LangChain")
       print("  ✓ Building custom multi-agent systems")
       print("  ✓ AutoGen architecture and concepts")
       print("  ✓ Combining LangChain + agents")
       print("  ✓ Using HuggingFace models (no API needed!)")
       print("\n💡 Key Takeaway:")
       print("  You can build powerful agentic AI systems without expensive APIs!")
       print("  Combine LangChain's chains with multi-agent architectures for")
       print("  intelligent, autonomous AI systems.")
       print("="*70 + "\n")

    We combine LangChain's structured reasoning with our simple agentic system to create a hybrid intelligent framework. We allow LangChain to analyze problems while the agents plan and execute corresponding actions in sequence. We conclude the demonstration by running all modules together, showcasing how open-source tools can integrate seamlessly to build adaptive, autonomous AI systems.
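    The analyze → plan → execute hand-off in the hybrid demo can be captured in one reusable function. This framework-free sketch uses stub callables in place of the chain and the two agents (solve and all three lambdas are hypothetical names introduced for illustration):

```python
def solve(problem, analyze, plan, execute):
    # Stage 1: structured analysis (the LangChain chain in the demo).
    analysis = analyze(problem)
    # Stage 2: planning agent turns the analysis into a concrete action.
    action = plan(analysis)
    # Stage 3: executor agent carries the action out.
    return {"analysis": analysis, "action": action, "result": execute(action)}

report = solve(
    "Optimize a slow database query",
    analyze=lambda p: f"Key steps for: {p}",
    plan=lambda a: "Add database indexes",
    execute=lambda act: f"Done: {act}",
)
print(report["result"])
# → Done: Add database indexes
```

    Keeping the three stages as injectable callables is what makes the system "hybrid": any stage can be backed by a LangChain chain, a SimpleAgent, or a plain function.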

    In conclusion, we witness how Agentic AI transforms from concept to reality through a simple, modular design. We combine the reasoning depth of LangChain with the cooperative power of agents to build adaptable systems that think, plan, and act independently. The result is a clear demonstration that powerful, autonomous AI systems can be built without expensive infrastructure, leveraging open-source tools, creative design, and a bit of experimentation.




    Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
