    How to Build Production-Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Output, and Concurrent Pipelines

    By Naveed Ahmad | 02/04/2026 | 8 Mins Read


    In this tutorial, we build a complete AgentScope workflow from the ground up and run everything in Colab. We start by wiring OpenAI into AgentScope and validating a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how tools are exposed to the agent. We then move on to a ReAct-based agent that dynamically decides when to call tools, followed by a multi-agent debate setup using MsgHub to simulate structured interaction between agents. Finally, we implement structured outputs with Pydantic and execute a concurrent multi-agent pipeline in which multiple specialists analyze a problem in parallel and a synthesiser combines their insights.

    import subprocess, sys
    
    
    subprocess.check_call([
       sys.executable, "-m", "pip", "install", "-q",
       "agentscope", "openai", "pydantic", "nest_asyncio",
    ])
    
    
    print("✅  All packages installed.\n")
    
    
    import nest_asyncio
    nest_asyncio.apply()
    
    
    import asyncio
    import json
    import getpass
    import math
    import datetime
    from typing import Any
    
    
    from pydantic import BaseModel, Field
    
    
    from agentscope.agent import ReActAgent
    from agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter
    from agentscope.memory import InMemoryMemory
    from agentscope.message import Msg, TextBlock, ToolUseBlock
    from agentscope.model import OpenAIChatModel
    from agentscope.pipeline import MsgHub, sequential_pipeline
    from agentscope.tool import Toolkit, ToolResponse
    
    
    OPENAI_API_KEY = getpass.getpass("🔑  Enter your OpenAI API key: ")
    MODEL_NAME = "gpt-4o-mini"
    
    
    print(f"\n✅  API key captured. Using model: {MODEL_NAME}\n")
    print("=" * 72)
    
    
    def make_model(stream: bool = False) -> OpenAIChatModel:
       return OpenAIChatModel(
           model_name=MODEL_NAME,
           api_key=OPENAI_API_KEY,
           stream=stream,
           generate_kwargs={"temperature": 0.7, "max_tokens": 1024},
       )
    
    
    print("\n" + "═" * 72)
    print("  PART 1: Basic Model Call")
    print("═" * 72)
    
    
    async def part1_basic_model_call():
       model = make_model()
       response = await model(
           messages=[{"role": "user", "content": "What is AgentScope in one sentence?"}],
       )
       text = response.content[0]["text"]
       print(f"\n🤖  Model says: {text}")
       print(f"📊  Tokens used: {response.usage}")
    
    
    asyncio.run(part1_basic_model_call())

    We install all required dependencies and patch the event loop so that asynchronous code runs smoothly in Colab. We securely capture the OpenAI API key and configure the model through a helper function for reuse. We then run a basic model call to verify the setup and inspect the response and token usage.

    print("\n" + "═" * 72)
    print("  PART 2: Custom Tool Functions & Toolkit")
    print("═" * 72)
    
    
    async def calculate_expression(expression: str) -> ToolResponse:
       allowed = {
           "abs": abs, "round": round, "min": min, "max": max,
           "sum": sum, "pow": pow, "int": int, "float": float,
           "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
           "log": math.log, "sin": math.sin, "cos": math.cos,
           "tan": math.tan, "factorial": math.factorial,
       }
       try:
           result = eval(expression, {"__builtins__": {}}, allowed)
           return ToolResponse(content=[TextBlock(type="text", text=str(result))])
       except Exception as exc:
           return ToolResponse(content=[TextBlock(type="text", text=f"Error: {exc}")])
    
    
    async def get_current_datetime(timezone_offset: int = 0) -> ToolResponse:
       now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))
       return ToolResponse(
           content=[TextBlock(type="text", text=now.strftime("%Y-%m-%d %H:%M:%S %Z"))],
       )
    
    
    toolkit = Toolkit()
    toolkit.register_tool_function(calculate_expression)
    toolkit.register_tool_function(get_current_datetime)
    
    
    schemas = toolkit.get_json_schemas()
    print("\n📋  Auto-generated tool schemas:")
    print(json.dumps(schemas, indent=2))
    
    
    async def part2_test_tool():
       result_gen = await toolkit.call_tool_function(
           ToolUseBlock(
               type="tool_use", id="test-1",
               name="calculate_expression",
               input={"expression": "factorial(10)"},
           ),
       )
       async for resp in result_gen:
           print(f"\n🔧  Tool result for factorial(10): {resp.content[0]['text']}")
    
    
    asyncio.run(part2_test_tool())

    We define custom tool functions for mathematical evaluation and datetime retrieval using controlled execution. We register these tools into a toolkit and inspect their auto-generated JSON schemas to understand how AgentScope exposes them. We then simulate a direct tool call to validate that the tool-execution pipeline works correctly.
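    The controlled-eval pattern above can also be exercised on its own, outside AgentScope; a minimal standalone sketch of the same idea (the `safe_eval` helper name is ours, not part of the tutorial):

```python
import math

def safe_eval(expression: str) -> str:
    # Whitelist the only names an expression may reference; builtins are
    # disabled, so e.g. __import__ raises NameError inside eval.
    allowed = {
        "abs": abs, "round": round, "min": min, "max": max,
        "sum": sum, "pow": pow, "int": int, "float": float,
        "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
        "log": math.log, "sin": math.sin, "cos": math.cos,
        "tan": math.tan, "factorial": math.factorial,
    }
    try:
        return str(eval(expression, {"__builtins__": {}}, allowed))
    except Exception as exc:
        return f"Error: {exc}"

print(safe_eval("factorial(10)"))      # 3628800
print(safe_eval("sqrt(16) + pi"))
print(safe_eval("__import__('os')"))   # blocked, returns an Error string
```

    Note that even with builtins disabled, `eval` is not a hardened sandbox (attribute chains such as `().__class__` still resolve), so a real expression parser is the safer choice for production tools.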

    print("\n" + "═" * 72)
    print("  PART 3: ReAct Agent with Tools")
    print("═" * 72)
    
    
    async def part3_react_agent():
       agent = ReActAgent(
           name="MathBot",
           sys_prompt=(
               "You are MathBot, a helpful assistant that solves math problems. "
               "Use the calculate_expression tool for any computation. "
               "Use get_current_datetime when asked about the time."
           ),
           model=make_model(),
           memory=InMemoryMemory(),
           formatter=OpenAIChatFormatter(),
           toolkit=toolkit,
           max_iters=5,
       )


       queries = [
           "What's the current time in UTC+5?",
       ]
       for q in queries:
           print(f"\n👤  User: {q}")
           msg = Msg("user", q, "user")
           response = await agent(msg)
           print(f"🤖  MathBot: {response.get_text_content()}")
           agent.memory.clear()
    
    
    asyncio.run(part3_react_agent())
    
    
    print("\n" + "═" * 72)
    print("  PART 4: Multi-Agent Debate (MsgHub)")
    print("═" * 72)
    
    
    DEBATE_TOPIC = (
       "Should artificial general intelligence (AGI) research be open-sourced, "
       "or should it remain behind closed doors at major labs?"
    )
    

    We assemble a ReAct agent that reasons about when to use tools and executes them dynamically. We pass user queries and observe how the agent combines reasoning with tool use to produce answers. We also reset memory between queries to ensure independent, clean interactions.

    async def part4_debate():
       proponent = ReActAgent(
           name="Proponent",
           sys_prompt=(
               f"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. "
               f"Topic: {DEBATE_TOPIC}\n"
               "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
           ),
           model=make_model(),
           memory=InMemoryMemory(),
           formatter=OpenAIMultiAgentFormatter(),
       )


       opponent = ReActAgent(
           name="Opponent",
           sys_prompt=(
               f"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. "
               f"Topic: {DEBATE_TOPIC}\n"
               "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
           ),
           model=make_model(),
           memory=InMemoryMemory(),
           formatter=OpenAIMultiAgentFormatter(),
       )
    
    
       num_rounds = 2
       for rnd in range(1, num_rounds + 1):
           print(f"\n{'─' * 60}")
           print(f"  ROUND {rnd}")
           print(f"{'─' * 60}")


           async with MsgHub(
               participants=[proponent, opponent],
               announcement=Msg("Moderator", f"Round {rnd} — begin. Topic: {DEBATE_TOPIC}", "assistant"),
           ):
               pro_msg = await proponent(
                   Msg("Moderator", "Proponent, please present your argument.", "user"),
               )
               print(f"\n✅  Proponent:\n{pro_msg.get_text_content()}")


               opp_msg = await opponent(
                   Msg("Moderator", "Opponent, please respond and present your counter-argument.", "user"),
               )
               print(f"\n❌  Opponent:\n{opp_msg.get_text_content()}")


       print(f"\n{'─' * 60}")
       print("  DEBATE COMPLETE")
       print(f"{'─' * 60}")
    
    
    asyncio.run(part4_debate())
    
    
    print("\n" + "═" * 72)
    print("  PART 5: Structured Output with Pydantic")
    print("═" * 72)
    
    
    class MovieReview(BaseModel):
       year: int = Field(description="The release year.")
       genre: str = Field(description="Primary genre of the movie.")
       rating: float = Field(description="Rating from 0.0 to 10.0.")
       pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
       cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
       verdict: str = Field(description="A one-sentence final verdict.")
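    Before handing this schema to an agent, it can help to sanity-check the model offline; a minimal sketch with a hand-written payload standing in for model output (no agent or API call involved), showing Pydantic's type coercion:

```python
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    year: int = Field(description="The release year.")
    genre: str = Field(description="Primary genre of the movie.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")
    pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
    cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
    verdict: str = Field(description="A one-sentence final verdict.")

# A made-up payload; Pydantic coerces the string "2010" to the int 2010.
payload = {
    "year": "2010", "genre": "Sci-Fi", "rating": 8.8,
    "pros": ["visuals", "score"], "cons": ["dense exposition"],
    "verdict": "A layered heist film that rewards rewatching.",
}
review = MovieReview(**payload)
print(review.year, review.rating)  # 2010 8.8
```

    A mistyped payload (say, `"rating": "high"`) raises `ValidationError` instead of silently passing bad data downstream, which is exactly the reliability the structured-output step buys us.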

    We create two agents with opposing roles and connect them through MsgHub for a structured multi-agent debate. We simulate multiple rounds in which each agent responds to the other while maintaining context through shared communication. We observe how agent coordination enables a coherent exchange of arguments across turns.

    async def part5_structured_output():
       agent = ReActAgent(
           name="Critic",
           sys_prompt="You are a professional movie critic. When asked to review a movie, provide a thorough analysis.",
           model=make_model(),
           memory=InMemoryMemory(),
           formatter=OpenAIChatFormatter(),
       )


       msg = Msg("user", "Review the movie 'Inception' (2010) by Christopher Nolan.", "user")
       response = await agent(msg, structured_model=MovieReview)


       print("\n🎬  Structured Movie Review:")
       print(f"    Title   : {response.metadata.get('title', 'N/A')}")
       print(f"    Year    : {response.metadata.get('year', 'N/A')}")
       print(f"    Genre   : {response.metadata.get('genre', 'N/A')}")
       print(f"    Rating  : {response.metadata.get('rating', 'N/A')}/10")
       pros = response.metadata.get('pros', [])
       cons = response.metadata.get('cons', [])
       if pros:
           print(f"    Pros    : {', '.join(str(p) for p in pros)}")
       if cons:
           print(f"    Cons    : {', '.join(str(c) for c in cons)}")
       print(f"    Verdict : {response.metadata.get('verdict', 'N/A')}")


       print(f"\n📝  Full text response:\n{response.get_text_content()}")
    
    
    asyncio.run(part5_structured_output())
    
    
    print("\n" + "═" * 72)
    print("  PART 6: Concurrent Multi-Agent Pipeline")
    print("═" * 72)
    
    
    async def part6_concurrent_agents():
       specialists = {
           "Economist": "You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.",
           "Ethicist": "You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.",
           "Technologist": "You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.",
       }


       agents = []
       for name, prompt in specialists.items():
           agents.append(
               ReActAgent(
                   name=name,
                   sys_prompt=prompt,
                   model=make_model(),
                   memory=InMemoryMemory(),
                   formatter=OpenAIChatFormatter(),
               )
           )


       topic_msg = Msg(
           "user",
           "Analyze the impact of large language models on the global workforce.",
           "user",
       )
    
    
       print("\n⏳  Running 3 specialist agents concurrently...")
       results = await asyncio.gather(*(agent(topic_msg) for agent in agents))


       for agent, result in zip(agents, results):
           print(f"\n🧠  {agent.name}:\n{result.get_text_content()}")
    
    
       synthesiser = ReActAgent(
           name="Synthesiser",
           sys_prompt=(
               "You are a synthesiser. You receive analyses from an Economist, "
               "an Ethicist, and a Technologist. Combine their perspectives into "
               "a single coherent summary of 3-4 sentences."
           ),
           model=make_model(),
           memory=InMemoryMemory(),
           formatter=OpenAIMultiAgentFormatter(),
       )


       combined_text = "\n\n".join(
           f"[{agent.name}]: {r.get_text_content()}" for agent, r in zip(agents, results)
       )
       synthesis = await synthesiser(
           Msg("user", f"Here are the specialist analyses:\n\n{combined_text}\n\nPlease synthesise.", "user"),
       )
       print(f"\n🔗  Synthesised Summary:\n{synthesis.get_text_content()}")
    
    
    asyncio.run(part6_concurrent_agents())
    
    
    print("\n" + "═" * 72)
    print("  🎉  TUTORIAL COMPLETE!")
    print("  You have covered:")
    print("    1. Basic model calls with OpenAIChatModel")
    print("    2. Custom tool functions & auto-generated JSON schemas")
    print("    3. ReAct agent with tool use")
    print("    4. Multi-agent debate with MsgHub")
    print("    5. Structured output with Pydantic models")
    print("    6. Concurrent multi-agent pipelines")
    print("═" * 72)

    We implement structured outputs using a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline in which multiple specialist agents analyze a topic in parallel. Finally, we aggregate their outputs with a synthesiser agent to produce a unified, coherent summary.
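    The fan-out/fan-in shape of Part 6 reduces to plain `asyncio`; a minimal offline sketch in which the specialist calls are stubbed with strings instead of model calls (the names and outputs here are illustrative only):

```python
import asyncio

async def specialist(name: str, topic: str) -> str:
    # Stand-in for an agent invocation; a real version would call the model API.
    await asyncio.sleep(0.01)
    return f"[{name}]: analysis of {topic}"

async def main() -> str:
    topic = "LLMs and the workforce"
    names = ["Economist", "Ethicist", "Technologist"]
    # Fan out: run all specialists concurrently, as asyncio.gather does in Part 6.
    results = await asyncio.gather(*(specialist(n, topic) for n in names))
    # Fan in: join the analyses into one prompt for the synthesiser step.
    return "\n\n".join(results)

combined = asyncio.run(main())
print(combined)
```

    Because `asyncio.gather` preserves argument order regardless of completion order, the combined text always lists the specialists in creation order, which keeps the synthesiser prompt deterministic.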

    In conclusion, we have implemented a full-stack agentic system that goes beyond simple prompting into orchestrated reasoning, tool use, and collaboration. We now understand how AgentScope manages memory, formatting, and tool execution under the hood, and how ReAct agents bridge reasoning with action. We also saw how multi-agent systems can be coordinated both sequentially and concurrently, and how structured outputs ensure reliability in downstream applications. With these building blocks, we are ready to design more advanced agent architectures, extend tool ecosystems, and deploy scalable, production-ready AI systems.


    Check out the Full Notebook here. Also, feel free to follow us on Twitter, and don't forget to join our 120k+ ML SubReddit and subscribe to our Newsletter. You can also join us on Telegram.



