    A Groq-Powered Agentic Research Assistant with LangGraph, Tool Calling, Sub-Agents, and Agentic Memory: Let's Build It

    By Naveed Ahmad · 07/05/2026 · Updated: 07/05/2026 · 10 Mins Read


    In this tutorial, we build a Groq-powered agentic research workflow that runs directly against Groq's free OpenAI-compatible inference endpoint. We configure LangChain's ChatOpenAI interface to work with Groq by setting the Groq API key and base URL, allowing us to use fast hosted models such as llama-3.3-70b-versatile for tool-based reasoning. We then connect the model to practical tools for web search, webpage fetching, file handling, Python execution, skill loading, sub-agent delegation, and long-term memory. By the end of the tutorial, we have a working Groq-based multi-step agent that can research a topic, delegate focused subtasks, generate structured outputs, and save useful information for later runs.

    import subprocess, sys
    def _pip(*a): subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", *a])
    _pip("langgraph>=0.2.50", "langchain>=0.3.0", "langchain-openai>=0.2.0",
         "langchain-community>=0.3.0", "ddgs", "requests", "beautifulsoup4",
         "tiktoken", "pydantic>=2.0")
    
    
    import os, getpass
    if not os.environ.get("GROQ_API_KEY"):
        os.environ["GROQ_API_KEY"] = getpass.getpass("GROQ_API_KEY (free at console.groq.com/keys): ")
    
    
    os.environ["OPENAI_API_KEY"]  = os.environ["GROQ_API_KEY"]
    os.environ["OPENAI_BASE_URL"] = "https://api.groq.com/openai/v1"
    
    
    MODEL_NAME = "llama-3.3-70b-versatile"
    
    
    import json, re, io, contextlib, pathlib
    from typing import Annotated, TypedDict, Sequence, Literal, List, Dict, Any
    from datetime import datetime, timezone
    from langchain_openai import ChatOpenAI
    from langchain_core.messages import (
        SystemMessage, HumanMessage, AIMessage, ToolMessage, BaseMessage)
    from langchain_core.tools import tool
    from langgraph.graph import StateGraph, END
    from langgraph.graph.message import add_messages
    from langgraph.prebuilt import ToolNode

    We install the core libraries required to build the Groq-powered agent workflow, including LangGraph, LangChain, DuckDuckGo search utilities, and supporting parsing libraries. We securely collect the Groq API key and configure Groq as an OpenAI-compatible endpoint by setting the API key and base URL. We then import all required modules for messages, tools, graph construction, typing, filesystem handling, and model initialization.
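    The key trick above is that no Groq-specific client is needed: any OpenAI-compatible SDK can be redirected with just two environment variables. A minimal sketch of that aliasing (the helper name and dummy key below are ours, for illustration only):

```python
import os

def configure_groq_as_openai(groq_key: str) -> dict:
    """Alias a Groq key onto the OpenAI env vars that ChatOpenAI reads by default."""
    os.environ["GROQ_API_KEY"] = groq_key
    os.environ["OPENAI_API_KEY"] = groq_key                           # reused as-is
    os.environ["OPENAI_BASE_URL"] = "https://api.groq.com/openai/v1"  # redirect to Groq
    return {k: os.environ[k] for k in ("OPENAI_API_KEY", "OPENAI_BASE_URL")}

cfg = configure_groq_as_openai("gsk_dummy_key_for_illustration")
print(cfg["OPENAI_BASE_URL"])  # https://api.groq.com/openai/v1
```

Any client that honors `OPENAI_BASE_URL` (the official SDK, LangChain's ChatOpenAI) now talks to Groq without code changes.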

    SANDBOX = pathlib.Path("/content/deerflow_sandbox").resolve()
    for sub in ["uploads", "workspace", "outputs", "skills/public", "skills/custom", "memory"]:
        (SANDBOX/sub).mkdir(parents=True, exist_ok=True)
    
    
    def _safe(p: str) -> pathlib.Path:
        full = (SANDBOX/p.lstrip("/")).resolve()
        if not str(full).startswith(str(SANDBOX)):
            raise ValueError(f"path escapes sandbox: {p}")
        return full
    
    
    SKILLS: Dict[str, Dict[str, str]] = {}
    def register_skill(name, description, content, location="public"):
        d = SANDBOX/"skills"/location/name; d.mkdir(parents=True, exist_ok=True)
        (d/"SKILL.md").write_text(content)
        SKILLS[name] = {"description": description, "content": content,
                        "path": str(d/"SKILL.md")}
    
    
    register_skill("research",
        "Conduct multi-source web research on a topic and produce structured notes.",
        """# Research Skill
    ## Workflow
    1. Decompose the question into 3-5 sub-questions.
    2. For each sub-question call `web_search` and pick 2 authoritative URLs.
    3. `web_fetch` those URLs; extract concrete facts, numbers, dates.
    4. Cross-reference for consensus vs. disagreement.
    5. Append findings to `workspace/research_notes.md`: claim → evidence → URL.
    ## Best practices
    - Prefer primary sources. Note dates. Never fabricate URLs or numbers.""")
    
    
    register_skill("report-generation",
        "Synthesize research notes into a polished markdown report in outputs/.",
        """# Report Generation Skill
    ## Workflow
    1. file_read('workspace/research_notes.md').
    2. Outline: exec summary, key findings, analysis, conclusion, sources.
    3. file_write('outputs/report.md', ...).
    ## Structure
    - # Title
    - ## Executive Summary   (3–5 sentences)
    - ## Key Findings        (bullets)
    - ## Detailed Analysis   (sections)
    - ## Conclusion
    - ## Sources             (numbered URL list)""")
    
    
    register_skill("code-execution",
        "Run Python in the sandbox for computation, data wrangling, charts.",
        """# Code Execution Skill
    1. Plan in plain language first.
    2. python_exec the code; persistent artifacts go to /outputs/.
    3. Verify before quoting results.""")
    
    
    MEM = SANDBOX/"memory/long_term.json"
    if not MEM.exists():
        MEM.write_text(json.dumps({"facts": [], "preferences": {}}, indent=2))
    def _load_mem(): return json.loads(MEM.read_text())
    def _save_mem(m): MEM.write_text(json.dumps(m, indent=2))

    We create a sandboxed project directory in Colab to keep uploads, workspace files, outputs, skills, and memory organized in one controlled location. We define reusable skills for research, report generation, and code execution so the agent can discover and follow structured workflows. We also initialize a simple long-term memory JSON file that stores facts and preferences across multiple runs within the same sandbox.
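    The `_safe` guard is the piece worth testing in isolation: every file tool funnels through it, so path traversal must fail loudly. A self-contained sketch of the same logic, using a temporary directory as a stand-in for the Colab sandbox:

```python
import pathlib, tempfile

SANDBOX = pathlib.Path(tempfile.mkdtemp()).resolve()  # stand-in for the real sandbox

def _safe(p: str) -> pathlib.Path:
    """Resolve a user-supplied path and refuse anything outside SANDBOX."""
    full = (SANDBOX / p.lstrip("/")).resolve()
    if not str(full).startswith(str(SANDBOX)):
        raise ValueError(f"path escapes sandbox: {p}")
    return full

inside = _safe("workspace/notes.md")      # OK: stays under SANDBOX
try:
    _safe("../../etc/passwd")             # traversal attempt
    escaped = False
except ValueError:
    escaped = True
print(escaped)  # True — the guard rejected the traversal
```

Note that the check happens after `.resolve()`, so `..` segments and absolute paths are normalized before comparison.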

    @tool
    def list_skills() -> str:
        """List all skills with one-line descriptions. Call this first for complex tasks."""
        return "\n".join(f"- {n}: {s['description']}" for n, s in SKILLS.items())
    
    
    @tool
    def load_skill(name: str) -> str:
        """Load the full SKILL.md for `name`. Call before running its workflow."""
        if name not in SKILLS: return f"Unknown. Available: {list(SKILLS)}"
        return SKILLS[name]["content"]
    
    
    @tool
    def web_search(query: str, max_results: int = 5) -> str:
        """Search the web (DuckDuckGo). Returns titles, URLs, snippets."""
        from ddgs import DDGS
        out = []
        try:
            with DDGS() as d:
                for r in d.text(query, max_results=max_results):
                    out.append(f"- {r.get('title','')}\n  URL: {r.get('href','')}\n  "
                               f"{(r.get('body') or '')[:220]}")
        except Exception as e:
            return f"search error: {e}"
        return "\n".join(out) or "no results"
    
    
    @tool
    def web_fetch(url: str, max_chars: int = 4000) -> str:
        """Fetch a URL, return cleaned text (scripts/nav stripped)."""
        import requests
        from bs4 import BeautifulSoup
        try:
            r = requests.get(url, timeout=15,
                             headers={"User-Agent": "Mozilla/5.0 DeerFlow-Lite"})
            soup = BeautifulSoup(r.text, "html.parser")
            for s in soup(["script","style","nav","footer","aside","header"]): s.decompose()
            text = re.sub(r"\n\s*\n", "\n\n", soup.get_text("\n")).strip()
            return text[:max_chars] or "(empty page)"
        except Exception as e:
            return f"fetch error: {e}"
    
    
    @tool
    def file_write(path: str, content: str) -> str:
        """Write content to a sandbox path, e.g. 'workspace/notes.md' or 'outputs/x.md'."""
        p = _safe(path); p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(content)
        return f"wrote {len(content)} chars → {path}"
    
    
    @tool
    def file_read(path: str) -> str:
        """Read a sandbox file (first 8 KB)."""
        p = _safe(path)
        return p.read_text()[:8000] if p.exists() else f"not found: {path}"
    
    
    @tool
    def file_list(path: str = "") -> str:
        """List files under a sandbox dir."""
        base = _safe(path) if path else SANDBOX
        if not base.exists(): return "not found"
        items = []
        for c in sorted(base.rglob("*")):
            if "memory" in c.relative_to(SANDBOX).parts: continue
            items.append(f"  {'D' if c.is_dir() else 'F'}  {c.relative_to(SANDBOX)}")
        return "\n".join(items[:60]) or "(empty)"
    
    
    @tool
    def python_exec(code: str) -> str:
        """Run Python in the sandbox. SANDBOX_ROOT is preset."""
        g = {"__name__": "__sb__", "SANDBOX_ROOT": str(SANDBOX)}
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf), contextlib.redirect_stderr(buf):
                exec(code, g)
            return (buf.getvalue() or "(no stdout)")[:4000]
        except Exception as e:
            return f"{type(e).__name__}: {e}\n{buf.getvalue()[:1500]}"
    
    
    @tool
    def remember(fact: str) -> str:
        """Persist a single fact to long-term memory (survives across runs)."""
        m = _load_mem()
        m["facts"].append({"fact": fact, "ts": datetime.now(timezone.utc).isoformat()})
        _save_mem(m)
        return f"remembered ({len(m['facts'])} total)"
    
    
    @tool
    def recall() -> str:
        """Retrieve everything in long-term memory."""
        m = _load_mem()
        if not m["facts"]: return "(memory empty)"
        return "\n".join(f"- {f['fact']}" for f in m["facts"][-20:])

    We define the main tools the Groq-backed agent can call during execution, including listing skills, loading skill instructions, searching the web, fetching webpages, reading files, and writing files. We also give the agent a sandboxed Python execution environment so it can run computations or generate artifacts when needed. We add memory tools that allow the agent to remember important facts and recall previously saved information.
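    Notice that `python_exec` returns a string in every case — captured stdout, a placeholder for empty output, or a formatted error — so the model always receives something it can reason about. The capture pattern can be exercised on its own (a hedged sketch of the same idea, not the tutorial's exact tool):

```python
import io, contextlib

def capture_exec(code: str) -> str:
    """Run code in a fresh namespace, returning stdout or a formatted error."""
    g = {"__name__": "__sb__"}
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf), contextlib.redirect_stderr(buf):
            exec(code, g)
        return buf.getvalue() or "(no stdout)"
    except Exception as e:
        return f"{type(e).__name__}: {e}"

ok  = capture_exec("print(sum(range(10)))")
err = capture_exec("1/0")
print(ok.strip())   # 45
print(err)          # ZeroDivisionError: division by zero
```

Returning the exception as text, rather than raising, is deliberate: a tool error becomes a ToolMessage the agent can recover from instead of crashing the graph.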

    @tool
    def spawn_subagent(role: str, task: str,
                       allowed_tools: str = "web_search,web_fetch,file_write,file_read") -> str:
        """Spawn an isolated sub-agent with a focused role and scoped tools.
        Returns its final report string. Use for parallelizable / focused subtasks."""
        bag = {t.name: t for t in BASE_TOOLS}
        sub_tools = [bag[n.strip()] for n in allowed_tools.split(",") if n.strip() in bag]
        sub_llm = ChatOpenAI(model=MODEL_NAME, temperature=0.2).bind_tools(sub_tools)
        sys_msg = SystemMessage(content=(
            f"You are a specialized sub-agent. Role: {role}.\n"
            f"You operate in an ISOLATED context — no access to the lead agent's history.\n"
            f"Tools: {', '.join(t.name for t in sub_tools)}.\n"
            "End with a final assistant message starting 'FINAL REPORT:' "
            "containing a structured ≤700-word summary including any URLs."))
        msgs: List[BaseMessage] = [sys_msg, HumanMessage(content=task)]
        for _ in range(8):
            r = sub_llm.invoke(msgs); msgs.append(r)
            if not getattr(r, "tool_calls", None):
                return f"[sub-agent: {role}]\n" + (r.content if isinstance(r.content, str) else str(r.content))
            for tc in r.tool_calls:
                t = bag.get(tc["name"])
                try:
                    res = t.invoke(tc["args"]) if t else f"unknown tool {tc['name']}"
                except Exception as e:
                    res = f"tool error: {e}"
                msgs.append(ToolMessage(content=str(res)[:3000], tool_call_id=tc["id"]))
        return f"[sub-agent: {role}] step-limit reached."
    
    
    BASE_TOOLS = [list_skills, load_skill, web_search, web_fetch, file_write,
                  file_read, file_list, python_exec, remember, recall]
    ALL_TOOLS = BASE_TOOLS + [spawn_subagent]
    
    
    LEAD_SYSTEM = f"""You are DeerFlow-Lite, a long-horizon super-agent harness.
    
    
    Sandbox layout (relative to {SANDBOX}):
     uploads/    – user files
     workspace/  – your scratchpad
     outputs/    – final deliverables
     skills/     – capability modules (load_skill)
    
    
    Principles:
     • For non-trivial tasks: list_skills → load_skill → execute.
     • Use spawn_subagent for focused subtasks (isolated context keeps the lead lean).
     • Persist intermediates to workspace/, deliverables to outputs/.
     • Use remember(fact) for cross-session knowledge.
     • Finish with a short summary of what was produced and where.
    
    
    Today: {datetime.now(timezone.utc).strftime('%Y-%m-%d')}."""
    
    
    class AgentState(TypedDict):
        messages: Annotated[Sequence[BaseMessage], add_messages]
    
    
    llm = ChatOpenAI(model=MODEL_NAME, temperature=0.3).bind_tools(ALL_TOOLS)
    
    
    def call_model(state: AgentState):
        msgs = list(state["messages"])
        if not msgs or not isinstance(msgs[0], SystemMessage):
            msgs = [SystemMessage(content=LEAD_SYSTEM)] + msgs
        return {"messages": [llm.invoke(msgs)]}
    
    
    def route(state: AgentState) -> Literal["tools", "__end__"]:
        last = state["messages"][-1]
        return "tools" if getattr(last, "tool_calls", None) else END
    
    
    g = StateGraph(AgentState)
    g.add_node("agent", call_model)
    g.add_node("tools", ToolNode(ALL_TOOLS))
    g.set_entry_point("agent")
    g.add_conditional_edges("agent", route, {"tools": "tools", END: END})
    g.add_edge("tools", "agent")
    APP = g.compile()

    We create a sub-agent tool that allows the main Groq-powered agent to delegate focused tasks to an isolated assistant with a restricted set of tools. We then collect all available tools, define the lead system prompt, initialize the Groq-backed chat model, and bind the tools to it. We finally build the LangGraph workflow so the agent can alternate between reasoning and tool execution until it reaches a final answer.
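    The conditional edge carries the whole control flow: the graph loops agent → tools → agent until the model stops emitting tool calls. The routing predicate can be exercised without any model or LangGraph at all, using stub messages (`FakeAIMessage` and the `END` constant below are ours, mirroring the real types):

```python
from dataclasses import dataclass, field

END = "__end__"  # sentinel mirroring langgraph.graph.END

@dataclass
class FakeAIMessage:
    """Minimal stand-in for an AIMessage: content plus optional tool_calls."""
    content: str = ""
    tool_calls: list = field(default_factory=list)

def route(state: dict) -> str:
    """Send control to the tool node iff the last message requested tool calls."""
    last = state["messages"][-1]
    return "tools" if getattr(last, "tool_calls", None) else END

wants_tool = {"messages": [FakeAIMessage(tool_calls=[{"name": "web_search"}])]}
finished   = {"messages": [FakeAIMessage(content="FINAL REPORT: done")]}
print(route(wants_tool))  # tools
print(route(finished))    # __end__
```

Because `route` only inspects the last message, any node that appends an AIMessage without tool calls — including one that merely ran out of things to do — cleanly terminates the loop.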

    def run(task: str, max_steps: int = 25):
        print("="*78); print(f"🦌 TASK: {task}"); print("="*78)
        state = {"messages": [HumanMessage(content=task)]}
        n = 0
        for ev in APP.stream(state, {"recursion_limit": max_steps*2}, stream_mode="updates"):
            for node, payload in ev.items():
                for m in payload.get("messages", []):
                    n += 1
                    if isinstance(m, AIMessage):
                        if m.tool_calls:
                            for tc in m.tool_calls:
                                args = json.dumps(tc["args"], ensure_ascii=False)
                                args = args[:140] + ("…" if len(args) > 140 else "")
                                print(f"[{n:02}] 🔧 {tc['name']}({args})")
                        else:
                            txt = m.content if isinstance(m.content, str) else str(m.content)
                            print(f"[{n:02}] 🦌 {txt[:800]}")
                    elif isinstance(m, ToolMessage):
                        s = str(m.content).replace("\n", " ")[:220]
                        print(f"[{n:02}] 📤 {s}")
        print("\n"+"="*78); print("✅ COMPLETE — sandbox state:"); print("="*78)
        print(file_list.invoke({"path": ""}))
        print("\n🧠 Long-term memory:"); print(recall.invoke({}))
        for f in sorted((SANDBOX/"outputs").rglob("*")):
            if f.is_file():
                print(f"\n--- 📄 {f.relative_to(SANDBOX)} (first 800 chars) ---")
                print(f.read_text()[:800])
    
    
    run(
        "Give me a briefing on small language models (SLMs) in 2025. "
        "(1) discover skills; (2) spawn a researcher sub-agent to gather "
        "specifics on three notable SLMs from 2024-2025 with sizes, benchmarks, "
        "and use cases — sub-agent saves to workspace/slm_research.md; "
        "(3) load the report-generation skill and write outputs/slm_briefing.md "
        "(~400 words) with a Sources section; (4) save the single most "
        "important takeaway to long-term memory; (5) summarize.",
        max_steps=25,
    )
    

    We define the run() function that starts a user task, streams each agent step, and prints tool calls, tool outputs, and final responses in a readable format. We also display the sandbox file structure, long-term memory, and generated output files after the workflow completes. We finish by running a demo task in which the Groq-powered agent researches small language models, prepares a briefing, saves a report, and stores one key takeaway in memory.

    In conclusion, we created a compact yet capable Groq-based agent framework that demonstrates how Groq's OpenAI-compatible API can serve as a fast, accessible backend for advanced LLM workflows. We used LangGraph to manage the agent loop, LangChain to bind tools to the Groq-hosted model, and custom Python utilities to give the system controlled access to search, files, code execution, and memory. We also demonstrated how isolated sub-agents can help handle focused research tasks while the main agent coordinates the overall workflow. Finally, we arrived at a practical Groq-powered agentic system that can be extended into research assistants, automated briefing generators, and multi-step AI applications.


