How do you convert real agent traces into reinforcement learning (RL) transitions to improve policy LLMs without changing your existing agent stack? The Microsoft AI team has released Agent Lightning to help optimize multi-agent systems. Agent Lightning is an open-source framework that makes reinforcement learning work for any AI agent without rewrites. It separates training from execution, defines a unified trace format, and introduces LightningRL, a hierarchical method that converts complex agent runs into transitions that standard single-turn RL trainers can optimize.
What does Agent Lightning do?
The framework models an agent as a decision process. It formalizes the agent as a partially observable Markov decision process where the observation is the current input to the policy LLM, the action is the model call, and the reward can be terminal or intermediate. From each run it extracts only the calls made by the policy model, together with inputs, outputs, and rewards. This trims away other framework noise and yields clean transitions for training.
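The extraction step above can be sketched in a few lines. The `Transition` shape and field names below are illustrative assumptions, not the actual Agent Lightning API: the point is that only policy-model calls survive, and each becomes an (observation, action, reward) record.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    """One policy-LLM call: observation (prompt in), action (response out), reward."""
    observation: str
    action: str
    reward: float

def extract_transitions(run, policy_name="policy_llm"):
    """Keep only calls made by the policy model; drop tool and framework spans."""
    return [
        Transition(call["input"], call["output"], call.get("reward", 0.0))
        for call in run
        if call["model"] == policy_name
    ]

# A toy run mixing policy calls with a tool call the trainer should ignore.
run = [
    {"model": "policy_llm", "input": "q1", "output": "a1", "reward": 0.0},
    {"model": "calculator", "input": "2+2", "output": "4"},
    {"model": "policy_llm", "input": "q2", "output": "a2", "reward": 1.0},
]
transitions = extract_transitions(run)
print(len(transitions))  # 2 — only the two policy calls survive
```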
LightningRL performs credit assignment across multi-step episodes, then optimizes the policy with a single-turn RL objective. The research team describes compatibility with single-turn RL methods. In practice, teams often use trainers that implement PPO or GRPO, such as VeRL, which fits this interface.
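As a rough sketch of what credit assignment buys you: a terminal episode reward gets distributed over the per-call transitions, so each one becomes a single-turn example a standard PPO/GRPO trainer can consume. The equal-split scheme below is a simple baseline for illustration, not the paper's exact method.

```python
def assign_credit(transitions, episode_reward, scheme="equal"):
    """Spread a terminal episode reward across per-call transitions, producing
    single-turn (prompt, response, reward) examples. 'equal' splits the reward
    evenly; a real scheme could weight steps differently."""
    n = len(transitions)
    per_step = episode_reward / n if scheme == "equal" else episode_reward
    return [(obs, act, per_step) for (obs, act, _old_reward) in transitions]

# A two-step episode whose only reward arrives at the end.
episode = [("q1", "a1", 0.0), ("q2", "a2", 0.0)]
print(assign_credit(episode, 1.0))  # [('q1', 'a1', 0.5), ('q2', 'a2', 0.5)]
```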
System architecture
Agent Lightning uses Training Agent Disaggregation. A Lightning Server runs training and serving, and exposes an OpenAI-like API for the updated model. A Lightning Client runs the agent runtime where it already lives, captures traces of prompts, tool calls, and rewards, and streams them back to the server. This keeps tools, browsers, shells, and other dependencies close to production while the GPU training stays in the server tier.
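Because the server speaks an OpenAI-like API, the client side can address it like any OpenAI-compatible endpoint. The URL and model name below are placeholders I am assuming for illustration, not documented Agent Lightning defaults:

```python
import json
from urllib import request

LIGHTNING_SERVER = "http://localhost:8000/v1/chat/completions"  # assumed address

def build_request(prompt, model="policy-llm-latest"):
    """Build a chat-completions request to the Lightning Server exactly as an
    agent would to any OpenAI-compatible endpoint, so agent code stays unchanged."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return request.Request(
        LIGHTNING_SERVER,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Write a SQL query for ...")
print(json.loads(req.data)["model"])  # policy-llm-latest
```

Swapping the base URL is the only integration point on the agent side; tools and browsers stay where they already run.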
The runtime supports two tracing paths. A default path uses OpenTelemetry spans, so you can pipe agent telemetry through standard collectors. There is also a lightweight embedded tracer for teams that do not want to deploy OpenTelemetry. Both paths end up in the same store for training.
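A minimal embedded tracer of the kind described can be sketched with a context manager; this is my own toy version under stated assumptions, not the framework's actual tracer, but it records the same kind of span data (name, attributes, timing) the OpenTelemetry path would:

```python
import time
from contextlib import contextmanager

SPANS = []  # in-process span store; a real deployment would stream these out

@contextmanager
def span(name, **attrs):
    """Record a named span with attributes and wall-clock duration."""
    record = {"name": name, "attrs": attrs, "start": time.time()}
    try:
        yield record
    finally:
        record["end"] = time.time()
        SPANS.append(record)

with span("llm_call", model="policy_llm") as s:
    s["attrs"]["output"] = "SELECT 1"  # the agent adds outputs as it goes

print(SPANS[0]["name"])  # llm_call
```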
Unified data interface
Agent Lightning records each model call and each tool call as a span with inputs, outputs, and metadata. The algorithm layer adapts spans into ordered triplets of prompt, response, and reward. This selective extraction lets you optimize one agent in a multi-agent workflow, or several agents at once, without touching orchestration code. The same traces can also drive automatic prompt optimization or supervised fine-tuning.
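The selective extraction can be illustrated with a hypothetical multi-agent trace (the span fields and agent names here are assumed for the example): only model spans from the agents being trained are adapted into (prompt, response, reward) triplets.

```python
spans = [
    {"agent": "writer",   "type": "model", "prompt": "p1", "response": "r1", "reward": 0.2},
    {"agent": "checker",  "type": "model", "prompt": "p2", "response": "r2", "reward": 0.0},
    {"agent": "writer",   "type": "tool",  "prompt": "t",  "response": "ok"},
    {"agent": "rewriter", "type": "model", "prompt": "p3", "response": "r3", "reward": 0.8},
]

def to_triplets(spans, optimize=frozenset({"writer", "rewriter"})):
    """Adapt spans into ordered (prompt, response, reward) triplets for the
    agents being optimized; the fixed checker and tool calls are skipped."""
    return [
        (s["prompt"], s["response"], s["reward"])
        for s in spans
        if s["type"] == "model" and s["agent"] in optimize
    ]

print(to_triplets(spans))  # [('p1', 'r1', 0.2), ('p3', 'r3', 0.8)]
```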
Experiments and datasets
The research team reports three tasks. For text-to-SQL, the team uses the Spider benchmark. Spider contains more than 10,000 questions across 200 databases that span 138 domains. The policy model is Llama 3.2 3B Instruct. The implementation uses LangChain with a writer agent, a rewriter agent, and a checker. The writer and the rewriter are optimized, and the checker is left fixed. Rewards improve steadily during training and at test time.
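A typical reward signal for text-to-SQL is execution match against a gold query; the sketch below shows that idea on SQLite. It is a common baseline, not necessarily the exact reward the paper uses on Spider:

```python
import sqlite3

def execution_reward(candidate_sql, gold_sql, db):
    """1.0 if the candidate query returns the same rows as the gold query,
    0.0 on mismatch or execution error."""
    try:
        got = db.execute(candidate_sql).fetchall()
        want = db.execute(gold_sql).fetchall()
        return 1.0 if got == want else 0.0
    except sqlite3.Error:
        return 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ada'), (2, 'bob')")
print(execution_reward("SELECT name FROM users WHERE id = 1",
                       "SELECT name FROM users WHERE id = 1", db))  # 1.0
```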
For retrieval-augmented generation, the setup uses the MuSiQue benchmark and a Wikipedia-scale index with about 21 million documents. The retriever uses BGE embeddings with cosine similarity. The agent is built with the OpenAI Agents SDK. The reward is a weighted sum of a format score and an F1 correctness score. Reward curves show stable gains during training and evaluation with the same base model.
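A weighted sum of a format score and token-level F1 can be computed as below. The 0.1/0.9 weights are illustrative assumptions; the paper's weights are not given here:

```python
def f1(pred, gold):
    """Token-level F1 between predicted and gold answer strings."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def rag_reward(answer, gold, format_ok, w_format=0.1, w_f1=0.9):
    """Weighted sum of a binary format score and F1 correctness."""
    return w_format * float(format_ok) + w_f1 * f1(answer, gold)

print(round(rag_reward("the eiffel tower", "eiffel tower", True), 3))  # 0.82
```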
For math question answering with tool use, the agent is implemented with AutoGen and calls a calculator tool. The dataset is Calc-X. The base model again is Llama 3.2 3B Instruct. Training improves the ability to invoke tools correctly and integrate results into final answers.
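For context, a calculator tool of the kind such an agent would call can be as small as a safe AST evaluator; this sketch handles the four basic operators and is my own stand-in, not the tool from the paper:

```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expr):
    """Evaluate +, -, *, / over numbers via the AST — never eval() on raw
    model output, which would be an injection risk."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calculator("3 * (4 + 5)"))  # 27
```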
Key Takeaways
- Agent Lightning uses Training Agent Disaggregation and a unified trace interface, so existing agents in LangChain, OpenAI Agents SDK, AutoGen, or CrewAI connect with near zero code change.
- LightningRL converts trajectories to transitions. It applies credit assignment to multi-step runs, then optimizes the policy with single-turn RL methods such as PPO or GRPO in standard trainers.
- Automatic Intermediate Rewarding (AIR) provides dense feedback. AIR turns system signals such as tool return status into intermediate rewards to reduce sparse-reward issues in long workflows.
- The research evaluates text-to-SQL on Spider, RAG on MuSiQue with a Wikipedia-scale index using BGE embeddings and cosine similarity, and math tool use on Calc-X, all with Llama 3.2 3B Instruct as the base model.
- The runtime records traces via OpenTelemetry, streams them to the training server, and exposes an OpenAI-compatible endpoint for updated models, enabling scalable rollouts without moving tools.
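The AIR idea in the takeaways above can be sketched in a few lines. The span fields and the bonus value here are assumptions for illustration, not the paper's scheme: system signals such as a tool's return status become small intermediate rewards, so long workflows are not trained on a single sparse terminal reward.

```python
def air_rewards(spans, success_bonus=0.1):
    """Turn tool return status into intermediate rewards: a small bonus for
    each successful tool call, zero otherwise."""
    return [
        success_bonus if s.get("type") == "tool" and s.get("status") == "ok" else 0.0
        for s in spans
    ]

trace = [
    {"type": "model"},
    {"type": "tool", "status": "ok"},
    {"type": "tool", "status": "error"},
]
print(air_rewards(trace))  # [0.0, 0.1, 0.0]
```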
Agent Lightning is a practical bridge between agent execution and reinforcement learning, not another framework rewrite. It formalizes agent runs as a Markov decision process, introduces LightningRL for credit assignment, and extracts transitions that fit into single-turn RL trainers. The Training Agent Disaggregation design separates a client that runs the agent from a server that trains and serves an OpenAI-compatible endpoint, so teams keep existing stacks. Automatic Intermediate Rewarding converts runtime signals into dense feedback, reducing sparse rewards in long workflows. Overall, Agent Lightning is a clean, minimal-integration path to make agents learn from their own traces.
Check out the Paper and Repo.
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.