Stanford Researchers Release MedAgentBench: A Real-World Benchmark for Healthcare AI Agents


A team of Stanford University researchers has released MedAgentBench, a new benchmark suite designed to evaluate large language model (LLM) agents in healthcare contexts. Unlike prior question-answering datasets, MedAgentBench provides a virtual electronic health record (EHR) environment where AI systems must interact, plan, and execute multi-step clinical tasks. This marks a significant shift from testing static reasoning to assessing agentic capabilities in live, tool-based medical workflows.

https://ai.nejm.org/doi/full/10.1056/AIdbp2500144

Why Do We Need Agentic Benchmarks in Healthcare?

Recent LLMs have moved beyond static chat-based interactions toward agentic behavior: interpreting high-level instructions, calling APIs, integrating patient data, and automating complex processes. In medicine, this evolution could help address staff shortages, documentation burden, and administrative inefficiencies.

While general-purpose agent benchmarks (e.g., AgentBench, AgentBoard, tau-bench) exist, healthcare has lacked a standardized benchmark that captures the complexity of medical data, FHIR interoperability, and longitudinal patient records. MedAgentBench fills this gap by offering a reproducible, clinically relevant evaluation framework.

What Does MedAgentBench Include?

How Are the Tasks Structured?

MedAgentBench consists of 300 tasks across 10 categories, written by licensed physicians. These tasks include patient information retrieval, lab result tracking, documentation, test ordering, referrals, and medication management. Tasks average 2–3 steps and mirror workflows encountered in inpatient and outpatient care.
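To make the task format concrete, here is a minimal sketch of what one such task record might look like. The field names and schema here are illustrative assumptions, not the benchmark's published format.

```python
# Hypothetical sketch of a MedAgentBench-style task record.
# All field names are illustrative, not the benchmark's actual schema.
from dataclasses import dataclass

@dataclass
class ClinicalTask:
    task_id: str        # e.g. "task_042"
    category: str       # one of the 10 clinician-defined categories
    instruction: str    # clinician-authored natural-language instruction
    patient_id: str     # which of the 100 patient profiles to load
    expected_steps: int # tasks average 2-3 FHIR interactions

example = ClinicalTask(
    task_id="task_042",
    category="lab_result_tracking",
    instruction="Retrieve the most recent potassium level for this patient "
                "and record it in the chart.",
    patient_id="S1234567",
    expected_steps=2,
)
```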

What Patient Data Supports the Benchmark?

The benchmark leverages 100 realistic patient profiles extracted from Stanford’s STARR data repository, comprising over 700,000 records including labs, vitals, diagnoses, procedures, and medication orders. Data was de-identified and jittered for privacy while preserving clinical validity.

How Is the Environment Built?

The environment is FHIR-compliant, supporting both retrieval (GET) and modification (POST) of EHR data. AI systems can simulate realistic clinical interactions such as documenting vitals or placing medication orders. This design makes the benchmark directly translatable to live EHR systems.
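As an illustration, the following Python sketch shows what these two operations could look like against a FHIR R4 server using the standard `requests` library. The base URL and patient identifier are placeholder assumptions, not values from the paper; the resource shapes follow the public FHIR R4 specification.

```python
# Minimal sketch of the two FHIR operations the environment supports.
import requests

FHIR_BASE = "http://localhost:8080/fhir"  # assumed local benchmark server
PATIENT_ID = "S1234567"                   # placeholder patient identifier

# Retrieval (GET): fetch recent laboratory observations for a patient.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory", "_count": 5},
)
resp.raise_for_status()
for entry in resp.json().get("entry", []):
    obs = entry["resource"]
    print(obs["code"]["text"], obs.get("valueQuantity", {}).get("value"))

# Modification (POST): place a simple medication order.
order = {
    "resourceType": "MedicationRequest",
    "status": "active",
    "intent": "order",
    "subject": {"reference": f"Patient/{PATIENT_ID}"},
    "medicationCodeableConcept": {"text": "metoprolol 25 mg oral tablet"},
}
resp = requests.post(f"{FHIR_BASE}/MedicationRequest", json=order)
resp.raise_for_status()
```

Because the environment speaks standard FHIR, the same calls would in principle run against any FHIR-compliant EHR sandbox, which is what makes the benchmark directly translatable to live systems.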

How Are Models Evaluated?

  • Metric: Task success rate (SR), measured with strict pass@1 to reflect real-world safety requirements.
  • Models Tested: 12 leading LLMs including GPT-4o, Claude 3.5 Sonnet, Gemini 2.0, DeepSeek-V3, Qwen2.5, and Llama 3.3.
  • Agent Orchestrator: A baseline orchestration setup with 9 FHIR functions, limited to 8 interaction rounds per task; a minimal sketch of this loop follows the list.
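The Python sketch below illustrates this protocol under stated assumptions: `agent_step`, `execute_call`, and `check_success` are hypothetical stand-ins for the model, the FHIR environment, and the task grader, not functions from the paper's released code.

```python
# Illustrative sketch of the evaluation loop (not the authors' actual harness):
# one attempt per task (strict pass@1), at most 8 interaction rounds,
# and any unfinished task counts as a failure.
from typing import Callable

MAX_ROUNDS = 8  # interaction-round cap per task

def run_task(
    task: dict,
    agent_step: Callable[[list], dict],           # model: history -> next action
    execute_call: Callable[[dict], str],          # environment: FHIR call -> result
    check_success: Callable[[dict, dict], bool],  # grader for final answers
) -> bool:
    history = [task["instruction"]]
    for _ in range(MAX_ROUNDS):
        action = agent_step(history)
        if action.get("final"):                   # agent declares it is done
            return check_success(task, action)    # single graded attempt: pass@1
        history.append(execute_call(action))      # otherwise run the FHIR call
    return False  # round budget exhausted counts as failure

def success_rate(results: list[bool]) -> float:
    """Task success rate (SR) as a percentage over all tasks."""
    return 100.0 * sum(results) / len(results)
```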

Which Models Performed Best?

  • Claude 3.5 Sonnet v2: Best overall with 69.67% success, particularly strong in retrieval tasks (85.33%).
  • GPT-4o: 64.0% success, showing balanced retrieval and action performance.
  • DeepSeek-V3: 62.67% success, leading among open-weight models.
  • Observation: Most models excelled at query tasks but struggled with action-based tasks requiring safe multi-step execution.

What Errors Did Models Make?

Two dominant failure patterns emerged:

  1. Instruction adherence failures: invalid API calls or incorrect JSON formatting.
  2. Output mismatch: providing full sentences when structured numerical values were required.

These errors highlight gaps in precision and reliability, both critical in clinical deployment.
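To see why such outputs fail, consider this minimal sketch of a strict grader (a hypothetical stand-in, not the benchmark's actual checker): it accepts only well-formed JSON carrying a numeric field, so an otherwise correct free-text sentence scores zero.

```python
# Sketch of a strict grader that rejects both failure modes above.
import json

def grade_numeric_answer(raw_output: str, expected: float, tol: float = 1e-6) -> bool:
    try:
        payload = json.loads(raw_output)  # failure mode 1: malformed JSON
        value = float(payload["value"])   # failure mode 2: wrong output structure
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return False
    return abs(value - expected) <= tol

print(grade_numeric_answer('{"value": 4.1}', 4.1))                      # True
print(grade_numeric_answer("The potassium level is 4.1 mmol/L.", 4.1))  # False
```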

Summary

MedAgentBench establishes the first large-scale benchmark for evaluating LLM agents in realistic EHR settings, pairing 300 clinician-authored tasks with a FHIR-compliant environment and 100 patient profiles. Results show strong potential but limited reliability (Claude 3.5 Sonnet v2 leads at 69.67%), highlighting the gap between query success and safe action execution. While constrained by single-institution data and an EHR-focused scope, MedAgentBench provides an open, reproducible framework to drive the next generation of trustworthy healthcare AI agents.


Check out the PAPER and Technical Blog. Feel free to check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.


Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.



