In this tutorial, we implement a complete workflow for building, tracing, and evaluating an LLM pipeline using Opik. We build the system step by step, starting with a lightweight model, adding prompt-based planning, creating a dataset, and finally running automated evaluations. As we move through each snippet, we see how Opik helps us track every function span, visualize the pipeline's behavior, and measure output quality with clear, reproducible metrics. By the end, we have a fully instrumented QA system that we can extend, compare, and monitor with ease. Check out the FULL CODES here.
!pip install -q opik transformers accelerate torch

import torch
from transformers import pipeline
import textwrap

import opik
from opik import Opik, Prompt, track
from opik.evaluation import evaluate
from opik.evaluation.metrics import Equals, LevenshteinRatio

device = 0 if torch.cuda.is_available() else -1
print("Using device:", "cuda" if device == 0 else "cpu")

opik.configure()
PROJECT_NAME = "opik-hf-tutorial"
We set up our environment by installing the required libraries and initializing Opik. We load the core modules, detect the device, and configure our project so that every trace flows into the correct workspace. We lay the foundation for the rest of the tutorial. Check out the FULL CODES here.
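If we prefer not to use the interactive prompt, we can also point Opik at a workspace explicitly. The sketch below is optional; the `use_local` and `api_key` arguments to `opik.configure` and the `OPIK_PROJECT_NAME` environment variable are assumptions to verify against the SDK version you have installed.

# Optional: configure Opik explicitly instead of interactively (arguments assumed).
# opik.configure(use_local=True)                  # self-hosted Opik instance
# opik.configure(api_key="YOUR_OPIK_API_KEY")     # hosted Opik (placeholder key)

import os
os.environ["OPIK_PROJECT_NAME"] = PROJECT_NAME    # assumed default-project env var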
llm = pipeline(
    "text-generation",
    model="distilgpt2",
    device=device,
)

def hf_generate(prompt: str, max_new_tokens: int = 80) -> str:
    result = llm(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.3,
        pad_token_id=llm.tokenizer.eos_token_id,
    )[0]["generated_text"]
    return result[len(prompt):].strip()
We load a lightweight Hugging Face model and create a small helper function to generate text cleanly. We prepare the LLM to run locally without external APIs. This gives us a reliable and reproducible generation layer for the rest of the pipeline. Check out the FULL CODES here.
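Before wiring the model into the pipeline, we can sanity-check the helper with a one-off call. This is an optional sketch; the prompt text is arbitrary and the sampled output will differ between runs.

# Quick smoke test of the generation helper (output varies between runs).
print(hf_generate("Opik helps developers", max_new_tokens=30))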
plan_prompt = Prompt(
    name="hf_plan_prompt",
    prompt=textwrap.dedent("""
    You are an assistant that creates a plan to answer a question
    using ONLY the given context.
    Context:
    {{context}}
    Question:
    {{question}}
    Return exactly 3 bullet points as a plan.
    """).strip(),
)

answer_prompt = Prompt(
    name="hf_answer_prompt",
    prompt=textwrap.dedent("""
    You answer based only on the given context.
    Context:
    {{context}}
    Question:
    {{question}}
    Plan:
    {{plan}}
    Answer the question in 2–4 concise sentences.
    """).strip(),
)
We define two structured prompts using Opik's Prompt class. We control the planning phase and the answering phase through clear templates. This helps us maintain consistency and observe how structured prompting affects model behavior. Check out the FULL CODES here.
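To see exactly what the model will receive, we can render one of the templates with sample values. The `format` call is the same one the pipeline uses below; the context and question strings here are made up purely for illustration.

# Render the plan template with placeholder values to inspect the final prompt text.
sample_prompt = plan_prompt.format(
    context="Opik traces and evaluates LLM pipelines.",
    question="What does Opik do?",
)
print(sample_prompt)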
DOCS = {
    "overview": """
    Opik is an open-source platform for debugging, evaluating,
    and monitoring LLM and RAG applications. It provides tracing,
    datasets, experiments, and evaluation metrics.
    """,
    "tracing": """
    Tracing in Opik logs nested spans, LLM calls, token usage,
    feedback scores, and metadata to inspect complex LLM pipelines.
    """,
    "evaluation": """
    Opik evaluations are defined by datasets, evaluation tasks,
    scoring metrics, and experiments that aggregate scores,
    helping detect regressions or issues.
    """,
}

@track(project_name=PROJECT_NAME, type="tool", name="retrieve_context")
def retrieve_context(question: str) -> str:
    q = question.lower()
    if "trace" in q or "span" in q:
        return DOCS["tracing"]
    if "metric" in q or "dataset" in q or "evaluate" in q:
        return DOCS["evaluation"]
    return DOCS["overview"]
We construct a tiny document store and a retrieval function that Opik tracks as a tool. We let the pipeline pick context based on the user's question. This allows us to simulate a minimal RAG-style workflow without needing an actual vector database. Check out the FULL CODES here.
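We can quickly confirm that the keyword routing behaves as intended before running the full pipeline; the questions below are arbitrary examples.

# Quick check of the keyword-based routing (illustrative questions).
for q in ["How do spans work?", "Which metrics can I use?", "What is Opik?"]:
    print(q, "->", retrieve_context(q).strip()[:40], "...")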
@track(project_name=PROJECT_NAME, type="llm", name="plan_answer")
def plan_answer(context: str, question: str) -> str:
    rendered = plan_prompt.format(context=context, question=question)
    return hf_generate(rendered, max_new_tokens=80)

@track(project_name=PROJECT_NAME, type="llm", name="answer_from_plan")
def answer_from_plan(context: str, question: str, plan: str) -> str:
    rendered = answer_prompt.format(
        context=context,
        question=question,
        plan=plan,
    )
    return hf_generate(rendered, max_new_tokens=120)

@track(project_name=PROJECT_NAME, type="general", name="qa_pipeline")
def qa_pipeline(question: str) -> str:
    context = retrieve_context(question)
    plan = plan_answer(context, question)
    answer = answer_from_plan(context, question, plan)
    return answer

print("Sample answer:\n", qa_pipeline("What does Opik help developers do?"))
We bring together planning, reasoning, and answering in a fully traced LLM pipeline. We capture each step with Opik's decorators so that we can analyze the spans in the dashboard. By testing the pipeline, we confirm that all components integrate smoothly. Check out the FULL CODES here.
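In a short-lived notebook session it can also help to run a few more questions so the dashboard has several traces, and to flush pending data before moving on. The `opik.flush_tracker()` helper is assumed to be available in the SDK version you use.

# Generate a few more traces for the dashboard (questions are arbitrary).
for q in ["What does tracing in Opik log?", "What makes up an Opik evaluation?"]:
    print("-", q, "->", qa_pipeline(q)[:80])

# Flush buffered traces before the next step (helper assumed to be available).
opik.flush_tracker()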
client = Opik()

dataset = client.get_or_create_dataset(
    name="HF_Opik_QA_Dataset",
    description="Small QA dataset for HF + Opik tutorial",
)

dataset.insert([
    {
        "question": "What kind of platform is Opik?",
        "context": DOCS["overview"],
        "reference": "Opik is an open-source platform for debugging, evaluating and monitoring LLM and RAG applications.",
    },
    {
        "question": "What does tracing in Opik log?",
        "context": DOCS["tracing"],
        "reference": "Tracing logs nested spans, LLM calls, token usage, feedback scores, and metadata.",
    },
    {
        "question": "What are the components of an Opik evaluation?",
        "context": DOCS["evaluation"],
        "reference": "An Opik evaluation uses datasets, evaluation tasks, scoring metrics and experiments that aggregate scores.",
    },
])
We create and populate a dataset inside Opik that our evaluation will use. We insert a few question–answer pairs that cover different aspects of Opik. This dataset serves as the ground truth for our QA evaluation later. Check out the FULL CODES here.
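If we want to verify what actually landed in the dataset, we can read the items back. The `get_items()` method name is an assumption about the Opik SDK you have installed, so adjust it if needed.

# Read back the stored items to confirm the dataset contents (method name assumed).
for item in dataset.get_items():
    print(item["question"])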
equals_metric = Equals()
lev_metric = LevenshteinRatio()

def evaluation_task(item: dict) -> dict:
    output = qa_pipeline(item["question"])
    return {
        "output": output,
        "reference": item["reference"],
    }
We define the evaluation task and select two metrics, Equals and LevenshteinRatio, to measure model quality. We make sure the task produces outputs in the exact format the scoring metrics expect. This connects our pipeline to Opik's evaluation engine. Check out the FULL CODES here.
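To get a feel for the metrics before running a full experiment, we can score a single output/reference pair by hand. This sketch assumes the heuristic metrics expose a `score(output=..., reference=...)` method returning an object with a `.value` field.

# Manually score one pair to see how a metric behaves (API shape assumed).
demo_score = lev_metric.score(
    output="Opik traces and evaluates LLM applications.",
    reference="Opik is an open-source platform for debugging, evaluating and monitoring LLM and RAG applications.",
)
print("LevenshteinRatio:", demo_score.value)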
evaluation_result = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[equals_metric, lev_metric],
    experiment_name="HF_Opik_QA_Experiment",
    project_name=PROJECT_NAME,
    task_threads=1,
)

print("\nExperiment URL:", evaluation_result.experiment_url)
We run the evaluation experiment using Opik's evaluate function. We keep execution sequential for stability in Colab. Once it completes, we receive a link to view the experiment details inside the Opik dashboard. Check out the FULL CODES here.
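Beyond the dashboard link, the result object also carries per-item results we can inspect locally. The `test_results` and `score_results` attribute names below are assumptions about the SDK's result object, so verify them against your version.

# Peek at per-item scores in the notebook (attribute names assumed).
for test_result in evaluation_result.test_results[:2]:
    for score in test_result.score_results:
        print(score.name, "=", score.value)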
agg = evaluation_result.aggregate_evaluation_scores()
print("\nAggregated scores:")
for metric_name, stats in agg.aggregated_scores.items():
    print(metric_name, "=>", stats)
We aggregate and print the evaluation scores to understand how well our pipeline performs. We examine the metric results to see where outputs align with the references and where improvements are needed. This closes the loop on our fully instrumented LLM workflow.
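To make the aggregated numbers easier to read, here is a small standalone sketch of a Levenshtein-style similarity ratio. It illustrates the idea behind the metric, not Opik's internal implementation, so exact values may differ from the experiment's scores.

# Illustrative similarity: 1 - edit_distance / max_length (higher means closer strings).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    return 1 - edit_distance(a, b) / max(len(a), len(b), 1)

print(similarity("tracing logs nested spans", "tracing logs nested spans and metadata"))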
In conclusion, we set up a small but fully functional LLM evaluation ecosystem powered entirely by Opik and a local model. We observe how traces, prompts, datasets, and metrics come together to give us clear visibility into the model's reasoning process. As we finalize our evaluation and review the aggregated scores, we appreciate how Opik lets us iterate quickly, experiment systematically, and validate improvements in a structured and reliable way.
Check out the FULL CODES here and our GitHub Page for Tutorials, Codes and Notebooks.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
