The `arize-phoenix` package includes a sub-package named `arize-phoenix-client`, which provides the full functionality for communicating with the Phoenix server. A legacy client, exported at the top level of the `arize-phoenix` package, is being deprecated. The sections below cover common migration patterns.
Automatic Migration Rules:
- `import phoenix as px` → `from phoenix.client import Client` (or `AsyncClient`)
- `px.Client()` → `Client()` (variable name: `px_client`)
- `client.query_spans(...)` → `client.spans.get_spans_dataframe(...)`
- `client.get_spans_dataframe()` → `client.spans.get_spans_dataframe()`
- `client.upload_dataset(...)` → `client.datasets.create_dataset(...)`
- `client.get_dataset(...)` → `client.datasets.get_dataset(...)`
- `px.Client().log_evaluations(SpanEvaluations(...))` → `px_client.spans.log_span_annotations_dataframe(...)`
- `px.Client().log_evaluations(DocumentEvaluations(...))` → `px_client.spans.log_document_annotations_dataframe(...)`
- `from phoenix.experiments import` → `from phoenix.client.experiments import`
- `from phoenix.trace.dsl import SpanQuery` → `from phoenix.client.types.spans import SpanQuery`
- `get_spans_dataframe(query="filter_string")` → `get_spans_dataframe(query=SpanQuery().where("filter_string"))`
- `project_name=` → `project_identifier=`
- `dataset_name=` → `name=`
- `eval_name=` → `annotation_name=`

Identify Legacy Patterns (RegEx-like matching):
- `import phoenix as px`
- `px\.Client\(\)`
- `\.query_spans\(`
- `\.get_spans_dataframe\(\)`
- `\.upload_dataset\(`
- `\.get_dataset\(`
- `\.log_evaluations\(SpanEvaluations\(`
- `\.log_evaluations\(DocumentEvaluations\(`
- `from phoenix\.experiments import`
- `from phoenix\.trace\.dsl import SpanQuery`
- `get_spans_dataframe\(query=".*"\)`
- `project_name\s*=`
- `dataset_name\s*=`
- `eval_name\s*=`

Validation Rules After Migration:
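A migrated file can be checked for leftover legacy patterns with the regexes above. A minimal sketch using a subset of those regexes; the helper name `find_legacy_patterns` is hypothetical, not part of any Phoenix tooling:

```python
import re

# Subset of the legacy regexes listed above.
LEGACY_PATTERNS = [
    r"import phoenix as px",
    r"px\.Client\(\)",
    r"\.query_spans\(",
    r"\.upload_dataset\(",
    r"\.log_evaluations\(SpanEvaluations\(",
    r"\.log_evaluations\(DocumentEvaluations\(",
    r"from phoenix\.experiments import",
    r"from phoenix\.trace\.dsl import SpanQuery",
    r"project_name\s*=",
    r"dataset_name\s*=",
    r"eval_name\s*=",
]

def find_legacy_patterns(source: str) -> list[tuple[int, str]]:
    """Return a (line_number, matched_pattern) pair for every legacy match."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in LEGACY_PATTERNS:
            if re.search(pattern, line):
                hits.append((lineno, pattern))
    return hits
```

An empty result means none of the scanned legacy patterns remain; note that some patterns (e.g. `get_spans_dataframe`) are omitted here because they also appear in migrated code.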
Ensure:
- `from phoenix.client import ...` at top
- `px_client` as variable name (preferred)
- `annotator_kind="LLM"` for span evaluations
- `AsyncClient` method calls with `await`

Remove:
- `import phoenix as px` (unless used for other purposes)
- `SpanEvaluations` or `DocumentEvaluations` imports

Client Type Selection:
```
IF file extension == ".ipynb":
    USE AsyncClient
    ADD await before method calls
ELIF file extension == ".py":
    USE Client (synchronous)
    NO await needed
```
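The selection rule above can be sketched as a small helper; `select_client_type` is a hypothetical function, not part of the Phoenix API:

```python
from pathlib import Path

def select_client_type(filename: str) -> str:
    """Pick the client class for a file being migrated, per the rule above."""
    ext = Path(filename).suffix
    if ext == ".ipynb":
        return "AsyncClient"  # notebook: method calls need `await`
    if ext == ".py":
        return "Client"       # script: synchronous, no `await`
    raise ValueError(f"unsupported file type: {ext!r}")
```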
Evaluations Migration:
```
IF found "log_evaluations(SpanEvaluations(":
    MIGRATE to "spans.log_span_annotations_dataframe("
    ADD "annotator_kind='LLM'," parameter
    CHANGE "eval_name=" to "annotation_name="

IF found "log_evaluations(DocumentEvaluations(":
    MIGRATE to "spans.log_document_annotations_dataframe("
    ADD "annotator_kind='LLM'," parameter
    CHANGE "eval_name=" to "annotation_name="
```
Import Consolidation:
```
IF file contains multiple phoenix.client imports:
    CONSOLIDATE to single line: "from phoenix.client import Client, AsyncClient"

IF file uses both Client and other legacy phoenix features:
    KEEP both imports:
    - "import phoenix as px"              (for legacy features like launch_app)
    - "from phoenix.client import Client" (for new client)
```
❌ Wrong Patterns:
```python
# DON'T mix old and new imports incorrectly
import phoenix as px
from phoenix.client import Client

client = px.Client()  # Should use Client()

# DON'T forget await with AsyncClient
px_client = AsyncClient()
px_client.spans.get_spans_dataframe()  # Missing await

# DON'T use wrong resource path for annotations
px_client.annotations.log_span_annotations_dataframe(  # Wrong! Should be spans.log_span_annotations_dataframe
    dataframe=df,
    annotation_name="test"
)

# DON'T forget required annotator_kind
px_client.spans.log_span_annotations_dataframe(
    dataframe=df,
    annotation_name="test"  # Missing annotator_kind="LLM"
)
```
```python
# Complete context - typical legacy file header
import phoenix as px

# Legacy client instantiation
client = px.Client()  # or px_client = px.Client()
```
Synchronous Client (for .py files):
```python
# Complete context - new file header
from phoenix.client import Client

# New client instantiation
px_client = Client()
```
Asynchronous Client (for .ipynb notebooks):
```python
# Complete context - new notebook cell
from phoenix.client import AsyncClient

# New async client instantiation
px_client = AsyncClient()
```
Key changes:
- `import phoenix as px` → `from phoenix.client import Client`/`AsyncClient`
- `px.Client()` → `Client()` or `AsyncClient()`
- `px_client` (instead of generic `client`)

Legacy pattern:

```python
import phoenix as px

# Querying spans
spans_df = px.Client().query_spans(query, project_name="my-project")

# Getting spans dataframe
spans_df = px.Client().get_spans_dataframe()
```
Synchronous:
```python
from phoenix.client import Client

px_client = Client()
spans_df = px_client.spans.get_spans_dataframe(query=query, project_identifier="my-project")
```
Asynchronous:
```python
from phoenix.client import AsyncClient

px_client = AsyncClient()
spans_df = await px_client.spans.get_spans_dataframe(query=query, project_identifier="my-project")
```
Key changes:
- `query_spans()` → `spans.get_spans_dataframe()`
- `get_spans_dataframe()` → `spans.get_spans_dataframe()`
- `project_name` → `project_identifier`
- All span operations now live under `client.spans.*`

Legacy pattern:

```python
from phoenix.experiments import run_experiment, evaluate_experiment
```
New pattern:

```python
from phoenix.client.experiments import run_experiment, evaluate_experiment
```
Key changes:
- `phoenix.experiments` → `phoenix.client.experiments`

Legacy pattern:

```python
from phoenix.trace.dsl import SpanQuery

# Old way with string query filters
spans_df = px.Client().get_spans_dataframe(query="span_kind == 'LLM'")

# Or with SpanQuery object (older import)
query = SpanQuery().where("span_kind == 'LLM'").select(input="input.value")
spans_df = px.Client().query_spans(query)
```
New pattern:

```python
from phoenix.client import AsyncClient
from phoenix.client.types.spans import SpanQuery

px_client = AsyncClient()

# New way: SpanQuery object only (no string queries)
query = SpanQuery().where("span_kind == 'LLM'")
spans_df = await px_client.spans.get_spans_dataframe(query=query)
```
Key changes:
- `phoenix.trace.dsl` → `phoenix.client.types.spans`

Legacy pattern:

```python
import phoenix as px

dataset = px.Client().upload_dataset(
    dataframe=df,
    dataset_name="my-dataset",
    input_keys=["question"],
    output_keys=["answer"]
)

dataset = px.Client().get_dataset(name="my-dataset")
```
New pattern:

```python
from phoenix.client import Client

px_client = Client()

dataset = px_client.datasets.create_dataset(
    dataframe=df,
    name="my-dataset",
    input_keys=["question"],
    output_keys=["answer"]
)

dataset = px_client.datasets.get_dataset(dataset="my-dataset")
```
Key changes:
- `upload_dataset()` → `datasets.create_dataset()`
- `get_dataset()` → `datasets.get_dataset()`
- `dataset_name` → `name`
- `name` parameter → `dataset` parameter (for `get_dataset`)

Legacy Pattern (Complete Example):
```python
# Complete legacy file context
import phoenix as px
from phoenix.trace import SpanEvaluations
import pandas as pd

# Some evaluation dataframes (qa_correctness_df added so the example is self-contained)
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})
qa_correctness_df = pd.DataFrame({"score": [0.7, 0.95], "label": ["correct", "correct"]})

# Legacy single evaluation
px.Client().log_evaluations(
    SpanEvaluations(
        dataframe=relevance_df,
        eval_name="Recommendation Relevance",
    ),
)

# Legacy multiple evaluations (single call)
px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_df),
)
```
New Pattern (Synchronous - for .py files):
```python
# Complete new file context
from phoenix.client import Client
import pandas as pd

# Same evaluation dataframes
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})
qa_correctness_df = pd.DataFrame({"score": [0.7, 0.95], "label": ["correct", "correct"]})

# New single evaluation
px_client = Client()
px_client.spans.log_span_annotations_dataframe(
    dataframe=relevance_df,
    annotation_name="Recommendation Relevance",
    annotator_kind="LLM",
)

# New multiple evaluations (separate calls)
px_client.spans.log_span_annotations_dataframe(
    dataframe=hallucination_df,
    annotation_name="Hallucination",
    annotator_kind="LLM",
)
px_client.spans.log_span_annotations_dataframe(
    dataframe=qa_correctness_df,
    annotation_name="QA Correctness",
    annotator_kind="LLM",
)
```
New Pattern (Asynchronous - for .ipynb notebooks):
```python
# Complete new notebook cell context
from phoenix.client import AsyncClient
import pandas as pd

# Same evaluation dataframes
relevance_df = pd.DataFrame({"score": [0.8, 0.9], "label": ["good", "excellent"]})
hallucination_df = pd.DataFrame({"score": [0.1, 0.2], "label": ["low", "low"]})
qa_correctness_df = pd.DataFrame({"score": [0.7, 0.95], "label": ["correct", "correct"]})

# New async single evaluation
px_client = AsyncClient()
await px_client.spans.log_span_annotations_dataframe(
    dataframe=relevance_df,
    annotation_name="Recommendation Relevance",
    annotator_kind="LLM",
)

# New async multiple evaluations (separate calls with await)
await px_client.spans.log_span_annotations_dataframe(
    dataframe=hallucination_df,
    annotation_name="Hallucination",
    annotator_kind="LLM",
)
await px_client.spans.log_span_annotations_dataframe(
    dataframe=qa_correctness_df,
    annotation_name="QA Correctness",
    annotator_kind="LLM",
)
```
Key changes:
- `log_evaluations(SpanEvaluations(...))` → `spans.log_span_annotations_dataframe(...)`
- `eval_name` → `annotation_name`
- Add required `annotator_kind` parameter
- Remove `SpanEvaluations` import

Legacy Pattern (Complete Example):
```python
# Complete legacy file context
import phoenix as px
from phoenix.trace import DocumentEvaluations
import pandas as pd

# Document evaluation dataframes with required columns: span_id, document_position
# (document_accuracy_df added so the example is self-contained)
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})
document_accuracy_df = pd.DataFrame({
    "span_id": ["span_1", "span_2"],
    "document_position": [0, 0],
    "score": [1, 0],
    "label": ["accurate", "inaccurate"],
    "explanation": ["checks out", "contradicts the source"]
})

# Legacy single document evaluation
px.Client().log_evaluations(
    DocumentEvaluations(
        dataframe=document_relevance_df,
        eval_name="Relevance",
    ),
)

# Legacy multiple evaluations (single call)
px.Client().log_evaluations(
    DocumentEvaluations(eval_name="Relevance", dataframe=document_relevance_df),
    DocumentEvaluations(eval_name="Accuracy", dataframe=document_accuracy_df),
)
```
New Pattern (Synchronous - for .py files):
```python
# Complete new file context
from phoenix.client import Client
import pandas as pd

# Same document evaluation dataframes
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})
document_accuracy_df = pd.DataFrame({
    "span_id": ["span_1", "span_2"],
    "document_position": [0, 0],
    "score": [1, 0],
    "label": ["accurate", "inaccurate"],
    "explanation": ["checks out", "contradicts the source"]
})

# New single document evaluation
px_client = Client()
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)

# New multiple evaluations (separate calls)
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)
px_client.spans.log_document_annotations_dataframe(
    dataframe=document_accuracy_df,
    annotation_name="Accuracy",
    annotator_kind="LLM",
)
```
New Pattern (Asynchronous - for .ipynb notebooks):
```python
# Complete new notebook cell context
from phoenix.client import AsyncClient
import pandas as pd

# Same document evaluation dataframes
document_relevance_df = pd.DataFrame({
    "span_id": ["span_1", "span_1", "span_2"],
    "document_position": [0, 1, 0],
    "score": [1, 1, 0],
    "label": ["relevant", "relevant", "irrelevant"],
    "explanation": ["it's apropos", "it's germane", "it's rubbish"]
})
document_accuracy_df = pd.DataFrame({
    "span_id": ["span_1", "span_2"],
    "document_position": [0, 0],
    "score": [1, 0],
    "label": ["accurate", "inaccurate"],
    "explanation": ["checks out", "contradicts the source"]
})

# New async single document evaluation
px_client = AsyncClient()
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)

# New async multiple evaluations (separate calls with await)
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_relevance_df,
    annotation_name="Relevance",
    annotator_kind="LLM",
)
await px_client.spans.log_document_annotations_dataframe(
    dataframe=document_accuracy_df,
    annotation_name="Accuracy",
    annotator_kind="LLM",
)
```
Key changes:
- `log_evaluations(DocumentEvaluations(...))` → `spans.log_document_annotations_dataframe(...)`
- `eval_name` → `annotation_name`
- Add required `annotator_kind` parameter
- Keep required `span_id` and `document_position` columns
- Remove `DocumentEvaluations` import

Evaluations Parameter Changes:
| Legacy Parameter | New Parameter | Notes |
|---|---|---|
| `eval_name` | `annotation_name` | Name of the evaluation |
| N/A | `annotator_kind` | Required: typically `"LLM"` |
| `dataframe` | `dataframe` | Same parameter name |
DocumentEvaluations DataFrame Requirements:
| Column | Description |
|---|---|
| `span_id` | ID of the span containing the documents |
| `document_position` | 0-based index of document within the span |
| `score` | Optional: numeric evaluation score |
| `label` | Optional: categorical evaluation label |
| `explanation` | Optional: text explanation of the evaluation |
Complete API Transformation Table:
| Legacy Pattern | New Pattern | Key Changes |
|---|---|---|
| `client.query_spans(query, project_name=...)` | `client.spans.get_spans_dataframe(query=query, project_identifier=...)` | Resource path + parameter name |
| `client.get_spans_dataframe()` | `client.spans.get_spans_dataframe()` | Resource path only |
| `client.upload_dataset(dataset_name=...)` | `client.datasets.create_dataset(name=...)` | Resource path + parameter name |
| `client.get_dataset(name=...)` | `client.datasets.get_dataset(dataset=...)` | Resource path + parameter name |
| `client.log_evaluations(SpanEvaluations(eval_name=...))` | `client.spans.log_span_annotations_dataframe(annotation_name=..., annotator_kind="LLM")` | Resource path + parameters + required field |
| `client.log_evaluations(DocumentEvaluations(eval_name=...))` | `client.spans.log_document_annotations_dataframe(annotation_name=..., annotator_kind="LLM")` | Resource path + parameters + required field |
| `from phoenix.experiments import ...` | `from phoenix.client.experiments import ...` | Import path only |
| `get_spans_dataframe(query="...")` | `get_spans_dataframe(query=SpanQuery().where("..."))` | String queries → SpanQuery objects |
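The parameter renames in the table above can be applied mechanically to plain source text. A minimal sketch; `rename_parameters` is a hypothetical regex-based helper, not Phoenix tooling, and a real migration would prefer an AST-aware rewrite:

```python
import re

# Legacy keyword argument → new keyword argument (from the table above)
PARAM_RENAMES = {
    "project_name": "project_identifier",
    "dataset_name": "name",
    "eval_name": "annotation_name",
}

def rename_parameters(source: str) -> str:
    """Rewrite legacy keyword arguments to their new names."""
    for old, new in PARAM_RENAMES.items():
        # \b avoids partial identifier matches; \s*=\s* normalizes `old =`
        source = re.sub(rf"\b{old}\s*=\s*", f"{new}=", source)
    return source
```

Because `project_name` is rewritten before the `dataset_name`/`eval_name` passes run, the substituted `project_identifier=` text cannot be re-matched by a later pattern.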
Remove unused imports after migration:
```python
# Remove these after migration:
from phoenix.trace import SpanEvaluations      # ❌ Remove
from phoenix.trace import DocumentEvaluations  # ❌ Remove
from phoenix.trace.dsl import SpanQuery        # ❌ Remove
import phoenix as px                           # ❌ Remove if only used for Client()

# Keep these:
import phoenix as px  # ✅ Keep if used for other functionality (launch_app, etc.)

# Replace with new imports:
from phoenix.client import Client                 # ✅ New client import
from phoenix.client import AsyncClient            # ✅ New async client import
from phoenix.client.types.spans import SpanQuery  # ✅ New SpanQuery import
```