Scores API
You can submit scores and annotations to Latitude programmatically, enabling custom quality signals from your own code, user feedback systems, or external evaluation pipelines.

Custom Scores
Submit custom scores through the scores endpoint:

| Field | Type | Required | Description |
|---|---|---|---|
| traceId | string | Yes | The trace to attach the score to |
| value | number | Yes | Normalized score between 0 and 1 |
| passed | boolean | Yes | Pass/fail verdict |
| feedback | string | Yes | Human-readable explanation of the verdict |
| source_id | string | Yes | Your custom source identifier (e.g., "user-satisfaction", "task-completion") |
| spanId | string | No | Attach to a specific span within the trace |
| sessionId | string | No | Associate with a session |
| metadata | object | No | Arbitrary JSON metadata |
Example
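A minimal sketch in Python of assembling a score body from the fields above. Only payload construction is shown; the field names come from the table, while the validation rule and helper name are illustrative, and actually sending the payload (an authenticated JSON POST to the scores endpoint) is assumed rather than documented here:

```python
import json


def build_score_payload(trace_id, value, passed, feedback, source_id,
                        span_id=None, session_id=None, metadata=None):
    """Assemble a custom score body matching the fields in the table above."""
    if not 0 <= value <= 1:
        raise ValueError("value must be a normalized score between 0 and 1")
    payload = {
        "traceId": trace_id,
        "value": value,
        "passed": passed,
        "feedback": feedback,
        "source_id": source_id,
    }
    # Optional fields are omitted entirely when not provided.
    if span_id is not None:
        payload["spanId"] = span_id
    if session_id is not None:
        payload["sessionId"] = session_id
    if metadata is not None:
        payload["metadata"] = metadata
    # POST json.dumps(payload) to the scores endpoint with your API key.
    return payload


payload = build_score_payload(
    trace_id="trace-123",
    value=0.8,
    passed=True,
    feedback="User rated the answer 4 out of 5 stars",
    source_id="user-satisfaction",
)
print(json.dumps(payload, indent=2))
```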
Use Cases
- User satisfaction ratings: Convert thumbs up/down or star ratings into scores
- Task completion metrics: Track whether the agent’s output led to a successful outcome
- Business KPIs: Conversion rates, resolution rates, or other downstream metrics
- External validation: Results from your own evaluation pipeline or third-party tools
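The first bullet, for example, requires mapping raw feedback onto the normalized 0–1 `value` field. A small sketch of one way to do that; the particular mappings (linear star scaling, thumbs as 0/1) are assumptions, not a prescribed convention:

```python
def stars_to_value(stars, max_stars=5):
    """Map a 1..max_stars star rating linearly onto the required 0-1 range."""
    if not 1 <= stars <= max_stars:
        raise ValueError("rating out of range")
    return (stars - 1) / (max_stars - 1)


def thumbs_to_score(thumb_up):
    """Thumbs up/down collapses to both a value and a pass/fail verdict."""
    return {"value": 1.0 if thumb_up else 0.0, "passed": thumb_up}
```

The resulting `value` and `passed` fields can then be submitted as a custom score with a source identifier such as "user-satisfaction".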
Annotations API
Submit human annotations through the dedicated annotations endpoint:

| Field | Type | Required | Description |
|---|---|---|---|
| traceId | string | Yes | The trace being annotated |
| value | number | Yes | Normalized score between 0 and 1 |
| passed | boolean | Yes | Pass/fail verdict |
| feedback | string | Yes | The reviewer’s feedback text |
| issueId | string | No | Link to an existing issue |
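Assembling an annotation body follows the same pattern. A sketch using the fields above; as before, the helper and validation are illustrative, and the transport (an authenticated JSON POST to the annotations endpoint) is assumed:

```python
def build_annotation_payload(trace_id, value, passed, feedback, issue_id=None):
    """Assemble an annotation body matching the fields in the table above."""
    if not 0 <= value <= 1:
        raise ValueError("value must be a normalized score between 0 and 1")
    payload = {
        "traceId": trace_id,
        "value": value,
        "passed": passed,
        "feedback": feedback,
    }
    if issue_id is not None:
        payload["issueId"] = issue_id  # link the annotation to an existing issue
    return payload
```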
How Scores Feed the System
Once submitted, custom scores and annotations flow through the same reliability pipeline as internally generated scores:

- Issue discovery: Failed scores automatically enter the discovery pipeline, where Latitude clusters similar failures into issues
- Analytics: Finalized scores appear in time-series dashboards
- Alignment: Annotation scores are compared against evaluation scores for the same traces to compute alignment metrics
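The alignment step in the last bullet can be illustrated with a toy calculation: the fraction of shared traces where the human verdict and the automated evaluation verdict agree. The exact metric Latitude computes is not specified here, so this is a simplified, assumed version:

```python
def agreement_rate(annotation_verdicts, evaluation_verdicts):
    """Fraction of traces where the human annotation and the automated
    evaluation reached the same pass/fail verdict.

    Both arguments map trace IDs to boolean verdicts; only traces present
    in both are compared. Returns None when there is no overlap.
    """
    shared = annotation_verdicts.keys() & evaluation_verdicts.keys()
    if not shared:
        return None
    matches = sum(annotation_verdicts[t] == evaluation_verdicts[t] for t in shared)
    return matches / len(shared)
```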
Next Steps
- Scores Overview: How the score model works
- Annotations: How the annotation workflow works
- Issues: How failed scores become trackable issues