Developer Quick Start
This guide walks you through connecting an existing AI agent to Latitude. By the end, you’ll have live traces flowing into your project and understand how scores and annotations work.
Prerequisites
- A Latitude account (sign up at latitude.so)
- An existing AI-powered application using a supported provider or framework
Step 1: Create a Project
After signing in, create a new project from the dashboard. Projects are the main boundary for all reliability features: issues, evaluations, annotation queues, and simulations are all scoped to a project. Give your project a descriptive name that matches the agent or feature you’re monitoring.
Step 2: Connect Telemetry
Latitude captures your agent’s interactions through OpenTelemetry-compatible telemetry. See the Telemetry section for detailed setup instructions for your specific provider or framework. Once telemetry is connected, every LLM call, tool invocation, and agent step your application makes will appear as spans in Latitude. Related spans are grouped into traces (single interactions) and sessions (multi-turn conversations).
Step 3: View Your Traces
Navigate to your project in the Latitude dashboard. You should see traces appearing in real time as your agent handles requests. Each trace shows:
- The full conversation between user and agent
- Individual spans (LLM calls, tool calls, etc.)
- Timing, token usage, and cost
- Any scores attached to the trace
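The grouping described in Step 2 — spans into traces, traces into sessions — can be sketched with plain data structures. This is an illustration only: the field names (`trace_id`, `session_id`) follow common OpenTelemetry conventions and are assumptions here, not Latitude’s actual span schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Span:
    trace_id: str      # groups spans belonging to one interaction
    session_id: str    # groups traces belonging to one multi-turn conversation
    name: str          # e.g. an LLM call or a tool invocation
    tokens: int
    cost_usd: float

def group_spans(spans):
    """Group spans into traces, then traces into sessions."""
    traces = defaultdict(list)
    for span in spans:
        traces[span.trace_id].append(span)
    sessions = defaultdict(dict)
    for trace_id, trace_spans in traces.items():
        sessions[trace_spans[0].session_id][trace_id] = trace_spans
    return sessions

spans = [
    Span("t1", "s1", "llm.call", 120, 0.0004),
    Span("t1", "s1", "tool.invoke", 0, 0.0),
    Span("t2", "s1", "llm.call", 250, 0.0009),
]
sessions = group_spans(spans)
# Session "s1" holds two traces; trace "t1" holds two spans.
```

The dashboard view in Step 3 is essentially a rendering of this hierarchy, with timing, token usage, and cost aggregated per trace.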
Step 4: Explore Scores
Scores are the fundamental unit of measurement in Latitude. Every score is a normalized value between 0 and 1 with a pass/fail verdict and human-readable feedback. Scores come from three sources:
- Evaluations: automated scripts that run on your traces
- Annotations: human review verdicts from your team
- Custom: scores you submit from your own code via the API
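The score shape described above can be sketched as a small value object. The field names and validation rules here are illustrative assumptions for this guide, not Latitude’s actual API; consult the API reference before submitting custom scores from your own code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Score:
    value: float    # normalized to the 0..1 range
    passed: bool    # pass/fail verdict
    feedback: str   # human-readable explanation
    source: str     # "evaluation", "annotation", or "custom"

    def __post_init__(self):
        if not 0.0 <= self.value <= 1.0:
            raise ValueError("score value must be between 0 and 1")
        if self.source not in {"evaluation", "annotation", "custom"}:
            raise ValueError(f"unknown score source: {self.source}")

# A custom score as you might construct it in your own code:
s = Score(value=0.8, passed=True,
          feedback="Answer cited the correct document", source="custom")
```

Keeping every score in the same normalized shape, regardless of source, is what lets evaluations, annotations, and custom scores be compared side by side.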
Step 5: Review an Annotation Queue
Open the Annotation Queues page in your project. Each queue is a focused review backlog. Click into a queue to enter the review screen:
- Read the conversation in the center panel
- Create an annotation: mark the interaction as positive (thumbs up) or negative (thumbs down) and add feedback
- Optionally link the annotation to an existing issue, or leave issue assignment automatic
- Mark the item as fully annotated and move to the next
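The review loop above can be sketched as a simple lifecycle on a queue item. The states and field names here are assumptions made for illustration; Latitude’s real queue model may differ.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueueItem:
    trace_id: str
    verdict: Optional[str] = None   # "positive" or "negative"
    feedback: str = ""
    issue_id: Optional[str] = None  # optional link to an existing issue
    done: bool = False

    def annotate(self, verdict, feedback, issue_id=None):
        if verdict not in {"positive", "negative"}:
            raise ValueError("verdict must be 'positive' or 'negative'")
        self.verdict = verdict
        self.feedback = feedback
        self.issue_id = issue_id    # None leaves issue assignment automatic

    def complete(self):
        if self.verdict is None:
            raise RuntimeError("annotate the item before marking it done")
        self.done = True

item = QueueItem("trace-123")
item.annotate("negative", "Agent invented a refund policy")
item.complete()
```

Requiring a verdict before completion mirrors the review screen: an item only leaves the backlog once a reviewer has actually weighed in.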
What’s Next
- Observability: Understand spans, traces, and sessions in depth
- Scores: Learn how the scoring system works
- Annotations: Build human review workflows
- Evaluations: Set up automated monitoring
- Issues: Understand how failure patterns are discovered
- Simulations: Test your agent before shipping