Work in progress: Telemetry documentation is still being updated. Integration steps and APIs may be incomplete or out of date. Verify against your SDK versions and check back for revisions.
Latitude Telemetry instruments your AI application and sends traces to Latitude. Built entirely on OpenTelemetry, it works alongside your existing observability stack (Datadog, Sentry, Jaeger, etc.) without conflicts or vendor lock-in. Once connected, every LLM execution becomes a trace in Latitude that you can inspect in the Traces view, enrich with scores and annotations, and evaluate with Evaluations.

Quick Start

One function sets up everything: auto-instrumentation, the Latitude exporter, and async context propagation:

npm install @latitude-data/telemetry

import { initLatitude } from "@latitude-data/telemetry"
import OpenAI from "openai"

const latitude = initLatitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: ["openai"],
})

await latitude.ready

const openai = new OpenAI()
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
})

await latitude.shutdown()

That’s it. Your LLM calls now appear as traces in Latitude.

Adding Context with capture()

Auto-instrumentation traces LLM calls without any extra code. Use capture() when you want to attach business context such as user IDs, session IDs, tags, or metadata to group and filter traces in Latitude.

import OpenAI from "openai"
import { initLatitude, capture } from "@latitude-data/telemetry"

const latitude = initLatitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: ["openai"],
})

await latitude.ready

const openai = new OpenAI()
await capture(
  "handle-user-request",
  async () => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userMessage }],
    })
    return response.choices[0].message.content
  },
  {
    userId: "user_123",
    sessionId: "session_abc",
    tags: ["production", "v2-agent"],
    metadata: { requestId: "req-xyz" },
  },
)

await latitude.shutdown()

capture() does not create spans. It only attaches context to spans created by auto-instrumentation. Wrap the request or agent entrypoint once; you don’t need to wrap every internal step.
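Under the hood, this works through async context propagation: context set at the entrypoint is visible to any instrumented code that runs inside the callback, however deeply nested, without being passed as an argument. Conceptually it resembles Node's AsyncLocalStorage, which OpenTelemetry's Node context manager builds on. A toy sketch of the mechanism (not the SDK's actual implementation; captureSketch and fakeLlmCall are illustrative names):

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

// Holds the ambient context for the current async call tree.
const store = new AsyncLocalStorage<Record<string, unknown>>()

// Toy stand-in for capture(): runs fn with ctx visible to everything inside it.
function captureSketch<T>(
  ctx: Record<string, unknown>,
  fn: () => Promise<T>,
): Promise<T> {
  return store.run(ctx, fn)
}

// Toy stand-in for auto-instrumentation: a nested "LLM call" reads the
// ambient context directly, the way a span processor would when attaching
// attributes to the span it is about to export.
async function fakeLlmCall(): Promise<Record<string, unknown> | undefined> {
  await new Promise((resolve) => setTimeout(resolve, 10))
  return store.getStore()
}

captureSketch({ userId: "user_123" }, fakeLlmCall).then((attrs) => {
  console.log(attrs) // { userId: 'user_123' }
})
```

The context survives across await boundaries, which is why one capture() at the request entrypoint is enough to tag every LLM call made while handling that request.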

Streaming

When streaming responses, consume the stream inside the capture() callback so the span duration covers the full operation and child spans nest correctly:

// Inside an HTTP handler: `openai`, `input`, and `res` come from surrounding code.
await capture("stream-reply", async () => {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: input }],
    stream: true,
  })

  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content
    if (content) res.write(content)
  }
  res.end()
})
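The reason this matters: a span ends when the callback returns. If the callback returns the unconsumed stream, it finishes almost immediately and the recorded duration misses the streaming time entirely. A toy illustration of that timing difference (timedCapture and fakeStream are stand-ins for a span and a token stream, not SDK APIs):

```typescript
// Stand-in for a span: its "duration" is simply how long fn takes to resolve.
async function timedCapture<T>(
  fn: () => Promise<T>,
): Promise<{ result: T; ms: number }> {
  const start = Date.now()
  const result = await fn()
  return { result, ms: Date.now() - start }
}

// A fake token stream: three chunks, roughly 20 ms apart.
async function* fakeStream() {
  for (const chunk of ["Hel", "lo", "!"]) {
    await new Promise((resolve) => setTimeout(resolve, 20))
    yield chunk
  }
}

async function demo() {
  // Consumed inside the callback: duration covers the whole stream (~60 ms).
  const inside = await timedCapture(async () => {
    let text = ""
    for await (const c of fakeStream()) text += c
    return text
  })

  // Stream returned without being consumed: the callback resolves at once,
  // so the "span" under-reports the operation (~0 ms).
  const outside = await timedCapture(async () => fakeStream())

  console.log(inside.ms > outside.ms) // true
}
demo()
```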

How It Fits Into Your Stack

Latitude Telemetry is built on OpenTelemetry standards:
  1. Auto-instrumentation patches your LLM SDK (OpenAI, Anthropic, etc.) to emit spans for every call.
  2. LatitudeSpanProcessor filters for LLM-relevant spans (gen_ai.*, ai.*, openinference.* attributes) and exports them to Latitude via OTLP.
  3. capture() uses OpenTelemetry’s native context.with() to attach Latitude-specific attributes (user, session, tags) to spans within its scope.
If you already run OpenTelemetry for other backends, you can add LatitudeSpanProcessor alongside your existing processors. See the TypeScript SDK or Python SDK reference for advanced setup.
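The filtering in step 2 can be sketched in a few lines. This is a simplified stand-in for LatitudeSpanProcessor, not its real implementation: it forwards only spans carrying LLM-relevant attribute prefixes, which is what lets it coexist with processors exporting everything else to other backends.

```typescript
// Minimal shape of a finished span for this sketch.
interface FinishedSpan {
  name: string
  attributes: Record<string, unknown>
}

// Attribute namespaces the docs list as LLM-relevant.
const LLM_ATTRIBUTE_PREFIXES = ["gen_ai.", "ai.", "openinference."]

function isLlmSpan(span: FinishedSpan): boolean {
  return Object.keys(span.attributes).some((key) =>
    LLM_ATTRIBUTE_PREFIXES.some((prefix) => key.startsWith(prefix)),
  )
}

// A filtering processor only hands matching spans to its exporter; other
// processors in the pipeline still see every span untouched.
function makeFilteringProcessor(exportSpan: (span: FinishedSpan) => void) {
  return {
    onEnd(span: FinishedSpan) {
      if (isLlmSpan(span)) exportSpan(span)
    },
  }
}

// Usage: only the LLM span reaches the exporter.
const exported: string[] = []
const processor = makeFilteringProcessor((span) => exported.push(span.name))
processor.onEnd({ name: "chat", attributes: { "gen_ai.system": "openai" } })
processor.onEnd({ name: "db-query", attributes: { "db.system": "postgres" } })
console.log(exported) // [ 'chat' ]
```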

Supported Integrations

Providers

| Provider | Instrumentation | Package (TS) | Package (Python) |
| --- | --- | --- | --- |
| OpenAI | "openai" | openai | openai |
| Anthropic | "anthropic" | @anthropic-ai/sdk | anthropic |
| Amazon Bedrock | "bedrock" | @aws-sdk/client-bedrock-runtime | boto3 |
| Cohere | "cohere" | cohere-ai | cohere |
| Together AI | "togetherai" | together-ai | together |
| Vertex AI | "vertexai" | @google-cloud/vertexai | google-cloud-aiplatform |
| Google AI Platform | "aiplatform" | @google-cloud/aiplatform | google-cloud-aiplatform |
| Azure OpenAI | "openai" | openai | openai |

Frameworks

| Framework | Instrumentation | Package (TS) | Package (Python) |
| --- | --- | --- | --- |
| Vercel AI SDK | - | ai | - |
| LangChain | "langchain" | langchain | langchain-core |
| LlamaIndex | "llamaindex" | llamaindex | llama-index |

Next Steps