Work in progress: Telemetry documentation is still being updated. Integration steps and APIs may be incomplete or out of date. Verify against your SDK versions and check back for revisions.

Overview

This guide shows you how to integrate Latitude Telemetry into an application that uses Azure OpenAI. The Azure OpenAI client ships in the same openai package as the standard client, so the "openai" instrumentation handles it automatically.
You’ll keep calling Azure OpenAI exactly as you do today. Telemetry simply observes and enriches those calls.

Requirements

  • A Latitude account and API key
  • A Latitude project slug
  • A project that uses the Azure OpenAI SDK (via the openai package)
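The examples in this guide read credentials from environment variables. A minimal setup sketch follows; the values are hypothetical placeholders, so substitute your own keys, slug, and endpoint:

```shell
# Placeholder values for illustration only; replace with your own credentials
export LATITUDE_API_KEY="lat_xxxxxxxx"
export LATITUDE_PROJECT_SLUG="my-project"
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="xxxxxxxx"
```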

Steps

1. Install

npm install @latitude-data/telemetry
2. Initialize and use

Azure OpenAI uses the "openai" instrumentation: the same one used for standard OpenAI.
import { initLatitude, capture } from "@latitude-data/telemetry"
import { AzureOpenAI } from "openai"

const latitude = initLatitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: ["openai"],
})

await latitude.ready

const client = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-02-01",
})

await capture("generate-support-reply", async () => {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // for Azure, this is your deployment name
    messages: [{ role: "user", content: "Hello" }],
  })
  return completion.choices[0].message.content
})

await latitude.shutdown()

Streaming

When streaming, consume the stream inside capture() so the span covers the full operation:
await capture("stream-reply", async () => {
  const stream = await client.chat.completions.create({
    model: "gpt-4o", // for Azure, this is your deployment name
    messages: [{ role: "user", content: input }], // "input": the user's message, defined elsewhere
    stream: true,
  })

  // "res" is assumed to be a Node/Express HTTP response being streamed to
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content
    if (content) res.write(content)
  }
  res.end()
})
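Why consuming the stream inside the callback matters: capture() can only attribute work that happens before its callback resolves. The self-contained sketch below illustrates the timing with a stand-in captureLike wrapper and a fake token stream; it is not the real SDK, just a model of the behavior:

```typescript
// Sketch: a capture()-like wrapper only observes work done before its
// callback resolves. If the stream were consumed *outside* the callback,
// the recorded span would end before any tokens arrived.
async function captureLike<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    return await fn()
  } finally {
    // A real telemetry span would be ended and exported here.
    console.log(`${name} finished after ${Date.now() - start}ms`)
  }
}

// A simulated token stream standing in for the OpenAI streaming response.
async function* fakeStream() {
  for (const token of ["Hel", "lo"]) yield token
}

async function demo(): Promise<string> {
  // Consuming the stream inside the callback, so the span covers
  // the full generation rather than just the initial request.
  return captureLike("stream-reply", async () => {
    let text = ""
    for await (const token of fakeStream()) text += token
    return text
  })
}
```

Because the for-await loop runs inside the callback, the wrapper's duration includes every chunk, which is exactly what the real capture() needs to record complete spans.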

Seeing Your Traces

Once connected, traces appear automatically in Latitude:
  1. Open your project in the Latitude dashboard
  2. Each execution shows input/output messages, model, token usage, latency, and errors

That’s It

No changes to your Azure OpenAI calls are needed: initialize Latitude once, and your LLM calls are traced automatically.