Work in progress: Telemetry documentation is still being updated. Integration steps and APIs may be incomplete or out of date. Verify against your SDK versions and check back for revisions.
Overview
This guide shows you how to integrate Latitude Telemetry into an application that uses Google Cloud AI Platform.
You’ll keep calling AI Platform exactly as you do today; Telemetry simply observes and enriches those calls.
Requirements
- A Latitude account and API key
- A Latitude project slug
- A project that uses the Google Cloud AI Platform SDK
Steps
Install
```bash
# Node.js
npm install @latitude-data/telemetry

# Python
pip install latitude-telemetry
```
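The examples below also assume the official Google Cloud AI Platform SDK is already installed in your project: `@google-cloud/aiplatform` on npm for Node.js, or `google-cloud-aiplatform` on PyPI for Python.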
Initialize and use
Node.js:

```typescript
import { initLatitude, capture } from "@latitude-data/telemetry"
import { PredictionServiceClient, helpers } from "@google-cloud/aiplatform"

const latitude = initLatitude({
  apiKey: process.env.LATITUDE_API_KEY!,
  projectSlug: process.env.LATITUDE_PROJECT_SLUG!,
  instrumentations: ["aiplatform"],
})
await latitude.ready

// Publisher models are served regionally, so point the client at the
// endpoint matching the region in the model path below.
const client = new PredictionServiceClient({
  apiEndpoint: "us-central1-aiplatform.googleapis.com",
})

await capture("generate-prediction", async () => {
  const [response] = await client.predict({
    endpoint: `projects/${process.env.GCP_PROJECT_ID}/locations/us-central1/publishers/google/models/text-bison`,
    // predict() takes protobuf Values; helpers.toValue converts plain objects.
    instances: [helpers.toValue({ content: "Hello" })],
    parameters: helpers.toValue({ temperature: 0.2, maxOutputTokens: 256 }),
  })
  return response.predictions
})

// Flush any buffered traces before the process exits.
await latitude.shutdown()
```
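Errors are worth handling explicitly. The sketch below continues from the setup above and assumes `capture` re-throws whatever the wrapped callback throws, so your own error handling still runs; verify that behavior against your SDK version. The failed execution should still appear as an errored trace (see Seeing Your Traces below).

```typescript
// Assumption: capture records the failure on the trace and then re-throws,
// so ordinary try/catch handling keeps working.
try {
  await capture("generate-prediction", async () => {
    // Simulate an AI Platform failure (e.g. quota exceeded).
    throw new Error("quota exceeded")
  })
} catch (err) {
  console.error("prediction failed:", err)
}
```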
Python:

```python
from latitude_telemetry import init_latitude, capture
from google.cloud import aiplatform
from vertexai.language_models import TextGenerationModel

latitude = init_latitude(
    api_key="your-api-key",
    project_slug="your-project-slug",
    instrumentations=["aiplatform"],
)

aiplatform.init(project="your-gcp-project", location="us-central1")


def generate_prediction():
    # TextGenerationModel ships with the google-cloud-aiplatform package,
    # under the vertexai namespace rather than google.cloud.aiplatform.
    model = TextGenerationModel.from_pretrained("text-bison")
    response = model.predict("Hello", temperature=0.2, max_output_tokens=256)
    return response.text


capture("generate-prediction", generate_prediction)

# Flush any buffered traces before the process exits.
latitude.shutdown()
```
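If `capture` follows the usual wrapper pattern and returns the wrapped function's result (an assumption to verify against your SDK version), you can use the prediction directly, e.g. `text = capture("generate-prediction", generate_prediction)`.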
Seeing Your Traces
Once connected, traces appear automatically in Latitude:
- Open your project in the Latitude dashboard
- Each captured execution shows its input/output messages, model, token usage, latency, and errors
That’s It
No changes to your AI Platform calls themselves: initialize Latitude, wrap the calls with `capture`, and your LLM traffic is traced.