Onboarding Walkthrough
Step 1 of 5

Where does the SDK fit?

Songlines Control sits alongside your existing AI calls — not in the critical path. You add a single trackAIRequest() call after every model response. That's the entire integration surface.

Your App (invoiceProcessor.ts)
  ├─> LLM Provider (Azure OpenAI / Bedrock)     (your existing call, unchanged)
  └─> Songlines SDK: trackAIRequest()           (← one line here)
        └─> Songlines Gateway (api.songlinesai.com)
              └─> Control Platform (Dashboard + Alerts)
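Staying out of the critical path also means a telemetry failure should never fail the business request. One way to enforce that is a small guard around the tracking call; `safeTrack` below is a hypothetical helper of this walkthrough, not an SDK API:

```typescript
// Hypothetical guard (not an SDK API): run a tracking call, log any
// failure, and never rethrow, so the request path is unaffected by
// telemetry outages.
async function safeTrack(track: () => Promise<void>): Promise<void> {
  try {
    await track();
  } catch (err) {
    console.warn("tracking failed (ignored):", err);
  }
}

// Example: the tracking call rejects, but safeTrack still resolves.
safeTrack(async () => {
  throw new Error("gateway unreachable");
}).then(() => console.log("request path unaffected"));
```

In real code the body of the guarded function would be the `trackAIRequest()` call shown below; whether to log or silently drop failed telemetry is a policy choice for your app.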

The SDK snippet — added once per LLM call

```typescript
import { SonglinesClient } from "@songlines/sdk";

const songlines = new SonglinesClient({
  apiKey: process.env.SONGLINES_API_KEY,
});

// Your existing LLM call, unchanged; capture a start time for latency
const start = Date.now();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
});

// ← Add this single line after every model response
await songlines.trackAIRequest({
  model:            "gpt-4o",
  provider:         "azure-openai",
  workflow:         "invoice-processor",
  promptTokens:     response.usage.prompt_tokens,
  completionTokens: response.usage.completion_tokens,
  latencyMs:        Date.now() - start,            // measured around the call above
  cost:             calculateCost(response.usage), // your own pricing helper
});
```

No prompt content is captured by default. Songlines stores only metadata — model name, token counts, latency, workflow ID, and cost. Prompt and completion text remain in your infrastructure unless you explicitly enable prompt logging.
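If you'd rather not repeat the timing and metadata bookkeeping at every call site, the pattern can be factored into a small wrapper. The sketch below is self-contained: it stubs both the tracker and the model call so it runs without network access or keys, and `withTracking` is a hypothetical helper of this walkthrough, not part of @songlines/sdk; the field names mirror the snippet above.

```typescript
// Minimal shapes mirroring the metadata captured by trackAIRequest().
interface Usage { prompt_tokens: number; completion_tokens: number; }
interface TrackedEvent {
  model: string;
  provider: string;
  workflow: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
}

// Stand-in for songlines.trackAIRequest (stubbed so the sketch is runnable).
type Tracker = (event: TrackedEvent) => Promise<void>;

// Hypothetical helper: run any LLM call, time it, and report metadata once.
async function withTracking<T extends { usage: Usage }>(
  track: Tracker,
  meta: { model: string; provider: string; workflow: string },
  call: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  const response = await call();           // your existing LLM call
  await track({
    ...meta,
    promptTokens: response.usage.prompt_tokens,
    completionTokens: response.usage.completion_tokens,
    latencyMs: Date.now() - start,
  });
  return response;                         // caller still gets the raw response
}

// Demo with a stubbed tracker and model call.
async function demo(): Promise<TrackedEvent[]> {
  const events: TrackedEvent[] = [];
  await withTracking(
    async (e) => { events.push(e); },      // stub tracker: record in memory
    { model: "gpt-4o", provider: "azure-openai", workflow: "invoice-processor" },
    async () => ({ usage: { prompt_tokens: 42, completion_tokens: 7 } }),
  );
  return events;
}

demo().then((events) => console.log(events[0].promptTokens)); // prints 42
```

In real code the tracker would be `(e) => songlines.trackAIRequest(e)` and the third argument would be your actual `openai.chat.completions.create()` call, keeping each call site at a single wrapped line.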