Quick Start
Get started with Burnwise in under 5 minutes. Track your LLM costs with just a few lines of code.
Need an API key? Sign up for free to get your Burnwise API key.
Compatibility: Node.js 18+, Vercel Edge Runtime, Cloudflare Workers
1. Install the SDK
npm install @burnwise/sdk
2. Initialize Burnwise
import { burnwise } from "@burnwise/sdk";
// Initialize once at app startup
burnwise.init({
apiKey: process.env.BURNWISE_API_KEY!, // Get this from burnwise.io/onboarding
debug: true, // Optional: shows init confirmation
});
// Check if SDK is ready (useful for conditional environments)
if (burnwise.isInitialized()) {
// Safe to use burnwise.trace(), wrappers, etc.
}
3. Wrap your AI client
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
// Wrap your OpenAI client
const openai = burnwise.openai.wrap(new OpenAI(), {
feature: "chat-support",
});
// Use normally - costs are tracked automatically!
const response = await openai.chat.completions.create({
model: "gpt-5.2",
messages: [{ role: "user", content: "Hello!" }],
});
That's it! Your LLM costs are now being tracked. Check your dashboard to see real-time cost analytics.
Supported Providers
Burnwise supports all major LLM providers out of the box.
OpenAI
Full support for GPT-5.2, GPT-5.2-mini, GPT-4.1, o3, o4-mini, and all OpenAI models.
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
// Wrap your OpenAI client
const openai = burnwise.openai.wrap(new OpenAI(), {
feature: "chat-support",
});
// Use normally - costs are tracked automatically!
const response = await openai.chat.completions.create({
model: "gpt-5.2",
messages: [{ role: "user", content: "Hello!" }],
});
Anthropic
Track costs for Claude Opus 4.5, Claude Sonnet 4.5, and Claude Haiku 4.5.
import Anthropic from "@anthropic-ai/sdk";
import { burnwise } from "@burnwise/sdk";
const anthropic = burnwise.anthropic.wrap(new Anthropic(), {
feature: "analysis",
});
const message = await anthropic.messages.create({
model: "claude-4.5-sonnet",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello!" }],
});
Google Gemini
Support for Gemini 3.0 Pro, Gemini 3.0 Flash, and all Google AI models.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { burnwise } from "@burnwise/sdk";
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = burnwise.google.wrapModel(
genAI.getGenerativeModel({ model: "gemini-3.0-flash" }),
{ feature: "summarization" }
);
const result = await model.generateContent("Hello!");
Mistral AI
Track Mistral Large 3, Mistral Medium 3, Mistral Small 3, and Devstral models.
import { Mistral } from "@mistralai/mistralai";
import { burnwise } from "@burnwise/sdk";
const mistral = burnwise.mistral.wrap(new Mistral(), {
feature: "code-completion",
});
const response = await mistral.chat.complete({
model: "mistral-large-3",
messages: [{ role: "user", content: "Hello!" }],
});
xAI (Grok)
Support for Grok 4.1, Grok 4, and all Grok models via the OpenAI-compatible API.
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
const xai = burnwise.xai.wrap(
new OpenAI({
baseURL: "https://api.x.ai/v1",
apiKey: process.env.XAI_API_KEY!,
}),
{ feature: "reasoning" }
);
const response = await xai.chat.completions.create({
model: "grok-4.1",
messages: [{ role: "user", content: "Hello!" }],
});
DeepSeek
Support for DeepSeek V3.2, DeepSeek R1, and all DeepSeek models.
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
const deepseek = burnwise.deepseek.wrap(
new OpenAI({
baseURL: "https://api.deepseek.com/v1",
apiKey: process.env.DEEPSEEK_API_KEY!,
}),
{ feature: "coding" }
);
const response = await deepseek.chat.completions.create({
model: "deepseek-v3.2",
messages: [{ role: "user", content: "Hello!" }],
});
Perplexity
Support for Sonar Pro, Sonar Reasoning, and all Perplexity models.
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
const perplexity = burnwise.perplexity.wrap(
new OpenAI({
baseURL: "https://api.perplexity.ai",
apiKey: process.env.PERPLEXITY_API_KEY!,
}),
{ feature: "research" }
);
const response = await perplexity.chat.completions.create({
model: "sonar-pro",
messages: [{ role: "user", content: "What is Burnwise?" }],
});
Streaming Support
All provider wrappers support streaming responses with automatic token tracking. The SDK intercepts the stream, captures usage data from stream events, and tracks costs when the stream completes.
How It Works
- OpenAI-compatible APIs (OpenAI, xAI, DeepSeek, Perplexity): The SDK automatically adds stream_options.include_usage = true (see the sketch after this list)
- Anthropic: Usage is extracted from message_start and message_delta events
- Google Gemini: Both generateContent() and generateContentStream() are wrapped
- Mistral: The chat.stream() method is wrapped to capture usage
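To make the OpenAI-compatible case concrete, the sketch below shows roughly what the wrapper takes care of, using a plain unwrapped OpenAI client: it requests usage data on the stream and reads it from the final chunk. This is a simplified illustration of the idea, not the SDK's actual implementation.
import OpenAI from "openai";
const openai = new OpenAI(); // plain client, no Burnwise wrapper
const stream = await openai.chat.completions.create({
  model: "gpt-5.2",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
  // This is what the wrapper adds for you on OpenAI-compatible APIs
  stream_options: { include_usage: true },
});
let usage: OpenAI.CompletionUsage | undefined;
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
  // With include_usage, the final chunk carries the token totals
  if (chunk.usage) usage = chunk.usage;
}
// usage?.prompt_tokens and usage?.completion_tokens are what the SDK
// turns into a cost event once the stream completes
console.log(usage);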
OpenAI Streaming
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
const openai = burnwise.openai.wrap(new OpenAI(), {
feature: "chat-support",
});
// Streaming - usage is tracked automatically when stream completes
const stream = await openai.chat.completions.create({
model: "gpt-5.2",
messages: [{ role: "user", content: "Tell me a story" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
// Usage tracked automatically when loop completes
Anthropic Streaming
import Anthropic from "@anthropic-ai/sdk";
import { burnwise } from "@burnwise/sdk";
const anthropic = burnwise.anthropic.wrap(new Anthropic(), {
feature: "analysis",
});
// Streaming - usage is tracked automatically
const stream = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250929",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a poem" }],
stream: true,
});
for await (const event of stream) {
if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
process.stdout.write(event.delta.text);
}
}
// Usage tracked automatically after stream completes
Google Gemini Streaming
import { GoogleGenerativeAI } from "@google/generative-ai";
import { burnwise } from "@burnwise/sdk";
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = burnwise.google.wrapModel(
genAI.getGenerativeModel({ model: "gemini-3.0-flash" }),
{ feature: "summarization" }
);
// Streaming
const result = await model.generateContentStream("Explain quantum computing");
for await (const chunk of result.stream) {
process.stdout.write(chunk.text());
}
// Usage tracked automatically
Feature Tracking
Understand where your AI costs are going by tagging calls with features.
What are Features?
Features are labels you attach to your AI calls to track costs by use case. For example, you might have features like "chat-support", "document-analysis", or "auto-summary".
How to Use Features
// Track different features separately
const chatClient = burnwise.openai.wrap(new OpenAI(), {
feature: "chat-support",
});
const analysisClient = burnwise.openai.wrap(new OpenAI(), {
feature: "document-analysis",
});
const summaryClient = burnwise.openai.wrap(new OpenAI(), {
feature: "auto-summary",
});
// Now you can see costs broken down by feature in the dashboard
Pro Tip
Use consistent feature names across your codebase. This makes it easier to track costs and identify optimization opportunities in the dashboard.
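One way to keep names consistent is to define them once in a shared module and import them wherever you wrap a client. The file name and constant below are hypothetical, not part of the SDK.
// features.ts (hypothetical shared module)
export const FEATURES = {
  chatSupport: "chat-support",
  documentAnalysis: "document-analysis",
  autoSummary: "auto-summary",
} as const;

// elsewhere in your app
import OpenAI from "openai";
import { burnwise } from "@burnwise/sdk";
import { FEATURES } from "./features";

const chatClient = burnwise.openai.wrap(new OpenAI(), {
  feature: FEATURES.chatSupport,
});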
Hierarchical Agent Tracing
Track costs for multi-agent systems with parent-child relationships. See both individual sub-agent costs AND total orchestration costs.
Perfect for agent orchestration: When your main agent calls 10+ sub-agents, you can see the cost breakdown for each sub-agent and the total cost for the entire execution tree.
Basic Usage
Wrap your agent functions with burnwise.trace() to create hierarchical spans. Context propagates automatically via AsyncLocalStorage.
import { burnwise } from "@burnwise/sdk";
// Wrap agent execution to create a trace span
await burnwise.trace("idea-analysis", async () => {
// All LLM calls inside are automatically tagged with:
// - traceId: unique ID for the entire execution tree
// - spanId: unique ID for this specific span
// - spanName: "idea-analysis"
// - traceDepth: 0 (root level)
const market = await burnwise.trace("market-scan", async () => {
// Nested span - same traceId, own spanId, parentSpanId points to parent
return await marketAgent.run(idea);
});
const competitors = await burnwise.trace("competitor-analysis", async () => {
return await competitorAgent.run(idea);
});
return { market, competitors };
});
How It Works
1. Automatic Context Propagation
When you call burnwise.trace(), it creates a trace context using Node.js AsyncLocalStorage. All LLM calls made within that function automatically inherit the trace context.
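If you are unfamiliar with AsyncLocalStorage, the standalone sketch below shows the underlying Node.js primitive: a store entered at the top of an async call is visible from anywhere inside it, even across awaits. This only illustrates the mechanism; it is not the SDK's internal code, and the context shape is hypothetical.
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical context shape, for illustration only
const storage = new AsyncLocalStorage<{ traceId: string; spanId: string }>();

async function inner() {
  // Reads the context set by the caller, without any parameters being passed
  console.log(storage.getStore()); // { traceId: "t-1", spanId: "s-1" }
}

async function outer() {
  await storage.run({ traceId: "t-1", spanId: "s-1" }, async () => {
    await inner(); // context propagates across the await
  });
}

await outer();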
2. Tree Structure
Each span has the following fields:
- traceId: UUID shared by all spans in the same execution tree
- spanId: UUID unique to this specific span
- parentSpanId: UUID of the parent span (undefined for root)
- spanName: Human-readable name (e.g., "market-scan")
- traceDepth: Level in the tree (0 = root, max 3)
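Put together, these fields describe a span record shaped roughly like the following. This is an illustrative TypeScript type, not a type exported by the SDK.
// Illustrative only - not an exported SDK type
interface TraceSpan {
  traceId: string;        // shared by every span in the execution tree
  spanId: string;         // unique to this span
  parentSpanId?: string;  // undefined for the root span
  spanName: string;       // e.g. "market-scan"
  traceDepth: number;     // 0 = root, maximum 3
}

// Example: the "market-scan" span nested under the "idea-analysis" root
const example: TraceSpan = {
  traceId: "7f3c9b2e-0000-0000-0000-000000000001",
  spanId: "a1d4e8c0-0000-0000-0000-000000000002",
  parentSpanId: "0b9f2a77-0000-0000-0000-000000000003",
  spanName: "market-scan",
  traceDepth: 1,
};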
3. Depth Limit
Maximum 3 levels of nesting. If you exceed this, a warning is logged and the function runs without creating a new span.
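As a sketch of the limit, assuming the root is depth 0 and depth 3 is the deepest allowed, a further level of nesting still executes but no longer gets its own span:
await burnwise.trace("level-0", async () => {       // traceDepth 0 (root)
  await burnwise.trace("level-1", async () => {     // traceDepth 1
    await burnwise.trace("level-2", async () => {   // traceDepth 2
      await burnwise.trace("level-3", async () => { // traceDepth 3 (deepest span)
        await burnwise.trace("level-4", async () => {
          // Too deep: a warning is logged and this callback still runs,
          // but no new span is created - LLM calls here are presumably
          // attributed to the enclosing "level-3" span
        });
      });
    });
  });
});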
Full Example: Multi-Agent Analysis
A complete example showing how to track an "idea-analysis" agent that orchestrates multiple sub-agents.
import { burnwise } from "@burnwise/sdk";
import Anthropic from "@anthropic-ai/sdk";
burnwise.init({ apiKey: process.env.BURNWISE_API_KEY! });
const anthropic = burnwise.anthropic.wrap(new Anthropic(), {
feature: "idea-analysis",
});
async function analyzeIdea(idea: string) {
return burnwise.trace("idea-analysis", async () => {
// Market analysis sub-agent
const market = await burnwise.trace("market-scan", async () => {
const response = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250929",
max_tokens: 2000,
messages: [{ role: "user", content: `Analyze market for: ${idea}` }],
});
return response.content[0].text;
});
// Competitor analysis sub-agent
const competitors = await burnwise.trace("competitor-analysis", async () => {
const response = await anthropic.messages.create({
model: "claude-sonnet-4-5-20250929",
max_tokens: 2000,
messages: [{ role: "user", content: `Find competitors for: ${idea}` }],
});
return response.content[0].text;
});
// Final synthesis with more powerful model
const synthesis = await burnwise.trace("synthesis", async () => {
const response = await anthropic.messages.create({
model: "claude-opus-4-5-20251101",
max_tokens: 4000,
messages: [{
role: "user",
content: `Synthesize:\nMarket: ${market}\nCompetitors: ${competitors}`,
}],
});
return response.content[0].text;
});
return { market, competitors, synthesis };
});
}
// All 4 LLM calls tracked with same traceId
const analysis = await analyzeIdea("AI-powered recipe generator");
Tracing API Reference
// Async trace (most common)
const result = await burnwise.trace("span-name", async () => {
return await doSomething();
});
// Sync trace for synchronous functions
const result = burnwise.traceSync("span-name", () => {
return doSomethingSync();
});
// Trace with detailed result info
const { result, spanId, traceId, durationMs } = await burnwise.traceWithResult(
"span-name",
async () => await doSomething()
);
// Check if currently inside a trace
if (burnwise.isInTrace()) {
console.log("Currently in a trace");
}
// Get current trace context
const context = burnwise.getTraceContext();
if (context) {
console.log(`Trace: ${context.traceId}, Span: ${context.spanId}`);
}
Dashboard Features
- View all spans belonging to a trace grouped together
- See the total cost of an agent orchestration (sum of all spans)
- See individual sub-agent costs
- Visualize the call tree timeline
API Reference
Complete reference for all SDK methods and configuration options.
burnwise.init(config)
Initialize the Burnwise SDK with your configuration.
burnwise.init({
// Required: Your Burnwise API key
apiKey: "bw_live_xxx",
// Optional: Base URL (for self-hosted)
baseUrl: "https://api.burnwise.io",
// Optional: Enable debug logging (shows init confirmation)
debug: true,
// Optional: Batch events (default: true)
batchEvents: true,
// Optional: Batch flush interval in ms (default: 5000)
batchFlushInterval: 5000,
// Optional: Maximum batch size (default: 100)
maxBatchSize: 100,
// Optional: Environment
environment: "production", // "production" | "staging" | "development"
});
// With debug: true, you'll see:
// [Burnwise] ✅ Initialized (production)
// [Burnwise] API Key: bw_live_xx...
// [Burnwise] Endpoint: https://api.burnwise.io
// [Burnwise] Batching: enabled (5000ms)
// Check if SDK is initialized
if (burnwise.isInitialized()) {
// SDK is ready to use
}
| Option | Type | Description |
|---|---|---|
| apiKey | string | Your Burnwise API key (required) |
| baseUrl | string | Custom API endpoint (default: https://api.burnwise.io) |
| debug | boolean | Enable debug logging (default: false) |
| batchEvents | boolean | Batch events before sending (default: true) |
| batchFlushInterval | number | Flush interval in ms (default: 5000) |
| maxBatchSize | number | Maximum batch size (default: 100) |
| environment | string | Environment: "production", "staging", or "development" |
burnwise.isInitialized()
Check if the SDK has been initialized. Useful for conditional environments where the SDK might not be initialized.
if (burnwise.isInitialized()) {
// SDK is ready - safe to use burnwise.trace(), wrappers, etc.
}
track(event)
Manually track an LLM event. Useful for custom integrations or providers not directly supported.
import { track } from "@burnwise/sdk";
// For advanced use cases, track events manually
await track({
provider: "openai",
model: "gpt-5.2",
feature: "custom-feature",
promptTokens: 100,
completionTokens: 50,
latencyMs: 1200,
costUsd: 0.002,
status: "success",
});
Privacy
Burnwise is designed with privacy as a core principle.
What We Track
- Token counts (input and output)
- Model name and provider
- Cost (calculated from tokens)
- Latency
- Feature tags you define
What We NEVER Track
- Prompt content
- Completion content
- User data within prompts
- System prompts
- Function/tool definitions
Compliance
- GDPR compliant
- SOC 2 Type II (in progress)
- All data encrypted in transit (TLS 1.3)
- All data encrypted at rest (AES-256)
- EU data residency available