The wide event already contains the full ai metadata, but you often want the same data inside your handler — to persist it, surface it to end users, bill against it, or stream incremental progress to the client.
AILogger exposes three methods for that, with no need to touch internal state.
getMetadata() — final snapshot
Returns a structured AIMetadata object that mirrors the ai field on the wide event. Safe to call at any point, including after the run completes or inside the AI SDK's onFinish:
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'
import { generateText } from 'ai'
export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log, {
    cost: { 'claude-sonnet-4.6': { input: 3, output: 15 } },
  })

  await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    prompt: 'Summarize this document',
  })

  const metadata = ai.getMetadata()

  await db.aiRuns.insert({
    userId: event.context.userId,
    model: metadata.model,
    inputTokens: metadata.inputTokens,
    outputTokens: metadata.outputTokens,
    estimatedCost: metadata.estimatedCost,
    finishReason: metadata.finishReason,
    responseId: metadata.responseId,
  })

  return { ok: true }
})
The snapshot is a fresh copy: mutating it never affects the underlying state or subsequent calls.
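That copy-on-read guarantee can be illustrated with a minimal self-contained sketch; the state object and getSnapshot helper below are stand-ins, not evlog internals:

```typescript
// Illustrative only: internalState and getSnapshot() are stand-ins,
// not evlog's actual implementation.
const internalState = { calls: 1, inputTokens: 120, outputTokens: 80 }

function getSnapshot() {
  return { ...internalState } // fresh shallow copy on every call
}

const snap = getSnapshot()
snap.calls = 999 // mutate the snapshot freely...

console.log(internalState.calls) // ...the internal state still reports 1
```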
getEstimatedCost() — quick cost check
Convenience for getMetadata().estimatedCost. Returns the cost in dollars, or undefined if no cost map was provided or the model is not in the map.
const ai = createAILogger(log, {
  cost: { 'claude-sonnet-4.6': { input: 3, output: 15 } },
})

await generateText({ model: ai.wrap('anthropic/claude-sonnet-4.6'), prompt })

const cost = ai.getEstimatedCost()
if (cost !== undefined) {
  console.log(`This call cost $${cost.toFixed(4)}`)
}
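For context, the arithmetic behind the estimate can be sketched in a few lines. An assumption here: the cost map rates are dollars per million tokens (consistent with the 3 and 15 figures above). The estimateCost helper below is illustrative, not part of evlog's API:

```typescript
type Rate = { input: number; output: number }

// Hypothetical helper mirroring the assumed formula:
// dollars = tokens * (dollars per million tokens) / 1e6
function estimateCost(rate: Rate, inputTokens: number, outputTokens: number): number {
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000
}

// 2,000 input + 500 output tokens at { input: 3, output: 15 }:
// (2000 * 3 + 500 * 15) / 1e6 = 0.0135 dollars
const example = estimateCost({ input: 3, output: 15 }, 2_000, 500)
console.log(example) // 0.0135
```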
onUpdate(callback) — incremental updates
Subscribe to metadata updates. The callback fires every time the underlying state flushes:
- Once per step in multi-step agent runs
- Once per captureEmbed call
- On model errors
- On createEvlogIntegration's onFinish
Each invocation receives a fresh snapshot. Returns an unsubscribe function. Subscriber errors are isolated and never break the AI flow.
import { ToolLoopAgent, createAgentUIStreamResponse, stepCountIs } from 'ai'
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'
export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const { messages } = await readBody(event)
  const ai = createAILogger(log)

  ai.onUpdate((metadata) => {
    pushToClient(event, {
      type: 'ai-progress',
      step: metadata.steps,
      tokens: metadata.totalTokens,
      cost: metadata.estimatedCost,
    })
  })

  const agent = new ToolLoopAgent({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    tools: { searchWeb, queryDatabase },
    stopWhen: stepCountIs(5),
  })

  return createAgentUIStreamResponse({ agent, uiMessages: messages })
})
For one-off cleanup:
const off = ai.onUpdate((metadata) => { /* ... */ })
// later
off()
AIMetadata shape
AIMetadata is a public type alias for the snapshot returned by getMetadata() and passed to onUpdate listeners. It has the same shape as the ai field on the wide event.
import type { AIMetadata, AIMetadataListener } from 'evlog/ai'
function handleProgress(metadata: AIMetadata) {
  console.log(`${metadata.calls} calls, $${metadata.estimatedCost ?? 0}`)
}
const listener: AIMetadataListener = handleProgress
ai.onUpdate(listener)
Captured Data Reference
Every field that may show up under ai.*:
| Wide event field | Source | Description |
|---|---|---|
| ai.calls | Call count | Number of AI calls in this request |
| ai.model | response.modelId | Model that served the response |
| ai.models | All model IDs | Array of all models used (only when > 1) |
| ai.provider | model.provider | Provider (anthropic, openai, google, etc.) |
| ai.inputTokens | usage.inputTokens.total | Total input tokens across all calls |
| ai.outputTokens | usage.outputTokens.total | Total output tokens across all calls |
| ai.totalTokens | Computed | inputTokens + outputTokens |
| ai.cacheReadTokens | usage.inputTokens.cacheRead | Tokens served from prompt cache |
| ai.cacheWriteTokens | usage.inputTokens.cacheWrite | Tokens written to prompt cache |
| ai.reasoningTokens | usage.outputTokens.reasoning | Reasoning tokens (extended thinking) |
| ai.finishReason | finishReason.unified | Why generation ended (stop, tool-calls, etc.) |
| ai.toolCalls | Content / stream chunks | string[] of tool names by default, or Array<{ name, input }> when toolInputs is enabled |
| ai.responseId | response.id | Provider-assigned response ID (e.g. Anthropic's msg_...) |
| ai.steps | Step count | Number of LLM calls (only when > 1) |
| ai.stepsUsage | Per-step accumulation | Per-step token and tool call breakdown (only when > 1 step) |
| ai.msToFirstChunk | Stream timing | Time to first text chunk (streaming only) |
| ai.msToFinish | Stream timing | Total stream duration (streaming only) |
| ai.tokensPerSecond | Computed | Output tokens per second (streaming only) |
| ai.error | Error capture | Error message if a model call fails |
| ai.tools | TelemetryIntegration | Per-tool { name, durationMs, success, error? } (requires createEvlogIntegration) |
| ai.totalDurationMs | TelemetryIntegration | Total generation wall time (requires createEvlogIntegration) |
| ai.embedding | captureEmbed | { model?, tokens, dimensions?, count? } — embedding metadata |
| ai.estimatedCost | Computed | Estimated cost in dollars (requires cost option) |
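To make the table concrete, here is what a single-call streamed run might put under ai.*. Every value below is hypothetical, chosen only so the computed fields stay self-consistent with the table's definitions:

```typescript
// Hypothetical example of an ai.* block on a wide event.
// Values are invented; computed fields follow the table's formulas.
const aiBlock = {
  calls: 1,
  model: 'claude-sonnet-4.6',
  provider: 'anthropic',
  inputTokens: 1200,
  outputTokens: 300,
  totalTokens: 1500,      // inputTokens + outputTokens
  finishReason: 'stop',
  msToFirstChunk: 180,    // streaming only
  msToFinish: 3000,       // streaming only
  tokensPerSecond: 100,   // 300 output tokens over 3 seconds
  estimatedCost: 0.0081,  // (1200 * 3 + 300 * 15) / 1e6, at the rates used earlier
}
```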