Building blocks

Custom Adapters

Build your own adapter to send logs to any destination using defineHttpDrain — config resolution, retries, timeouts, and error handling are handled for you.

evlog/toolkit ships defineHttpDrain — the same factory every built-in adapter uses. You provide two pure functions (resolve() for config, encode() for the payload) and the toolkit handles batching, retries, timeouts, and error isolation.

A drain at its lowest level is still just a function that receives a DrainContext and sends data somewhere — but in practice, you should reach for defineHttpDrain first.
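To make that concrete, here is a minimal sketch of a bare drain function — no toolkit, no retries, no error isolation. The endpoint URL is a placeholder, and the context shape is inlined so the sketch stands alone (in real code you would import `DrainContext` from `evlog`):

```typescript
// Inlined stand-in for DrainContext so this sketch is self-contained.
interface MinimalContext {
  event: Record<string, unknown>
}

// Pure encode step, separated so it is easy to test.
export function encodeBody(ctx: MinimalContext): string {
  return JSON.stringify(ctx.event)
}

// The drain itself: just an async function that ships the event somewhere.
// 'https://logs.example.com/ingest' is an illustrative endpoint.
export async function bareDrain(ctx: MinimalContext): Promise<void> {
  await fetch('https://logs.example.com/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: encodeBody(ctx),
  })
}
```

Everything `defineHttpDrain` adds on top of this — batching, retries, timeouts, error isolation — is exactly what you would otherwise write by hand.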

Build a custom drain adapter

Recipe — defineHttpDrain in under 40 lines

The recipe every built-in adapter follows. Replace myservice with your service name.

lib/my-drain.ts
import {
  defineHttpDrain,
  resolveAdapterConfig,
  type ConfigField,
} from 'evlog/toolkit'

interface MyServiceConfig {
  apiKey: string
  endpoint?: string
  timeout?: number
}

const FIELDS: ConfigField<MyServiceConfig>[] = [
  { key: 'apiKey', env: ['MYSERVICE_API_KEY'] },
  { key: 'endpoint', env: ['MYSERVICE_ENDPOINT'] },
  { key: 'timeout' },
]

export function createMyServiceDrain(overrides?: Partial<MyServiceConfig>) {
  return defineHttpDrain<MyServiceConfig>({
    name: 'myservice',
    resolve: async () => {
      const cfg = await resolveAdapterConfig<MyServiceConfig>('myservice', FIELDS, overrides)
      if (!cfg.apiKey) {
        console.error('[evlog/myservice] Missing apiKey')
        return null
      }
      return cfg as MyServiceConfig
    },
    encode: (events, cfg) => ({
      url: `${cfg.endpoint ?? 'https://api.myservice.com'}/v1/ingest`,
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${cfg.apiKey}`,
      },
      body: JSON.stringify(events),
    }),
  })
}

That's it. defineHttpDrain handles retries (default 2), timeouts (default 5000ms), and error isolation — your app pipeline keeps running even if your destination is down.

Then wire it to your framework:

server/plugins/evlog-drain.ts
import { createMyServiceDrain } from '../../lib/my-drain'

const drain = createMyServiceDrain()

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', drain)
})

DrainContext Reference

types.ts
interface DrainContext {
  /** The complete wide event with all accumulated context */
  event: WideEvent

  /** Request metadata */
  request?: {
    method: string
    path: string
    requestId: string
  }

  /** Safe HTTP headers (sensitive headers filtered) */
  headers?: Record<string, string>
}

interface WideEvent {
  timestamp: string
  level: 'debug' | 'info' | 'warn' | 'error'
  service: string
  environment?: string
  version?: string
  region?: string
  commitHash?: string
  requestId?: string
  // ... plus all fields added via log.set()
  [key: string]: unknown
}

Standardized config priority

resolveAdapterConfig(namespace, fields, overrides) walks the standard chain so users get the same configuration UX as built-in adapters:

  1. Explicit overrides passed to your factory
  2. runtimeConfig.evlog.<namespace> (Nuxt/Nitro)
  3. runtimeConfig.<namespace> (legacy Nuxt/Nitro)
  4. NUXT_<NS>_<FIELD> env vars
  5. <NS>_<FIELD> env vars
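For example, assuming the conventional camelCase → SNAKE_CASE mapping, a field like apiKey in the myservice namespace resolves from the env vars sketched below (illustrative naming only — resolveAdapterConfig owns the real logic):

```typescript
// Sketch of the env-var naming in steps 4–5 above (assumed mapping).
function envNames(namespace: string, field: string): string[] {
  const snake = field.replace(/([A-Z])/g, '_$1').toUpperCase()
  const ns = namespace.toUpperCase()
  return [`NUXT_${ns}_${snake}`, `${ns}_${snake}`]
}

// envNames('myservice', 'apiKey')
// → ['NUXT_MYSERVICE_API_KEY', 'MYSERVICE_API_KEY']
```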

Field names should follow the project conventions: apiKey, endpoint, serviceName, timeout. If you're renaming an existing field (e.g. token → apiKey), keep both as ConfigField entries for one major version — see axiom.ts and better-stack.ts for the deprecation pattern.
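One hypothetical way to fold the deprecated alias into the new field after resolution — the field names and warning text here are illustrative; check axiom.ts for the actual pattern:

```typescript
// Hypothetical normalization for a renamed field (token → apiKey).
// Keep both ConfigField entries so old env vars still resolve, then
// prefer the new name and warn when only the legacy one is set.
interface LegacyAwareConfig {
  apiKey?: string
  /** @deprecated use apiKey */
  token?: string
}

export function normalizeLegacy(cfg: LegacyAwareConfig): LegacyAwareConfig {
  if (!cfg.apiKey && cfg.token) {
    console.warn('[evlog/myservice] `token` is deprecated, use `apiKey`')
    return { ...cfg, apiKey: cfg.token }
  }
  return cfg
}
```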

Filtering and transforming events

encode() receives the full batch of WideEvent[] plus the resolved config. Filter or transform inline:

encode: (events, cfg) => {
  const filtered = events.filter(e => e.level === 'error' && e.path !== '/health')
  if (filtered.length === 0) return null

  const payload = filtered.map(e => ({
    ts: new Date(e.timestamp).getTime(),
    severity: e.level.toUpperCase(),
    attributes: { method: e.method, path: e.path, status: e.status, duration: e.duration },
  }))

  return {
    url: `${cfg.endpoint}/v1/push`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }
}

Returning null from encode() is a clean opt-out — the drain stays a no-op for that batch.

When you can't use defineHttpDrain

If your destination requires a non-HTTP transport (gRPC, websocket, vendor SDK), drop one level lower with defineDrain:

import { defineDrain, type DrainContext } from 'evlog/toolkit'

export const createCustomTransportDrain = () =>
  defineDrain<{ apiKey: string }>({
    name: 'custom',
    resolve: async () => ({ apiKey: process.env.MY_KEY! }),
    send: async (events, cfg) => {
      await myVendorSdk.publish(events, { token: cfg.apiKey })
    },
  })

You still get config resolution, error isolation, and a consistent shape — you just own the wire transport.

Batching

For high-throughput scenarios, use the Drain Pipeline to batch events, retry on failure, and handle buffer overflow automatically:

lib/my-drain.ts
import type { DrainContext } from 'evlog'
import { createDrainPipeline } from 'evlog/pipeline'

const pipeline = createDrainPipeline<DrainContext>({
  batch: { size: 100, intervalMs: 5000 },
})

const drain = pipeline(async (batch) => {
  await fetch('https://api.example.com/logs/batch', {
    method: 'POST',
    body: JSON.stringify(batch.map(ctx => ctx.event)),
  })
})

See the Pipeline documentation for the full options reference, retry strategies, and buffer overflow handling.

Error Handling — already done for you

defineHttpDrain enforces every best practice automatically:

  1. Never throws — failures are caught and logged with the [evlog/<name>] prefix.
  2. Retries — defaults to 2 attempts on transient errors (configurable via retries).
  3. Timeouts — defaults to 5000ms (configurable via timeout).
  4. Graceful degradation — resolve() returning null makes the drain a no-op.

If you fall back to defineDrain for non-HTTP transports, follow the same rules manually — wrap the transport in try/catch, log with console.error('[evlog/<name>] …'), and never re-throw.
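Those manual rules can be sketched as a small wrapper — option names and defaults here are assumed to mirror defineHttpDrain's, and `[evlog/custom]` is a placeholder prefix:

```typescript
// Manual equivalent of defineHttpDrain's safety rules for a custom
// transport (sketch): bounded attempts, a timeout race, catch-and-log,
// and never re-throw to the caller.
export async function safeSend(
  send: () => Promise<void>,
  { retries = 2, timeoutMs = 5000 } = {},
): Promise<boolean> {
  // `retries` total attempts, matching the default of 2 described above.
  for (let attempt = 1; attempt <= retries; attempt++) {
    let timer: ReturnType<typeof setTimeout> | undefined
    try {
      await Promise.race([
        send(),
        new Promise<never>((_, reject) => {
          timer = setTimeout(() => reject(new Error('timeout')), timeoutMs)
        }),
      ])
      return true
    } catch (err) {
      // Log with the adapter prefix; swallow so the app pipeline keeps running.
      console.error(`[evlog/custom] attempt ${attempt} failed:`, err)
    } finally {
      clearTimeout(timer)
    }
  }
  return false
}
```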

Next Steps