Custom Adapters
evlog/toolkit ships defineHttpDrain — the same factory every built-in adapter uses. You provide two pure functions (resolve() for config, encode() for the payload) and the toolkit handles batching, retries, timeouts, and error isolation.
A drain at its lowest level is still just a function that receives a DrainContext and sends data somewhere — but in practice, you should reach for defineHttpDrain first.
Build a custom drain adapter
Recipe — defineHttpDrain in ~35 lines
The recipe every built-in adapter follows. Replace myservice with your service name.
import {
defineHttpDrain,
resolveAdapterConfig,
type ConfigField,
} from 'evlog/toolkit'
interface MyServiceConfig {
apiKey: string
endpoint?: string
timeout?: number
}
const FIELDS: ConfigField<MyServiceConfig>[] = [
{ key: 'apiKey', env: ['MYSERVICE_API_KEY'] },
{ key: 'endpoint', env: ['MYSERVICE_ENDPOINT'] },
{ key: 'timeout' },
]
export function createMyServiceDrain(overrides?: Partial<MyServiceConfig>) {
return defineHttpDrain<MyServiceConfig>({
name: 'myservice',
resolve: async () => {
const cfg = await resolveAdapterConfig<MyServiceConfig>('myservice', FIELDS, overrides)
if (!cfg.apiKey) {
console.error('[evlog/myservice] Missing apiKey')
return null
}
return cfg as MyServiceConfig
},
encode: (events, cfg) => ({
url: `${cfg.endpoint ?? 'https://api.myservice.com'}/v1/ingest`,
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${cfg.apiKey}`,
},
body: JSON.stringify(events),
}),
})
}
That's it. defineHttpDrain handles retries (default 2), timeouts (default 5000ms), and error isolation — your app pipeline keeps running even if your destination is down.
Then wire it to your framework. In Nitro, register the drain from a server plugin:
// server/plugins/evlog-drain.ts
import { createMyServiceDrain } from './myservice-drain' // wherever the factory above lives

const drain = createMyServiceDrain()

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', drain)
})
In Next.js, pass it to createEvlog:
// lib/evlog.ts
import { createEvlog } from 'evlog/next'
import { createMyServiceDrain } from './myservice-drain'

const drain = createMyServiceDrain()

export const { withEvlog, useLogger, log, createError } = createEvlog({
  service: 'my-app',
  drain,
})
The other integrations accept the drain the same way, wherever the adapter takes options: app.use(evlog({ drain })), await app.register(evlog, { drain }), EvlogModule.forRoot({ drain }), or initLogger({ drain }).
DrainContext Reference
interface DrainContext {
/** The complete wide event with all accumulated context */
event: WideEvent
/** Request metadata */
request?: {
method: string
path: string
requestId: string
}
/** Safe HTTP headers (sensitive headers filtered) */
headers?: Record<string, string>
}
interface WideEvent {
timestamp: string
level: 'debug' | 'info' | 'warn' | 'error'
service: string
environment?: string
version?: string
region?: string
commitHash?: string
requestId?: string
// ... plus all fields added via log.set()
[key: string]: unknown
}
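To ground the reference, the sketch below hand-writes the lowest-level drain described at the top of this page: a plain function over DrainContext. The interfaces are trimmed local copies so the snippet stands alone, and the field values (my-app, u_42, /checkout) are illustrative.

```typescript
// Trimmed local copies of the reference shapes above.
interface WideEvent {
  timestamp: string
  level: 'debug' | 'info' | 'warn' | 'error'
  service: string
  requestId?: string
  [key: string]: unknown
}

interface DrainContext {
  event: WideEvent
  request?: { method: string; path: string; requestId: string }
  headers?: Record<string, string>
}

// The lowest-level drain: a plain function over DrainContext.
// A real drain would ship ctx.event over the wire; this one records it.
const seen: WideEvent[] = []
const drain = (ctx: DrainContext): void => {
  seen.push(ctx.event)
}

drain({
  event: {
    timestamp: '2025-01-01T00:00:00.000Z',
    level: 'info',
    service: 'my-app',
    requestId: 'req-1',
    userId: 'u_42', // a field added via log.set() lands alongside the rest
  },
  request: { method: 'GET', path: '/checkout', requestId: 'req-1' },
})
```

Note that the index signature is what lets every log.set() field ride along in the same flat object.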
Standardized config priority
resolveAdapterConfig(namespace, fields, overrides) walks the standard chain so users get the same configuration UX as built-in adapters:
- Explicit overrides passed to your factory
- runtimeConfig.evlog.<namespace> (Nuxt/Nitro)
- runtimeConfig.<namespace> (legacy Nuxt/Nitro)
- NUXT_<NS>_<FIELD> env vars
- <NS>_<FIELD> env vars
Field names should follow the project conventions: apiKey, endpoint, serviceName, timeout. If you're renaming an existing field (e.g. token → apiKey), keep both as ConfigField entries for one major version — see axiom.ts and better-stack.ts for the deprecation pattern.
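A sketch of that rename pattern, with a toy resolver standing in for the env-var part of resolveAdapterConfig's chain. The ConfigField shape is simplified and resolveToy is a hypothetical helper, not the toolkit API:

```typescript
// Simplified local ConfigField shape (the real one lives in evlog/toolkit).
interface ConfigField<T> {
  key: keyof T & string
  env?: string[]
}

interface MyServiceConfig {
  apiKey: string
}

// During the deprecation window, accept the old env var after the new one.
const FIELDS: ConfigField<MyServiceConfig>[] = [
  { key: 'apiKey', env: ['MYSERVICE_API_KEY', 'MYSERVICE_TOKEN'] },
]

// Toy resolver: explicit overrides win, then env vars in declared order.
function resolveToy(
  fields: ConfigField<MyServiceConfig>[],
  env: Record<string, string | undefined>,
  overrides?: Partial<MyServiceConfig>,
): Partial<MyServiceConfig> {
  const out: Record<string, string | undefined> = {}
  for (const field of fields) {
    const override = overrides?.[field.key]
    if (override !== undefined) {
      out[field.key] = override
      continue
    }
    for (const name of field.env ?? []) {
      if (env[name] !== undefined) {
        out[field.key] = env[name]
        break
      }
    }
  }
  return out as Partial<MyServiceConfig>
}
```

With both MYSERVICE_API_KEY and MYSERVICE_TOKEN set, the new name wins because it is listed first; an explicit override beats both.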
Filtering and transforming events
encode() receives the full batch of WideEvent[] plus the resolved config. Filter or transform inline:
encode: (events, cfg) => {
const filtered = events.filter(e => e.level === 'error' && e.path !== '/health')
if (filtered.length === 0) return null
const payload = filtered.map(e => ({
ts: new Date(e.timestamp).getTime(),
severity: e.level.toUpperCase(),
attributes: { method: e.method, path: e.path, status: e.status, duration: e.duration },
}))
return {
url: `${cfg.endpoint}/v1/push`,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload),
}
}
Returning null from encode() is a clean opt-out — the drain stays a no-op for that batch.
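Because encode() is pure, you can extract it and unit-test it without touching the network. Below is a self-contained version of the filter above, with WideEvent trimmed to the fields the transform uses and encodeErrors as a hypothetical name for the extracted function:

```typescript
interface WideEvent {
  timestamp: string
  level: 'debug' | 'info' | 'warn' | 'error'
  [key: string]: unknown
}

interface HttpRequestShape {
  url: string
  headers: Record<string, string>
  body: string
}

// Same logic as the inline encode() above, as a testable standalone function.
function encodeErrors(events: WideEvent[], endpoint: string): HttpRequestShape | null {
  const filtered = events.filter(e => e.level === 'error' && e.path !== '/health')
  if (filtered.length === 0) return null // clean opt-out: the batch is skipped

  const payload = filtered.map(e => ({
    ts: new Date(e.timestamp).getTime(),
    severity: e.level.toUpperCase(),
    attributes: { method: e.method, path: e.path, status: e.status, duration: e.duration },
  }))
  return {
    url: `${endpoint}/v1/push`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }
}
```

Feeding it an info-only batch returns null; feeding it an error produces the request shape, so both branches are trivial to assert in a test.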
When you can't use defineHttpDrain
If your destination requires a non-HTTP transport (gRPC, websocket, vendor SDK), drop one level lower with defineDrain:
import { defineDrain, type DrainContext } from 'evlog/toolkit'
export const createCustomTransportDrain = () =>
defineDrain<{ apiKey: string }>({
name: 'custom',
resolve: async () => ({ apiKey: process.env.MY_KEY! }),
send: async (events, cfg) => {
await myVendorSdk.publish(events, { token: cfg.apiKey })
},
})
You still get config resolution, error isolation, and a consistent shape — you just own the wire transport.
Batching
For high-throughput scenarios, use the Drain Pipeline to batch events, retry on failure, and handle buffer overflow automatically:
import type { DrainContext } from 'evlog'
import { createDrainPipeline } from 'evlog/pipeline'
const pipeline = createDrainPipeline<DrainContext>({
batch: { size: 100, intervalMs: 5000 },
})
const drain = pipeline(async (batch) => {
await fetch('https://api.example.com/logs/batch', {
method: 'POST',
body: JSON.stringify(batch.map(ctx => ctx.event)),
})
})
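The batch options can be pictured in isolation. This is a simplified, synchronous sketch of size-triggered batching, not the evlog pipeline (the real one flushes asynchronously, fires on the intervalMs timer, retries, and handles overflow):

```typescript
type Flush<T> = (batch: T[]) => void

// Minimal size-triggered batcher. Kept synchronous to isolate the
// batching logic itself.
function createBatcher<T>(size: number, flush: Flush<T>) {
  let buffer: T[] = []
  return {
    push(item: T): void {
      buffer.push(item)
      if (buffer.length >= size) {
        const batch = buffer
        buffer = [] // swap before flushing so new pushes go to a fresh buffer
        flush(batch)
      }
    },
    // What the intervalMs timer would do: drain whatever is buffered.
    drain(): void {
      if (buffer.length === 0) return
      const batch = buffer
      buffer = []
      flush(batch)
    },
  }
}

const batches: number[][] = []
const batcher = createBatcher<number>(2, batch => batches.push(batch))
batcher.push(1)
batcher.push(2) // hits size: flushes [1, 2]
batcher.push(3)
batcher.drain() // interval tick: flushes the leftover [3]
// batches is now [[1, 2], [3]]
```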
Error Handling — already done for you
defineHttpDrain enforces every best practice automatically:
- Never throws — failures are caught and logged with the [evlog/<name>] prefix.
- Retries — defaults to 2 attempts on transient errors (configurable via retries).
- Timeouts — defaults to 5000ms (configurable via timeout).
- Graceful degradation — resolve() returning null makes the drain a no-op.
If you fall back to defineDrain for non-HTTP transports, follow the same rules manually — wrap the transport in try/catch, log with console.error('[evlog/<name>] …'), and never re-throw.
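Those manual rules can be sketched as a wrapper around your transport. safeSend and withTimeout are hypothetical helper names, and the defaults mirror the ones listed above (2 retries, 5000 ms):

```typescript
type Send = (events: unknown[]) => Promise<void>

// Reject if the transport hangs longer than ms.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    promise.then(
      (value) => { clearTimeout(timer); resolve(value) },
      (error) => { clearTimeout(timer); reject(error) },
    )
  })
}

// Wrap a transport so it retries, times out, logs with the conventional
// prefix, and never throws into the app pipeline.
function safeSend(name: string, send: Send, retries = 2, timeoutMs = 5000): Send {
  return async (events) => {
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        await withTimeout(send(events), timeoutMs)
        return
      } catch (error) {
        if (attempt === retries) {
          // Final attempt failed: log and swallow, never re-throw.
          console.error(`[evlog/${name}]`, error)
        }
      }
    }
  }
}
```

A transport that fails twice and then succeeds makes three attempts in total here; a transport that always fails resolves quietly after logging.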
Next Steps
- Axiom Adapter - See a production-ready adapter implementation
- OTLP Adapter - OpenTelemetry Protocol adapter
- PostHog Adapter - PostHog product analytics adapter
- Best Practices - Security and production tips
- HTTP - Framework-agnostic HTTP log transport for sending client-side logs to your server via fetch or sendBeacon. Works in the browser or any environment with fetch. Uses the `evlog/http` entry point.
- Toolkit - The evlog/toolkit public API — every primitive used to build adapters, enrichers, plugins, and framework integrations.