OpenTelemetry
Point any OTLP/HTTP exporter at EvalGuard to ingest traces, metrics, and logs; no agent install required.
Endpoint: https://evalguard.ai/api/v1/ingest/otlp
Ingest endpoints
OTLP/HTTP with JSON encoding is supported for all three signal types. Protobuf is on the 2026 roadmap; in the meantime, JSON works with the standard OpenTelemetry Collector and every first-party SDK.
| Signal | Path | Max payload |
|---|---|---|
| Traces | POST /api/v1/ingest/otlp/traces | 2 MB |
| Metrics | POST /api/v1/ingest/otlp/metrics | 2 MB |
| Logs | POST /api/v1/ingest/otlp/logs | 2 MB |
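For a quick smoke test without an SDK, you can hand-build the OTLP/JSON body and POST it yourself. Here is a minimal sketch in Python, assuming the requests library and an EVALGUARD_API_KEY environment variable; the service name, IDs, and timings are illustrative values, not EvalGuard requirements:

```python
import os
import time

import requests

# Hand-built OTLP/JSON trace payload -- a minimal sketch of the wire format
# the /traces path accepts. IDs, names, and timings are illustrative.
now_ns = int(time.time() * 1e9)
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "demo-service"}},
        ]},
        "scopeSpans": [{
            "scope": {"name": "manual-test"},
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",  # 16 bytes, hex
                "spanId": "eee19b7ec3c1b174",                   # 8 bytes, hex
                "name": "smoke-test-span",
                "kind": 1,  # SPAN_KIND_INTERNAL
                "startTimeUnixNano": str(now_ns - 1_000_000),
                "endTimeUnixNano": str(now_ns),
            }],
        }],
    }],
}

resp = requests.post(
    "https://evalguard.ai/api/v1/ingest/otlp/traces",
    headers={"Authorization": f"Bearer {os.environ['EVALGUARD_API_KEY']}"},
    json=payload,
)
print(resp.status_code, resp.text)
```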
Authentication
Use your EvalGuard API key (it starts with eg_). The project is inferred from the key; override it per request with the x-project-id header.
Authorization: Bearer eg_live_your_key_here
Content-Type: application/json
x-project-id: proj_optional_override
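The same headers in code, as a minimal sketch; the logs endpoint and the empty body are placeholders, and x-project-id can be dropped when the key's default project is fine:

```python
import os

import requests

headers = {
    "Authorization": f"Bearer {os.environ['EVALGUARD_API_KEY']}",
    "Content-Type": "application/json",
    "x-project-id": "proj_optional_override",  # optional per-request override
}
resp = requests.post(
    "https://evalguard.ai/api/v1/ingest/otlp/logs",
    headers=headers,
    json={"resourceLogs": []},  # empty but well-formed OTLP/JSON body
)
resp.raise_for_status()
```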
OpenTelemetry Collector
Add EvalGuard as an OTLP/HTTP exporter. Your collector routes the same spans to EvalGuard and any existing backend (Jaeger, Tempo, etc.) in parallel.
exporters:
  otlphttp/evalguard:
    # EvalGuard's ingest paths differ from the /v1/<signal> defaults the
    # collector appends to a bare `endpoint`, so set each signal explicitly.
    traces_endpoint: https://evalguard.ai/api/v1/ingest/otlp/traces
    metrics_endpoint: https://evalguard.ai/api/v1/ingest/otlp/metrics
    logs_endpoint: https://evalguard.ai/api/v1/ingest/otlp/logs
    headers:
      Authorization: "Bearer eg_live_your_key"
    encoding: json
    compression: gzip

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/evalguard]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/evalguard]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/evalguard]

With a bare endpoint, the otlphttp exporter appends /v1/traces, /v1/metrics, and /v1/logs automatically; those defaults do not match EvalGuard's ingest paths, so the signal-specific traces_endpoint, metrics_endpoint, and logs_endpoint settings above spell out the full URLs.
Node.js SDK
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "https://evalguard.ai/api/v1/ingest/otlp/traces",
    headers: { Authorization: `Bearer ${process.env.EVALGUARD_API_KEY}` },
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

Python SDK
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://evalguard.ai/api/v1/ingest/otlp/traces",
            headers={"Authorization": f"Bearer {os.environ['EVALGUARD_API_KEY']}"},
        )
    )
)
trace.set_tracer_provider(provider)

LLM semantic conventions
EvalGuard indexes spans on the standard OTel GenAI attributes. Your model, prompt, completion, and token counts flow through without custom mapping.
- gen_ai.request.model — indexed as model
- gen_ai.usage.prompt_tokens / completion_tokens — aggregated into cost
- gen_ai.response.finish_reason — surfaced in the trace waterfall
- llm.model (legacy) — also accepted
- service.name (resource attr) — becomes the trace service column
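For example, with the tracer provider from the Python SDK section already configured, a hypothetical LLM call could attach these attributes directly:

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm-demo")

# Hypothetical LLM call instrumented with the attributes listed above; set
# them on any span and EvalGuard indexes them without custom mapping.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    # ... invoke the model here ...
    span.set_attribute("gen_ai.usage.prompt_tokens", 412)
    span.set_attribute("gen_ai.usage.completion_tokens", 87)
    span.set_attribute("gen_ai.response.finish_reason", "stop")
```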
Response format
Ingest returns the standard OTLP partial-success response, so your exporter can see how many spans were rejected and why. A fully accepted batch reports zero rejections:
{
"partialSuccess": {
"rejectedSpans": 0,
"errorMessage": ""
}
}

When some spans are rejected, the same shape carries the count and the reason:

{
"partialSuccess": {
"rejectedSpans": 3,
"errorMessage": "3 spans failed DB persistence; others accepted"
}
}

Quotas
- Traces are counted against your plan's monthly trace quota.
- A 429 with error_code: QUOTA_EXCEEDED means you hit the cap; upgrade or wait for the reset.
- A payload over 2 MB returns a 413. Batch your exporter to stay under that limit.
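Putting the response contract together, here is a hedged sketch of exporter-side handling; the post_otlp helper and its error strategy are illustrative, not part of EvalGuard's SDKs:

```python
import requests

def post_otlp(url: str, headers: dict, payload: dict) -> dict:
    # Hypothetical helper illustrating the status codes documented above.
    resp = requests.post(url, headers=headers, json=payload)
    if resp.status_code == 429:
        # error_code: QUOTA_EXCEEDED -- monthly quota hit; upgrade or wait for reset.
        raise RuntimeError(f"quota exceeded: {resp.text}")
    if resp.status_code == 413:
        # Over the 2 MB cap; shrink the exporter's batch size and resend.
        raise RuntimeError("payload too large: reduce batch size")
    resp.raise_for_status()
    body = resp.json()
    partial = body.get("partialSuccess") or {}
    if partial.get("rejectedSpans"):
        # Rejected spans were dropped server-side; per OTLP semantics they
        # should not be re-sent, so log the reason and move on.
        print(f"{partial['rejectedSpans']} spans rejected: {partial.get('errorMessage')}")
    return body
```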