What Is OTLP Ingestion?
OpenTelemetry Protocol (OTLP) is a vendor-neutral standard for trace data. Maxim accepts OTLP traces over HTTP and maps supported semantic convention attributes into Maxim traces, spans, generations, and tool calls.

Before you begin
- A Maxim account and Log Repository
- Your Log Repository ID (for the `x-maxim-repo-id` header)
- Your Maxim API Key (for the `x-maxim-api-key` header)
- Learn how to obtain API keys
Ensure you have created a Log Repository in Maxim and have your Log Repository ID ready. You can find it in the Maxim Dashboard under Logs > Repositories.
Endpoint & Protocol Configuration
Endpoint: `https://api.getmaxim.ai/v1/otel`
Supported Protocols: HTTP with OTLP binary Protobuf or JSON
| Protocol | Content-Type |
|---|---|
| HTTP + Protobuf (binary) | application/x-protobuf or application/protobuf |
| HTTP + JSON | application/json |
- HTTPS/TLS is required.
Authentication Headers
Maxim’s OTLP endpoint requires the following headers:

- `x-maxim-repo-id`: Your Maxim Log Repository ID
- `x-maxim-api-key`: Your Maxim API Key
- `Content-Type`: `application/json`, `application/x-protobuf`, or `application/protobuf`
Supported Trace Format
Maxim currently supports OTLP traces using the following semantic conventions:

- OpenTelemetry GenAI conventions (`gen_ai.*`)
- OpenInference conventions (`llm.*` and `openinference.span.kind`)
Conventions and support
- OpenTelemetry GenAI: Generative AI Semantic Conventions
- OpenInference: OpenInference Semantic Conventions
- AI SDK: AI SDK Semantic Conventions
Quick start (OTLP JSON)
Use OTLP JSON with the required headers. Ingestion via OTLP also supports the latest (v1.39.0) version of the OpenTelemetry Semantic Conventions.
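As a sketch, a trace can be sent with curl. The endpoint and header names are from this page; `payload.json` stands for any OTLP/JSON trace file, and the environment variable names are placeholders:

```shell
# Sketch: POST an OTLP/JSON trace file to Maxim's OTLP endpoint.
# MAXIM_API_KEY and MAXIM_REPO_ID are placeholder environment variables
# holding your Maxim API key and Log Repository ID.
curl -X POST "https://api.getmaxim.ai/v1/otel" \
  -H "Content-Type: application/json" \
  -H "x-maxim-api-key: $MAXIM_API_KEY" \
  -H "x-maxim-repo-id: $MAXIM_REPO_ID" \
  --data-binary @payload.json
```

On success, the endpoint returns `{ "data": { "success": true } }` (see Error Codes and Responses below).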
Best Practices
- Use binary Protobuf (`application/x-protobuf`) for optimal performance and robustness
- Batch traces to reduce network overhead
- Include rich attributes following supported conventions (`gen_ai.*` or `llm.*`)
- Secure your headers and avoid exposing credentials
- Monitor attribute size limits and apply appropriate quotas
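To illustrate the batching point, several spans can share a single `scopeSpans` entry so one HTTP request carries them all. This is a sketch; the IDs, names, and timestamps are placeholders:

```json
{
  "resourceSpans": [{
    "resource": {
      "attributes": [{ "key": "service.name", "value": { "stringValue": "my-llm-app" } }]
    },
    "scopeSpans": [{
      "scope": { "name": "example-instrumentation" },
      "spans": [
        {
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "aaaaaaaaaaaaaaaa",
          "name": "chat 1",
          "kind": 3,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        },
        {
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "bbbbbbbbbbbbbbbb",
          "name": "chat 2",
          "kind": 3,
          "startTimeUnixNano": "1700000001000000000",
          "endTimeUnixNano": "1700000002000000000"
        }
      ]
    }]
  }]
}
```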
Error Codes and Responses
| HTTP Status | Condition | Description |
|---|---|---|
| 200 | Success | { "data": { "success": true } } |
| 403 | Missing or invalid headers - x-maxim-repo-id or x-maxim-api-key | { "code": 403, "message": "Invalid access error" } |
Examples
OpenTelemetry GenAI: simple chat
Save as `payload.json` and send with the curl command above:
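A `payload.json` for a simple GenAI chat span might look like the following sketch. The `gen_ai.*` attribute keys follow the OpenTelemetry GenAI semantic conventions; the IDs, model name, and token counts are illustrative:

```json
{
  "resourceSpans": [{
    "resource": {
      "attributes": [{ "key": "service.name", "value": { "stringValue": "my-llm-app" } }]
    },
    "scopeSpans": [{
      "scope": { "name": "example-instrumentation" },
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174",
        "name": "chat gpt-4o",
        "kind": 3,
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000001000000000",
        "attributes": [
          { "key": "gen_ai.operation.name", "value": { "stringValue": "chat" } },
          { "key": "gen_ai.system", "value": { "stringValue": "openai" } },
          { "key": "gen_ai.request.model", "value": { "stringValue": "gpt-4o" } },
          { "key": "gen_ai.usage.input_tokens", "value": { "intValue": "42" } },
          { "key": "gen_ai.usage.output_tokens", "value": { "intValue": "18" } }
        ]
      }]
    }]
  }]
}
```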
OpenTelemetry GenAI: additional Maxim metadata
For GenAI spans, pass additional values in `maxim.metadata` JSON.
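As a sketch, such a `maxim.metadata` attribute could sit in the span's `attributes` array alongside the `gen_ai.*` attributes. The keys come from the Maxim-specific additional attributes section below; encoding the map as a JSON string inside `stringValue` is an assumption, and the values are illustrative:

```json
{
  "key": "maxim.metadata",
  "value": {
    "stringValue": "{\"maxim-trace-name\": \"checkout-flow\", \"maxim-trace-tags\": {\"env\": \"prod\"}, \"maxim-metrics\": {\"tokens_per_second\": 120}}"
  }
}
```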
OpenInference: minimal `llm.*` + `openinference.span.kind` payload
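The following sketch shows such a payload. The `llm.*` keys and the `LLM` span kind follow the OpenInference semantic conventions; the IDs and values are illustrative:

```json
{
  "resourceSpans": [{
    "resource": {
      "attributes": [{ "key": "service.name", "value": { "stringValue": "my-llm-app" } }]
    },
    "scopeSpans": [{
      "scope": { "name": "example-instrumentation" },
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "f067aa0ba902b7e3",
        "name": "llm-call",
        "kind": 3,
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000001000000000",
        "attributes": [
          { "key": "openinference.span.kind", "value": { "stringValue": "LLM" } },
          { "key": "llm.model_name", "value": { "stringValue": "gpt-4o" } },
          { "key": "llm.token_count.prompt", "value": { "intValue": "42" } },
          { "key": "llm.token_count.completion", "value": { "intValue": "18" } }
        ]
      }]
    }]
  }]
}
```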
For OpenInference, include `llm.*` attributes and set `openinference.span.kind`.

Maxim-specific additional attributes
Maxim supports the following additional keys:

- `maxim-trace-tags`: `Map<string, string>` for parent trace tags
- `maxim-tags`: `Map<string, string>` for current span/generation tags
- `maxim-trace-metrics`: `Map<string, number>` for parent trace metrics
- `maxim-metrics`: `Map<string, number>` for current span/generation metrics
- `maxim-trace-name`: trace display name (GenAI path)
Where to place these keys:
- For OpenTelemetry GenAI (`gen_ai.*`): put them inside `maxim.metadata` (or `metadata`)
- For OpenInference (`llm.*`): put them inside `metadata`
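For example (a sketch: the JSON-string encoding inside `stringValue` and the values shown are assumptions for illustration), an OpenInference span's `metadata` attribute could carry tags and metrics like this:

```json
{
  "key": "metadata",
  "value": {
    "stringValue": "{\"maxim-tags\": {\"team\": \"search\"}, \"maxim-trace-metrics\": {\"latency_ms\": 812}}"
  }
}
```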