
What Is OTLP Ingestion?

OpenTelemetry Protocol (OTLP) is a vendor-neutral standard for trace data. Maxim accepts OTLP traces over HTTP and maps supported semantic convention attributes into Maxim traces, spans, generations, and tool calls.

Before you begin

  • A Maxim account and a Log Repository
  • Your Log Repository ID (for the x-maxim-repo-id header); you can find it in the Maxim Dashboard under Logs > Repositories
  • Your Maxim API Key (for the x-maxim-api-key header)

Endpoint & Protocol Configuration

Endpoint: https://api.getmaxim.ai/v1/otel
Supported Protocols: HTTP with OTLP binary Protobuf or JSON

Protocol                    Content-Type
HTTP + Protobuf (binary)    application/x-protobuf or application/protobuf
HTTP + JSON                 application/json
Transport Security:
  • HTTPS/TLS is required.

Authentication Headers

Maxim’s OTLP endpoint requires the following headers:
  • x-maxim-repo-id: Your Maxim Log Repository ID
  • x-maxim-api-key: Your Maxim API Key
  • Content-Type: application/json, application/x-protobuf, or application/protobuf
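As a minimal sketch of wiring these headers together, the following Python snippet builds the required header set and posts an OTLP/JSON payload with only the standard library. The function names (`build_headers`, `post_otlp_json`) are illustrative, not part of any Maxim SDK:

```python
import json
import urllib.request

MAXIM_OTLP_ENDPOINT = "https://api.getmaxim.ai/v1/otel"

def build_headers(api_key: str, repo_id: str) -> dict:
    """Headers required by Maxim's OTLP endpoint for JSON payloads."""
    return {
        "Content-Type": "application/json",
        "x-maxim-api-key": api_key,
        "x-maxim-repo-id": repo_id,
    }

def post_otlp_json(payload: dict, api_key: str, repo_id: str) -> dict:
    """Send one OTLP/JSON trace export over HTTPS and return the parsed body."""
    req = urllib.request.Request(
        MAXIM_OTLP_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers=build_headers(api_key, repo_id),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice you would normally configure these same headers on an OpenTelemetry OTLP/HTTP exporter rather than posting payloads by hand.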

Supported Trace Format

Maxim currently supports OTLP traces using the following semantic conventions:
  • OpenTelemetry GenAI conventions (gen_ai.*)
  • OpenInference conventions (llm.*)
  • AI SDK conventions (ai.*)

Quick start (OTLP JSON)

Use OTLP JSON with required headers:
curl -X POST "https://api.getmaxim.ai/v1/otel" \
  -H "Content-Type: application/json" \
  -H "x-maxim-api-key: <MAXIM_API_KEY>" \
  -H "x-maxim-repo-id: <LOG_REPOSITORY_ID>" \
  --data @payload.json
OTLP ingestion also supports the latest version (v1.39.0) of the OpenTelemetry Semantic Conventions.

Best Practices

  • Use binary Protobuf (application/x-protobuf) for optimal performance and robustness
  • Batch traces to reduce network overhead
  • Include rich attributes following supported conventions (gen_ai.*, llm.*, or ai.*)
  • Secure your headers and avoid exposing credentials
  • Monitor attribute size limits and apply appropriate quotas
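To illustrate the batching recommendation, here is a sketch that wraps several finished spans in a single OTLP/JSON export so one HTTP request carries the whole batch. The `batch_spans` helper and the scope name are illustrative assumptions, not Maxim APIs:

```python
import json

def batch_spans(service_name: str, spans: list) -> dict:
    """Wrap many finished spans in one OTLP/JSON export so a single
    request carries the whole batch instead of one request per span."""
    return {
        "resourceSpans": [
            {
                "resource": {
                    "attributes": [
                        {"key": "service.name",
                         "value": {"stringValue": service_name}}
                    ]
                },
                "scopeSpans": [
                    # Hypothetical scope name for this sketch
                    {"scope": {"name": "example-batcher", "version": "0.1.0"},
                     "spans": spans},
                ],
            }
        ]
    }

payload = batch_spans("otel-test-genai", [
    {"traceId": "5f8c7f9a3ef14f67af716ef4cf4a9d23",
     "spanId": "8f2c9c1c6e5d4b2a", "name": "chat gpt-4o", "kind": 2},
    {"traceId": "5f8c7f9a3ef14f67af716ef4cf4a9d23",
     "spanId": "1a2b3c4d5e6f7a8b", "name": "tool call", "kind": 2},
])
```

If you export through an OpenTelemetry SDK, its batch span processor achieves the same effect automatically.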

Error Codes and Responses

HTTP Status   Condition                                               Description
200           Success                                                 { "data": { "success": true } }
403           Missing or invalid x-maxim-repo-id or x-maxim-api-key   { "code": 403, "message": "Invalid access error" }
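A small sketch of handling these two responses in a client, assuming you already have the HTTP status and body text (the `parse_maxim_response` helper is hypothetical):

```python
import json

def parse_maxim_response(status: int, body: str) -> str:
    """Interpret Maxim's OTLP ingest response per the table above."""
    data = json.loads(body)
    if status == 200 and data.get("data", {}).get("success") is True:
        return "ok"
    if status == 403:
        # Missing or invalid x-maxim-repo-id / x-maxim-api-key headers
        return f"auth error: {data.get('message', 'unknown')}"
    return f"unexpected status {status}"
```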

Examples

Save as payload.json and send with the curl command above:
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "otel-test-genai" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "@opentelemetry/instrumentation-genai", "version": "0.1.0" },
          "spans": [
            {
              "traceId": "5f8c7f9a3ef14f67af716ef4cf4a9d23",
              "spanId": "8f2c9c1c6e5d4b2a",
              "name": "chat gpt-4o",
              "kind": "2",
              "startTimeUnixNano": "1739340000000000000",
              "endTimeUnixNano": "1739340000850000000",
              "attributes": [
                { "key": "gen_ai.operation.name", "value": { "stringValue": "chat" } },
                { "key": "gen_ai.provider.name", "value": { "stringValue": "openai" } },
                { "key": "gen_ai.request.model", "value": { "stringValue": "gpt-4o" } },
                { "key": "gen_ai.response.model", "value": { "stringValue": "gpt-4o-2024-08-06" } },
                { "key": "gen_ai.response.id", "value": { "stringValue": "chatcmpl-AYk3gR7Lz5yMPnOGH8kT1wQ" } },
                {
                  "key": "gen_ai.response.finish_reasons",
                  "value": { "arrayValue": { "values": [{ "stringValue": "stop" }] } }
                },
                { "key": "gen_ai.usage.input_tokens", "value": { "intValue": "24" } },
                { "key": "gen_ai.usage.output_tokens", "value": { "intValue": "156" } },
                {
                  "key": "gen_ai.input.messages",
                  "value": {
                    "stringValue": "[{\"role\":\"user\",\"parts\":[{\"type\":\"text\",\"content\":\"Explain the difference between REST and GraphQL APIs in a few sentences.\"}]}]"
                  }
                },
                {
                  "key": "gen_ai.output.messages",
                  "value": {
                    "stringValue": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"REST uses resource-specific endpoints; GraphQL uses a single query endpoint where clients request exact fields.\"}],\"finish_reason\":\"stop\"}]"
                  }
                }
              ],
              "events": [],
              "links": [],
              "status": {},
              "flags": 0
            }
          ]
        }
      ]
    }
  ]
}
For GenAI spans, pass additional values in maxim.metadata JSON.
{
  "key": "maxim.metadata",
  "value": {
    "stringValue": "{\"maxim-trace-name\":\"simple-chat\",\"maxim-trace-tags\":{\"environment\":\"production\"},\"maxim-tags\":{\"prompt-version\":\"v3.2.1\"},\"maxim-trace-metrics\":{\"user-satisfaction\":4.5},\"maxim-metrics\":{\"latency_ms\":870}}"
  }
}
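Because the metadata value is a JSON object serialized as a string, it is easy to get the escaping wrong by hand. A minimal sketch that builds the attribute programmatically (the `maxim_metadata_attribute` helper is illustrative, not a Maxim API):

```python
import json

def maxim_metadata_attribute(trace_name=None, trace_tags=None, span_tags=None,
                             trace_metrics=None, span_metrics=None) -> dict:
    """Build the maxim.metadata OTLP attribute: a JSON object serialized
    as the stringValue, as in the snippet above."""
    meta = {}
    if trace_name:
        meta["maxim-trace-name"] = trace_name
    if trace_tags:
        meta["maxim-trace-tags"] = trace_tags
    if span_tags:
        meta["maxim-tags"] = span_tags
    if trace_metrics:
        meta["maxim-trace-metrics"] = trace_metrics
    if span_metrics:
        meta["maxim-metrics"] = span_metrics
    return {"key": "maxim.metadata",
            "value": {"stringValue": json.dumps(meta)}}

attr = maxim_metadata_attribute(
    trace_name="simple-chat",
    trace_tags={"environment": "production"},
    span_metrics={"latency_ms": 870},
)
decoded = json.loads(attr["value"]["stringValue"])
```

Append the resulting `attr` dict to the span's `attributes` array alongside the gen_ai.* attributes.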
For OpenInference, include llm.* attributes and set openinference.span.kind.
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "openinference-example" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "openinference.instrumentation", "version": "1.0.0" },
          "spans": [
            {
              "traceId": "2ea0f3d8a9d74f86bc889fbeeb2ed5d4",
              "spanId": "3f4a7b9d1c2e8f60",
              "name": "llm response",
              "kind": "2",
              "startTimeUnixNano": "1739340100000000000",
              "endTimeUnixNano": "1739340100320000000",
              "attributes": [
                { "key": "openinference.span.kind", "value": { "stringValue": "LLM" } },
                { "key": "llm.system", "value": { "stringValue": "openai" } },
                { "key": "llm.model_name", "value": { "stringValue": "gpt-4o-mini" } },
                { "key": "llm.token_count.prompt", "value": { "intValue": "36" } },
                { "key": "llm.token_count.completion", "value": { "intValue": "48" } },
                {
                  "key": "metadata",
                  "value": {
                    "stringValue": "{\"maxim-trace-tags\":{\"team\":\"support-ai\"},\"maxim-tags\":{\"workflow\":\"triage\"},\"maxim-trace-metrics\":{\"quality\":0.93},\"maxim-metrics\":{\"latency_ms\":320}}"
                  }
                }
              ],
              "events": [
                {
                  "timeUnixNano": "1739340100050000000",
                  "name": "llm.prompt",
                  "attributes": [
                    { "key": "content", "value": { "stringValue": "{\"content\":\"Summarize the ticket in one sentence.\"}" } }
                  ]
                },
                {
                  "timeUnixNano": "1739340100250000000",
                  "name": "llm.completion",
                  "attributes": [
                    { "key": "content", "value": { "stringValue": "{\"content\":\"Customer requests prorated refund after annual plan cancellation.\"}" } }
                  ]
                }
              ],
              "links": [],
              "status": {},
              "flags": 0
            }
          ]
        }
      ]
    }
  ]
}
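The OpenInference example above carries prompt and completion text in span events whose `content` attribute is itself a JSON string. A sketch of building such events (the `llm_event` helper is illustrative):

```python
import json

def llm_event(name: str, text: str, time_unix_nano: int) -> dict:
    """Build an OpenInference-style span event (llm.prompt / llm.completion)
    whose content attribute is a JSON string, as in the example above."""
    return {
        "timeUnixNano": str(time_unix_nano),
        "name": name,
        "attributes": [
            {"key": "content",
             "value": {"stringValue": json.dumps({"content": text})}}
        ],
    }

prompt_event = llm_event("llm.prompt",
                         "Summarize the ticket in one sentence.",
                         1739340100050000000)
content = json.loads(
    prompt_event["attributes"][0]["value"]["stringValue"])["content"]
```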
For AI SDK spans, use ai.* attributes. Messages can be provided via the ai.prompt.messages attribute or via events (ai.prompt.system, ai.prompt.prompt, ai.response.text, etc.).
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "otel-test-ai" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "@ai-sdk/openai", "version": "1.0.0" },
          "spans": [
            {
              "traceId": "7a0f3c9e2d4b1e8f6c5a9d3b7e1f4c8a",
              "spanId": "9e2c4a1b8d7f3e6c",
              "name": "ai.streamText",
              "kind": "2",
              "startTimeUnixNano": "1739340000000000000",
              "endTimeUnixNano": "1739340000850000000",
              "attributes": [
                { "key": "operation.name", "value": { "stringValue": "ai.streamText" } },
                { "key": "ai.model.provider", "value": { "stringValue": "openai.chat" } },
                { "key": "ai.model.id", "value": { "stringValue": "gpt-4o" } },
                { "key": "ai.response.model", "value": { "stringValue": "gpt-4o-2024-08-06" } },
                { "key": "ai.response.id", "value": { "stringValue": "chatcmpl-AYk3gR7Lz5yMPnOGH8kT1wQ" } },
                { "key": "ai.response.finishReason", "value": { "stringValue": "stop" } },
                { "key": "ai.usage.promptTokens", "value": { "intValue": "24" } },
                { "key": "ai.usage.completionTokens", "value": { "intValue": "156" } },
                {
                  "key": "ai.prompt.messages",
                  "value": {
                    "stringValue": "[{\"role\":\"user\",\"content\":\"Explain the difference between REST and GraphQL APIs in a few sentences.\"}]"
                  }
                },
                {
                  "key": "ai.response.text",
                  "value": {
                    "stringValue": "REST uses resource-specific endpoints; GraphQL uses a single query endpoint where clients request exact fields."
                  }
                }
              ],
              "events": [],
              "links": [],
              "status": {},
              "flags": 0
            }
          ]
        }
      ]
    }
  ]
}
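For the AI SDK path, `ai.prompt.messages` is a serialized JSON array while `ai.response.text` is plain text. A sketch of producing both attributes (the `ai_sdk_message_attributes` helper is illustrative):

```python
import json

def ai_sdk_message_attributes(messages: list, response_text: str) -> list:
    """Serialize AI SDK chat messages and response text into the
    ai.prompt.messages / ai.response.text attributes shown above."""
    return [
        {"key": "ai.prompt.messages",
         "value": {"stringValue": json.dumps(messages)}},
        {"key": "ai.response.text",
         "value": {"stringValue": response_text}},
    ]

attrs = ai_sdk_message_attributes(
    [{"role": "user", "content": "Explain REST vs GraphQL."}],
    "REST uses resource endpoints; GraphQL uses a single query endpoint.",
)
roundtrip = json.loads(attrs[0]["value"]["stringValue"])
```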
For AI SDK spans, pass additional values in maxim.metadata JSON.
{
  "key": "maxim.metadata",
  "value": {
    "stringValue": "{\"maxim-trace-name\":\"ai-sdk-chat\",\"maxim-trace-tags\":{\"environment\":\"production\"},\"maxim-tags\":{\"prompt-version\":\"v3.2.1\"},\"maxim-trace-metrics\":{\"user-satisfaction\":4.5},\"maxim-metrics\":{\"latency_ms\":870}}"
  }
}
On the root span of a trace, set maxim.metadata (or metadata) to a JSON string that includes maxim-session-id. Maxim will create the session (with optional name, tags, and metrics) and attach the OTLP trace to that session.
{
  "key": "maxim.metadata",
  "value": {
    "stringValue": "{\"maxim-session-id\":\"550e8400-e29b-41d4-a716-446655440000\",\"maxim-session-name\":\"Support chat\",\"maxim-session-tags\":{\"channel\":\"web\"},\"maxim-session-metrics\":{\"turns\":1},\"maxim-trace-name\":\"otel-session-example\"}"
  }
}
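A sketch of assembling this session-linking metadata for the root span; only `maxim-session-id` is required, and the `session_metadata` helper name is an assumption for this example:

```python
import json

def session_metadata(session_id: str, name=None, tags=None,
                     metrics=None) -> dict:
    """Build the maxim.metadata attribute that links a trace to a session.
    Only maxim-session-id is required; the other fields are optional."""
    meta = {"maxim-session-id": session_id}
    if name:
        meta["maxim-session-name"] = name
    if tags:
        meta["maxim-session-tags"] = tags
    if metrics:
        meta["maxim-session-metrics"] = metrics
    return {"key": "maxim.metadata",
            "value": {"stringValue": json.dumps(meta)}}

attr = session_metadata("550e8400-e29b-41d4-a716-446655440000",
                        name="Support chat", tags={"channel": "web"})
decoded = json.loads(attr["value"]["stringValue"])
```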
Use the same OTLP traceId for every span that belongs to one logical user interaction so traces group correctly in the UI.
To end a session from OTLP without going through the Logging API, send a root-level span whose only required Maxim-specific attribute is maxim.session.end, with a string value equal to the session id. The span's endTimeUnixNano is used as the session end time. That span does not emit ordinary trace or child-span logs; it only closes the session. If this span appears in the same OTLP payload as other spans for the same conversation, give it the same traceId as those spans so the export appears as a single trace in Maxim.
{
  "traceId": "5f8c7f9a3ef14f67af716ef4cf4a9d23",
  "spanId": "aaaaaaaaaaaaaaaa",
  "name": "session.end",
  "kind": "2",
  "startTimeUnixNano": "1739340000000000000",
  "endTimeUnixNano": "1739340000100000000",
  "attributes": [
    {
      "key": "maxim.session.end",
      "value": { "stringValue": "550e8400-e29b-41d4-a716-446655440000" }
    }
  ],
  "events": [],
  "links": [],
  "status": {},
  "flags": 0
}
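A sketch of constructing such a session-end marker span in code; the `session_end_span` helper and the random span id are assumptions for this example:

```python
import secrets
import time

def session_end_span(trace_id: str, session_id: str) -> dict:
    """Build the marker span that closes a session; its endTimeUnixNano
    becomes the session end time."""
    now = time.time_ns()
    return {
        "traceId": trace_id,
        "spanId": secrets.token_hex(8),  # 8 random bytes -> 16 hex chars
        "name": "session.end",
        "kind": 2,
        "startTimeUnixNano": str(now),
        "endTimeUnixNano": str(now),
        "attributes": [
            {"key": "maxim.session.end",
             "value": {"stringValue": session_id}}
        ],
        "events": [],
        "links": [],
        "status": {},
    }

span = session_end_span("5f8c7f9a3ef14f67af716ef4cf4a9d23",
                        "550e8400-e29b-41d4-a716-446655440000")
```

Reuse the traceId of the conversation's other spans, as noted above, so the export groups into a single trace.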

Linking traces to sessions via OTLP

When you ingest OTLP traces, Maxim can create a session and associate the trace with it if the root span (a span with no parent in the payload, or whose parent is missing from the batch) includes session details inside structured metadata.

Where to put session fields

Put session fields in a JSON object serialized as the string value of either:
  • maxim.metadata, or
  • metadata
The JSON may contain these keys (all except maxim-session-id are optional):
Key                      Purpose
maxim-session-id         Required to link the trace to a session. Non-empty string; becomes the session id in Maxim.
maxim-session-name       Display name for the session.
maxim-session-tags       Object map of string tags applied to the session (values may be coerced to strings).
maxim-session-metrics    Object map of numeric metrics (numbers, or strings that parse as finite numbers).
You can combine these with existing Maxim trace fields in the same JSON object, such as maxim-trace-name, maxim-trace-tags, maxim-tags, maxim-trace-metrics, and maxim-metrics.

Ending a session

A session can be closed in more than one way. Choose the option that fits how you instrument your app (pure OTLP, hybrid with HTTP APIs, or SDK elsewhere).

Behavior you should know

  • Trace end ≠ session end. When a root OTLP span finishes, Maxim records a trace end with that span’s end time. That does not end the session.
  • Session end is a separate signal: it finalizes the session lifecycle (including triggers such as session-level evaluations, depending on your setup).

Option 1 — OTLP span attribute maxim.session.end

Add a span attribute:
  • Key: maxim.session.end
  • Value: string (or value that resolves to a string), equal to the session id you want to close.
Ingest behavior:
  • The span’s endTimeUnixNano is used as the session end timestamp.
  • No Maxim trace or generation logs are produced from that span; it acts as a session-end marker only.
This is useful when everything goes through your OpenTelemetry pipeline and you want to close the session at a specific point in that pipeline.

Option 2 — Public Logging API (POST /v1/logging)

You can end a session with the single-action public logging endpoint, the same API that Maxim SDKs use under the hood for structured logging.

Endpoint: https://api.getmaxim.ai/v1/logging

Headers:
  • Content-Type: application/json
  • x-maxim-api-key: your Maxim API key
Body (session end):
{
  "repoId": "<LOG_REPOSITORY_ID>",
  "entity": "session",
  "action": "end",
  "entityId": "<SESSION_ID>"
}
  • repoId — Log Repository id (same repository you use for OTLP x-maxim-repo-id).
  • entityId — The session id to end (must match the id you used when creating or linking the session).
On success, the API responds with a JSON body that includes confirmation of the entity, action, and entityId.

Use this when an HTTP client or backend that does not speak OTLP needs to explicitly close a session (for example, after a webhook, batch job, or admin action).

Other session actions (create, add-trace, add-tag, add-metric, feedback, evaluate, and more) use the same endpoint with different action and data fields. Refer to your OpenAPI / API reference for the full Logging schema if you need those operations.
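A sketch of issuing the session-end call from Python with only the standard library; the helper names (`session_end_body`, `end_session_via_logging_api`) are illustrative:

```python
import json
import urllib.request

def session_end_body(repo_id: str, session_id: str) -> dict:
    """Single-action Logging API body that ends a session."""
    return {
        "repoId": repo_id,
        "entity": "session",
        "action": "end",
        "entityId": session_id,
    }

def end_session_via_logging_api(api_key: str, repo_id: str,
                                session_id: str) -> dict:
    """POST the session-end body to the public Logging API."""
    req = urllib.request.Request(
        "https://api.getmaxim.ai/v1/logging",
        data=json.dumps(session_end_body(repo_id, session_id)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "x-maxim-api-key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```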

Choosing OTLP vs Logging API for session end

Approach                               Best when
maxim.session.end on an OTLP span      Session boundaries are known inside the traced request flow; you already export OTLP to Maxim.
POST /v1/logging with session + end    A non-OTLP component must end the session, or you already use the public logging API for other entities.
You can mix approaches in the same application: for example, link traces to a session via OTLP metadata, then end the session later from a worker using the Logging API.

Maxim-specific additional attributes

Trace and span metadata (inside JSON)

Maxim supports the following additional keys inside the maxim.metadata or metadata JSON string (in addition to session fields above):
  • maxim-trace-tags: Map<string, string> for parent trace tags
  • maxim-tags: Map<string, string> for current span/generation tags
  • maxim-trace-metrics: Map<string, number> for parent trace metrics
  • maxim-metrics: Map<string, number> for current span/generation metrics
  • maxim-trace-name: trace display name (GenAI and AI SDK paths)

Span-level OTLP attributes

  • maxim.session.end — String value = session id to end; see Ending a session (OTLP maxim.session.end). Not part of the JSON metadata blob; set as a normal OTLP span attribute.
Where to place metadata keys:
  • For OpenTelemetry GenAI (gen_ai.*): put them inside maxim.metadata (or metadata)
  • For OpenInference (llm.*): put them inside metadata
  • For AI SDK (ai.*): put them inside maxim.metadata (or metadata)
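The placement rules above can be sketched as a small helper that picks the carrier attribute per convention; the function name and the convention labels ("genai", "openinference", "ai-sdk") are assumptions for this example:

```python
def metadata_attribute_key(convention: str) -> str:
    """Pick the attribute that carries the Maxim metadata JSON for each
    supported convention, per the placement rules above."""
    # OpenInference spans read the blob from a plain "metadata" attribute;
    # GenAI and AI SDK spans accept maxim.metadata (or metadata).
    if convention == "openinference":
        return "metadata"
    if convention in ("genai", "ai-sdk"):
        return "maxim.metadata"
    raise ValueError(f"unsupported convention: {convention}")
```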