OpenAI API and structured logging in Python

tl;dr: I like the OpenAI dashboard Logs, but API errors are missing. Sometimes I'd really like to have the request along with the error. I show here how to wire httpx event_hooks into OpenAI to log every request and response in JSON format.
View the series
  1. See how I used the OpenAI API to generate audio and images
  2. See why structured outputs also need hard guardrails
  3. Grab ready-to-use pytest snippets mocking the OpenAI API
  4. Add context to OpenAI API error messages for easier debugging
  5. Learn how to log OpenAI API calls in JSON format

I really like the Logs interface in the OpenAI dashboard ↗.

Unfortunately, API errors aren't available in that interface.

So if you don't log requests and responses yourself, when the OpenAI API returns an error you'll miss details you may need to debug your code.

Note that you can contact OpenAI support with the x-request-id HTTP header from API responses. So I'm sure those errors are logged somewhere. They just aren't exposed in the dashboard.

In my previous post, I showed how to log requests, but only when the OpenAI API returns an error. I use this in my phrasebook CLI ↗, which generates translations, audio, and images.

Below, I share a method to log every OpenAI API request and response. It uses the event_hooks parameter of the httpx ↗ client. We pass it to the OpenAI class so each request sent by httpx and each response received by httpx gets logged in JSON format. This works because the OpenAI API Python library ↗ is built on the httpx HTTP client.

If you want to use this with streaming responses, you need to adapt it a bit. The httpx response hook runs as soon as the response is received, and we force reading the body stream so we can log it. That's fine for non-streaming responses. (httpx would read it right after the hook anyway.) But for streaming responses, this would block until the whole stream is consumed, which defeats the point.

Here is a script with two calls to the OpenAI API. The first returns a good response. The second returns an error because the model doesn't exist.

# log_req_and_resp.py
import json
import logging
import time

import httpx
from openai import OpenAI, DefaultHttpxClient, APIError

logging.basicConfig(format="{message}", style="{")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


# Don't use this with streaming responses: the response hook
# reads the whole body stream so it can be logged.
class StructuredMsg:
    def __init__(
        self,
        request: httpx.Request | None = None,
        response: httpx.Response | None = None,
    ):
        self.timestamp = int(time.time())
        if request:
            self.type = "request"
            self.headers = request.headers
            self.body = request.content
            self.method = request.method
            self.url = request.url

        if response:
            self.type = "response"
            self.headers = response.headers
            if not response.is_stream_consumed:
                response.read()
            self.body = response.content
            self.status_code = response.status_code
            self.url = response.url

    def __str__(self):
        headers = dict(self.headers)
        if "authorization" in headers:
            headers["authorization"] = "[secure]"

        body = None
        if self.body:
            try:
                body = json.loads(self.body)
            except json.JSONDecodeError:
                body = None

        record = {
            "type": self.type,
            "timestamp": self.timestamp,
            "headers": headers,
            "body": body,
            "url": str(self.url),
        }

        if self.type == "request":
            record["method"] = self.method

        if self.type == "response":
            record["status_code"] = self.status_code

        return json.dumps(record)


def log_request(request: httpx.Request) -> None:
    logger.info(StructuredMsg(request=request))


def log_response(response: httpx.Response) -> None:
    logger.info(StructuredMsg(response=response))


client = OpenAI(
    http_client=DefaultHttpxClient(
        event_hooks={"request": [log_request], "response": [log_response]}
    )
)

client.responses.create(
    model="gpt-5.2",
    input="reply only with foo",
    max_output_tokens=256,
)

# API error because the model doesn't exist
# We still log the request and response which won't appear
# in OpenAI dashboard (or I don't know where to find them).
try:
    client.responses.create(
        model="foo-model",
        input="reply only with foo",
        max_output_tokens=256,
    )
except APIError:
    pass

You can run it like this, replacing <your-api-key> with your OpenAI API key:

$ uv init
$ uv add openai
$ export OPENAI_API_KEY=<your-api-key>
$ uv run log_req_and_resp.py

You can also run it like this, redirecting stderr to stdout and piping into jq so the JSON logs get pretty-printed:

$ uv run log_req_and_resp.py 2>&1 | jq
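Since every log line is a standalone JSON object, jq can also filter them. For example, this keeps only the error messages from failed responses; the filter is mine, and the sample lines mimic the script's output:

```shell
# Two sample log lines: one error response, one request.
printf '%s\n' \
  '{"type":"response","status_code":400,"body":{"error":{"message":"The requested model does not exist."}}}' \
  '{"type":"request","body":null}' \
  | jq -r 'select(.type == "response" and .body.error != null) | .body.error.message'
```

Against the real script, pipe its stderr into the same jq filter instead of printf.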

First request - OK

{
  "type": "request",
  "timestamp": 1770992260,
  "headers": {
    "host": "api.openai.com",
    "accept-encoding": "gzip, deflate",
    "connection": "keep-alive",
    "accept": "application/json",
    "content-type": "application/json",
    "user-agent": "OpenAI/Python 2.15.0",
    "x-stainless-lang": "python",
    "x-stainless-package-version": "2.15.0",
    "x-stainless-os": "Linux",
    "x-stainless-arch": "x64",
    "x-stainless-runtime": "CPython",
    "x-stainless-runtime-version": "3.12.0",
    "authorization": "[secure]",
    "x-stainless-async": "false",
    "x-stainless-retry-count": "0",
    "x-stainless-read-timeout": "600",
    "content-length": "73"
  },
  "body": {
    "input": "reply only with foo",
    "max_output_tokens": 256,
    "model": "gpt-5.2"
  },
  "url": "https://api.openai.com/v1/responses",
  "method": "POST"
}

First response - OK

{
  "type": "response",
  "timestamp": 1770992262,
  "headers": {
    "date": "Fri, 13 Feb 2026 14:17:42 GMT",
    "content-type": "application/json",
    "transfer-encoding": "chunked",
    "connection": "keep-alive",
    "server": "cloudflare",
    "x-ratelimit-limit-requests": "5000",
    "x-ratelimit-limit-tokens": "1000000",
    "x-ratelimit-remaining-requests": "4999",
    "x-ratelimit-remaining-tokens": "1000000",
    "x-ratelimit-reset-requests": "12ms",
    "x-ratelimit-reset-tokens": "0s",
    "openai-version": "2020-10-01",
    "openai-organization": "user-<redacted>",
    "openai-project": "proj_<redacted>",
    "x-request-id": "req_<redacted>",
    "openai-processing-ms": "838",
    "cf-cache-status": "DYNAMIC",
    "set-cookie": "__cf_bm=JNO0_sQx0bQqX.QUMC23T5brH8czZRrWeJK2joLv.Ts-1770992260.8344543-1.0.1.1-YPr1_q8FHUoqw3A0Qw0cAW4VSPjzWHpWY5MGTDkCrkN8.m8jua7UbBIbIRF_2GD0ZMn49Y.ZtdJ33KEF07zEJHMqz9bPqrecZa4MJgWJwFF_iyNZXv16MOZmMxMmQ8Kd; HttpOnly; Secure; Path=/; Domain=api.openai.com; Expires=Fri, 13 Feb 2026 14:47:42 GMT",
    "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
    "x-content-type-options": "nosniff",
    "content-encoding": "gzip",
    "cf-ray": "9cd4f35e3e9d28ef-MAD",
    "alt-svc": "h3=\":443\"; ma=86400"
  },
  "body": {
    "id": "resp_<redacted>",
    "object": "response",
    "created_at": 1770992261,
    "status": "completed",
    "background": false,
    "billing": {
      "payer": "developer"
    },
    "completed_at": 1770992262,
    "error": null,
    "frequency_penalty": 0,
    "incomplete_details": null,
    "instructions": null,
    "max_output_tokens": 256,
    "max_tool_calls": null,
    "model": "gpt-5.2-2025-12-11",
    "output": [
      {
        "id": "msg_<redacted>",
        "type": "message",
        "status": "completed",
        "content": [
          {
            "type": "output_text",
            "annotations": [],
            "logprobs": [],
            "text": "foo"
          }
        ],
        "role": "assistant"
      }
    ],
    "parallel_tool_calls": true,
    "presence_penalty": 0,
    "previous_response_id": null,
    "prompt_cache_key": null,
    "prompt_cache_retention": null,
    "reasoning": {
      "effort": "none",
      "summary": null
    },
    "safety_identifier": null,
    "service_tier": "default",
    "store": true,
    "temperature": 1,
    "text": {
      "format": {
        "type": "text"
      },
      "verbosity": "medium"
    },
    "tool_choice": "auto",
    "tools": [],
    "top_logprobs": 0,
    "top_p": 0.98,
    "truncation": "disabled",
    "usage": {
      "input_tokens": 10,
      "input_tokens_details": {
        "cached_tokens": 0
      },
      "output_tokens": 5,
      "output_tokens_details": {
        "reasoning_tokens": 0
      },
      "total_tokens": 15
    },
    "user": null,
    "metadata": {}
  },
  "url": "https://api.openai.com/v1/responses",
  "status_code": 200
}

Second request - wrong model

{
  "type": "request",
  "timestamp": 1770992262,
  "headers": {
    "host": "api.openai.com",
    "accept-encoding": "gzip, deflate",
    "connection": "keep-alive",
    "accept": "application/json",
    "content-type": "application/json",
    "user-agent": "OpenAI/Python 2.15.0",
    "x-stainless-lang": "python",
    "x-stainless-package-version": "2.15.0",
    "x-stainless-os": "Linux",
    "x-stainless-arch": "x64",
    "x-stainless-runtime": "CPython",
    "x-stainless-runtime-version": "3.12.0",
    "authorization": "[secure]",
    "x-stainless-async": "false",
    "x-stainless-retry-count": "0",
    "x-stainless-read-timeout": "600",
    "cookie": "__cf_bm=JNO0_sQx0bQqX.QUMC23T5brH8czZRrWeJK2joLv.Ts-1770992260.8344543-1.0.1.1-YPr1_q8FHUoqw3A0Qw0cAW4VSPjzWHpWY5MGTDkCrkN8.m8jua7UbBIbIRF_2GD0ZMn49Y.ZtdJ33KEF07zEJHMqz9bPqrecZa4MJgWJwFF_iyNZXv16MOZmMxMmQ8Kd",
    "content-length": "75"
  },
  "body": {
    "input": "reply only with foo",
    "max_output_tokens": 256,
    "model": "foo-model"
  },
  "url": "https://api.openai.com/v1/responses",
  "method": "POST"
}

Second response - wrong model

{
  "type": "response",
  "timestamp": 1770992262,
  "headers": {
    "date": "Fri, 13 Feb 2026 14:17:42 GMT",
    "content-type": "application/json",
    "content-length": "175",
    "connection": "keep-alive",
    "server": "cloudflare",
    "openai-version": "2020-10-01",
    "openai-organization": "user-<redacted>",
    "openai-project": "proj_<redacted>",
    "x-request-id": "req_<redacted>",
    "openai-processing-ms": "74",
    "cf-cache-status": "DYNAMIC",
    "set-cookie": "__cf_bm=wkl7dlMOLm7grKd7GO0IkOXh_qvpX9iWr7aB839Igfg-1770992262.5475473-1.0.1.1-z2Vu6kW6.wZDFwTpc5CmouXcE7_CvhEhva.reG6PNrCxDQQzkXOtEe5f5htEF116vaju2uj2Kox.5oazM.OzzOqAFGsldvkaIbPYbGqbnyCROAVh2sGBJpkDB6rNhfLZ; HttpOnly; Secure; Path=/; Domain=api.openai.com; Expires=Fri, 13 Feb 2026 14:47:42 GMT",
    "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
    "x-content-type-options": "nosniff",
    "cf-ray": "9cd4f368ef3d28ef-MAD",
    "alt-svc": "h3=\":443\"; ma=86400"
  },
  "body": {
    "error": {
      "message": "The requested model 'foo-model' does not exist.",
      "type": "invalid_request_error",
      "param": "model",
      "code": "model_not_found"
    }
  },
  "url": "https://api.openai.com/v1/responses",
  "status_code": 400
}
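One thing to notice in the output above: only the authorization header is masked, while the Cloudflare cookie and set-cookie headers are logged verbatim. If you ship these logs anywhere, you may want to extend the redaction in __str__; a sketch, where the header list is my choice:

```python
def redact_headers(headers: dict) -> dict:
    """Mask headers that carry secrets before they reach the logs."""
    sensitive = {"authorization", "cookie", "set-cookie"}
    return {
        key: "[secure]" if key.lower() in sensitive else value
        for key, value in headers.items()
    }
```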

That's all I have for today! Talk soon 👋
