Logging requests when the OpenAI API errors in Python
View the series
- See how I used the OpenAI API to generate audio and images
- See why structured outputs also need hard guardrails
- Grab ready-to-use pytest snippets mocking the OpenAI API
- Add context to OpenAI API error messages for easier debugging
When the OpenAI API returns an error, the HTTP status of the response and the body show up in the error message:
Traceback (most recent call last):
...
openai.BadRequestError: Error code: 400 - {'error': {'message': "The requested model 'foo-model' does not exist.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'model_not_found'}}
This is helpful, but it isn't always enough to debug the code that fails.
I think linking the request to that response gives a clearer picture of the situation.
That's why, in my phrasebook CLI ↗, which generates translations, audio, and images, I decided to log requests to the OpenAI API when it returns an error.
I do this with the following context manager. If the request succeeds, we proceed normally. If an API error happens, we log the request, its headers, and its body, then re-raise the error:
@contextlib.contextmanager
def log_request_info_when_api_error_raised() -> Iterator[None]:
    try:
        yield
    except APIError as exc:
        logger.error(f"{exc.request!r}")
        # It's safe to log httpx headers because their repr
        # sets 'authorization' to '[secure]', hiding our API key
        logger.error(f"Request headers - {exc.request.headers!r}")
        logger.error(f"Request body - {exc.request.content.decode()}")
        raise
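As a side note, context managers built with @contextlib.contextmanager inherit from ContextDecorator, so the same helper can also wrap a whole function as a decorator. Here's a minimal stdlib-only sketch of the pattern, using a plain RuntimeError and a hypothetical call_api function in place of the APIError handling above, so it runs without the openai package:

```python
import contextlib
import logging
from typing import Iterator

logger = logging.getLogger(__name__)

@contextlib.contextmanager
def log_details_when_error_raised() -> Iterator[None]:
    # Stand-in for the APIError handler above: log extra
    # details only when an error happens, then re-raise.
    try:
        yield
    except RuntimeError as exc:
        logger.error(f"Request failed - {exc}")
        raise

# Because @contextlib.contextmanager builds on ContextDecorator,
# the context manager also works as a decorator:
@log_details_when_error_raised()
def call_api() -> None:
    raise RuntimeError("model not found")
```

The decorator form is handy when every call to a function should get the same error logging, while the with-block form lets you scope it to a few lines.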
Here's a complete script where we try to use the model foo-model, which doesn't exist:
from typing import Iterator
import contextlib
import logging
from openai import OpenAI, APIError
logger = logging.getLogger(__name__)
client = OpenAI(api_key="<your-api-key>")
@contextlib.contextmanager
def log_request_info_when_api_error_raised() -> Iterator[None]:
    try:
        yield
    except APIError as exc:
        logger.error(f"{exc.request!r}")
        # It's safe to log httpx headers because their repr
        # sets 'authorization' to '[secure]', hiding our API key
        logger.error(f"Request headers - {exc.request.headers!r}")
        logger.error(f"Request body - {exc.request.content.decode()}")
        raise
with log_request_info_when_api_error_raised():
    response = client.responses.parse(
        model="foo-model",
        input="reply only with foo",
    )
If you run that script, you'll get something like this:
<Request('POST', 'https://api.openai.com/v1/responses')>
Request headers - Headers({'host': 'api.openai.com', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'accept': 'application/json', 'content-type': 'application/json', 'user-agent': 'OpenAI/Python 2.20.0', 'x-stainless-lang': 'python', 'x-stainless-package-version': '2.20.0', 'x-stainless-os': 'Linux', 'x-stainless-arch': 'x64', 'x-stainless-runtime': 'CPython', 'x-stainless-runtime-version': '3.12.0', 'authorization': '[secure]', 'x-stainless-async': 'false', 'x-stainless-retry-count': '0', 'x-stainless-read-timeout': '600', 'content-length': '51'})
Request body - {"input":"reply only with foo","model":"foo-model"}
Traceback (most recent call last):
...
openai.BadRequestError: Error code: 400 - {'error': {'message': "The requested model 'foo-model' does not exist.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'model_not_found'}}
What's interesting about this approach is that we don't have to set the log level to DEBUG and re-run the code, hoping to reproduce the error, just to gather more information. When an error happens, and only when it happens, we log the extra information right away.
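One practical detail: logging.getLogger(__name__) doesn't attach any handlers by itself, so in a script like the one above these error lines only reach stderr through logging's bare last-resort handler, without timestamps or levels. A minimal setup fixes that; the format string here is just a suggestion, not part of the original script:

```python
import logging

# One-time configuration near program startup. Without it, logger.error
# falls back to logging's last-resort handler (message only).
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)

logger = logging.getLogger(__name__)
logger.error("Request body - %s", '{"model": "foo-model"}')
```

With this in place, each logged request line carries a timestamp and the logger name, which helps when correlating errors across a longer run.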
That's all I have for today! Talk soon 👋