Models sometimes get worse

In "Models will only get better", I was a bit too enthusiastic about the steady progress of models.

I had just started using the new GPT-5.4 model, and I found it better at following instructions, writing, and formatting answers.

But then I asked whether a niche programming language had a certain built-in library. And boom, a lie: "Yes, it exists. Here's how you can use it..." It was wrong; there was no such built-in library.

Previous models knew that fact, or at least GPT-4o and GPT-5.2 did.

So what? What do you do about this?

I keep asking about topics the models seem to be good at, and I ask niche questions only if I can give the models specific context they can reason about.
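Concretely, "specific context" can be as little as pasting the relevant documentation into the prompt and instructing the model to answer only from it. Here is a minimal sketch of that idea, assuming the OpenAI Python SDK; the model name, the documentation excerpt, and the question are placeholders, not details from the session above.

```python
# A minimal sketch: paste trusted documentation into the prompt so the model
# reasons over real context instead of recalling facts it may hallucinate.
# Assumes the OpenAI Python SDK; model name, docs excerpt, and question are
# placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpt from the language's standard-library docs,
# pasted in by hand or loaded from a file you trust.
stdlib_docs = """
(paste the relevant module list or source excerpt here)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model; pick whichever one you are evaluating
    messages=[
        {
            "role": "system",
            "content": "Answer only from the documentation provided. "
                       "If the documentation does not mention it, say you don't know.",
        },
        {
            "role": "user",
            "content": f"Documentation:\n{stdlib_docs}\n\n"
                       "Does this language ship a built-in CSV parsing library? "
                       "If so, show a short usage example.",
        },
    ],
)

print(response.choices[0].message.content)
```

The point of the system message is to turn a recall question into a reading question: if the library isn't in the pasted context, the model has an easy, sanctioned way to say so instead of inventing one.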

Have you noticed this pattern?

Recent posts

- How I learned the OpenAI Agents SDK by breaking down a Stripe workflow from the OpenAI cookbook (part 1): See how I learned the Stripe API to follow Dan's agent workflow
- Models will only get better: See why overnight improvement is rarely what it seems
- Improve your docs by giving your AI assistant the project's issues: See why a virtual keyboard bug calls for issue-aware docs
- Curious about the tools I use?