Models sometimes get worse


In *Models will only get better*, I was a bit too enthusiastic about the steady progress of models.

I was just starting to use the new GPT-5.4 model, and I found it better at following instructions, writing, and formatting answers.

But then I asked about the existence of a built-in library in a niche programming language. And boom, a hallucination: "Yes, it exists. Here's how you can use it..." It was wrong; that built-in library doesn't exist.

Previous models knew that fact, or at least GPT-4o and GPT-5.2 did.

So what do you do about this?

I stick to topics the models seem good at, and I ask niche questions only when I can give them specific context to reason about.

Have you noticed this pattern?
