Models sometimes get worse

About: Every Saturday, I share my thoughts on working with AI. Subscribe for free.

In Models will only get better, I was a bit too enthusiastic about the steady progress of models.

I had just started using the new GPT-5.4 model, and I found it better at following instructions, writing, and formatting answers.

But then I asked about the existence of a built-in library in a niche programming language. And boom, a confident hallucination: "Yes, it exists. Here's how you can use it..." It was wrong. No such built-in library existed.

Previous models knew that fact, or at least GPT-4o and GPT-5.2 did.

So what? What do you do about this?

I stick to topics the models seem to be good at, and I ask niche questions only when I can give the models specific context they can reason over.
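That second habit can be sketched in a few lines: instead of asking the model to recall a niche fact from memory, paste the authoritative material (docs, a module index, source code) into the prompt and ask it to answer only from that. The function and example text below are illustrative assumptions, not any specific API.

```python
def grounded_prompt(question: str, context: str) -> str:
    """Wrap a niche question with the context the model should reason over."""
    return (
        "Answer using ONLY the reference material below. "
        "If the answer is not in it, say so.\n\n"
        f"--- Reference material ---\n{context}\n\n"
        f"--- Question ---\n{question}"
    )

# Hypothetical example: ground a stdlib question in the actual module index.
docs = "Standard library modules: io, math, net"
prompt = grounded_prompt("Does the standard library include a JSON module?", docs)
```

The point of the "say so" instruction is to give the model an explicit way out, so it is less tempted to invent a library that isn't in the material.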

Have you noticed this pattern too?
