Models sometimes get worse
In "Models will only get better" I was a bit too enthusiastic about the steady progress of models.
I had just started using the new GPT-5.4 model and found it better at following instructions, writing, and formatting answers.
But then I asked whether a niche programming language had a particular built-in library. And boom, a lie: "Yes, it exists. Here's how you can use it..." It was wrong. The built-in library was nowhere to be found.
Previous models knew that fact, or at least GPT-4o and GPT-5.2 did.
So what? What do you do about this?
I stick to topics the models seem to be good at, and I save niche questions for when I can give the model specific context it can reason about.
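To make that second habit concrete, here is a minimal sketch of grounding a niche question in pasted documentation, using the OpenAI Python SDK. The model name, the system instruction, and the documentation excerpt are my own placeholders, not a recommendation of any specific setup.

```python
# Minimal sketch: ground a niche question in pasted documentation
# instead of relying on the model's memory of obscure facts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Context the model can reason about: a hand-pasted excerpt from
# the language's standard-library documentation (placeholder).
docs_excerpt = """
<paste the relevant section of the language's documentation here>
"""

question = "Does this language ship a built-in CSV parsing library?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the documentation provided. "
                "If the documentation does not mention something, "
                "say you don't know."
            ),
        },
        {"role": "user", "content": f"{docs_excerpt}\n\n{question}"},
    ],
)
print(response.choices[0].message.content)
```

In my experience, the system instruction does most of the work here: told to answer only from the provided text, a model is much more likely to say "I don't know" than to invent a library that was never there.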
Have you noticed this pattern?