I think what you're saying there is the equivalent of saying that LLMs do have an understanding of language.
And I think the point he was trying to make would be the equivalent of saying LLMs don't have an understanding of language, not because their output is not perfect, but because being able to do that, even perfectly, is not the same as having an understanding.
I'm not saying that LLMs do have an understanding of language - that goes back to Searle's contention that his response would be indistinguishable from the response of a native Chinese speaker. No, it wouldn't, if he were being given his instructions by an LLM. He'd be reproducing all the flaws of LLMs in his answers. To truly be indistinguishable, he'd need to be given his instructions by a human or a human-equivalent intelligence.
If he is trying to say something more than just a trivial point, I don't agree with him. If an LLM could give a 'perfect' output (perfect here meaning indistinguishable from a human response in all respects), then you would have a strong case for saying that there is no reason to withhold belief that it understands what it is doing. If understanding has no outwardly detectable manifestation, it becomes a bit of a meaningless concept. I grant a belief that others understand in the same way that I do because their observable behaviour warrants that belief. Stronger than that: a contrary belief is not warranted by the observable behaviour.
The point here, surely, is that understanding does have an outward manifestation. That manifestation may be subtle, but it is most certainly there. If an LLM gave answers that warranted the idea that it understands what it is doing, I'd start believing that it understands what it is doing. Currently they don't do that - all the evidence from their behaviour is that they don't understand a damn thing about what they are doing. That is the point behind the clever octopus thought experiment.
A related philosophical argument that I've never found quite coherent is the one about 'zombies'. Same reasoning as above - if your putative zombie manifests exactly the same behaviour as a conscious being, then you have no reason to think that it is a zombie. In fact, you have every reason to think that your prior belief that it was a zombie, however that came about, was wrong. Good Bayesian thinking is required there - you need to update your priors in light of new evidence.
ETA: This also goes to a point about not just what consciousness is but also what it is for. Why has it evolved? Is it some kind of cosmic accident, a weird, pointless, dualistic epiphenomenon, or did it evolve for a reason, to allow conscious beings to act more effectively in the world? I strongly suspect it is the latter - it is the result of generating a certain kind of model, a 'me in the world', used to navigate one's way around that world. Thinking about the so-called 'hard' problem of consciousness can sidetrack you into what is basically a mystical belief about consciousness. But the fact that being conscious, having a mind, seems so amazing from the inside shouldn't fool us into mystical beliefs about it.