I see people say "AI seems to know what it's talking about until it's something you know".
And yes, this is a well-known effect (it's usually described with respect to news media, as the Gell-Mann amnesia effect).
But it's also, frankly, an example of media illiteracy. The effect only works because people treat "confident and plausible" as a proxy for "true". That has been a bad heuristic for a long time, and certainly since the invention of social media.
AI's scribblings don't actually check out in most topic areas – the population at large is just very credulous.