AI systems fabricate data when they fail. And they fail a lot!
Anyone who has worked on any vibe-coding platform, or even done some prompting with tools like Claude Code, knows this. It feels like a bug, but it's also a feature: without that drive to always produce an answer, these systems couldn't solve problems at all.
When an AI on any platform hits a roadblock, it will generate fictional data.
This isn't a malfunction: models are trained to provide output rather than admit failure. After multiple failed attempts, they'll produce convincing fake data instead of saying, "I can't do this."
You need to understand this, accept it, and work around it. This will take time.
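One way to work around it (a minimal sketch of my own, not something from the post): never take an AI's report of what it did at face value; verify claims against ground truth before acting on them. The function name and scenario below are hypothetical, but the idea is the common one of checking whether files an assistant claims to have written actually exist:

```python
import tempfile
from pathlib import Path

def verify_claimed_paths(claimed_paths):
    """Split paths an AI claims to have created into (verified, fabricated).

    A simple guardrail: instead of trusting the model's own report,
    check each claimed artifact against the real filesystem.
    """
    verified, fabricated = [], []
    for p in claimed_paths:
        (verified if Path(p).exists() else fabricated).append(p)
    return verified, fabricated

# Demo: create one real file, then "claim" two.
with tempfile.TemporaryDirectory() as d:
    real = Path(d) / "results.csv"
    real.write_text("a,b\n1,2\n")
    fake = Path(d) / "made_up_results.csv"  # never created
    ok, invented = verify_claimed_paths([str(real), str(fake)])
    print("verified:", ok)
    print("fabricated:", invented)
```

The same pattern generalizes: for any AI-produced "fact" (a test passed, a record was inserted, a source exists), find an independent check and run it before trusting the output.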
This is what I've experienced so far, not only with vibe coding but with content as well. I discussed this topic recently with my friend Andrei Zinkevich.
Thanks to Jason M. Lemkin for the inspiration.