How Often Does ChatGPT Hallucinate?
16/04/24
By: Dan O'Connor
Can generative AI be trusted to provide you with accurate information every time?
Large language models like ChatGPT are revolutionizing how we interact with technology. But a problem has emerged: these models have a tendency to fabricate information, a phenomenon commonly referred to as "hallucination."
Experts estimate that ChatGPT hallucinates between 15% and 30% of the time, delivering seemingly convincing but incorrect or irrelevant responses. This raises concerns about users unknowingly mistaking AI-generated fiction for fact.
The model is fantastic at mimicking human language, but it can struggle with real-world context. It relies on patterns in its training data, which can sometimes be misleading.
These hallucinations can stem from various sources. Biases in the training data can lead the model down rabbit holes of misinformation, while its limited grasp of the physical world can cause it to make nonsensical connections.
The good news? Developers are constantly working to improve these models. OpenAI has made strides in reducing hallucinations, but vigilance is key: double-check any information the model gives you, especially for critical tasks.
So, the next time you chat with ChatGPT, remember: it might be having a creative moment, not an accurate one.