AI hallucination refers to a response provided by a large language model that appears to be correct but is not grounded in factual accuracy.
This has become one of the most notorious risks of using AI. More often than not, however, the term hallucination is misused by managers with little-to-no AI experience as a way to signal that they know what they’re talking about.
But are AI hallucinations really so prominent? Or is the term simply overused, thanks to those aforementioned technical experts, as a label for any response we assume to be incorrect?
Let’s go on a mind-bending journey, where we take dynamic meaning theory (DMT) into consideration, explore the concept of hallucinations and question whether it’s the AI that’s hallucinating or whether it’s us, as users, who have the communication problem.
What Is Dynamic Meaning Theory?
Dynamic meaning theory is the idea that when two parties communicate, in this case the user and the AI, an understanding is being exchanged between them. But the limitations of each party’s language and knowledge of the subject cause a misalignment in how the response is interpreted.
Communication is key in any relationship, and it’s in our nature as humans to assume that our own perception and perspective of the world is the only one. Misconstrued interpretations can be seen throughout history, where symbols and localized information were used to describe concepts because the words required were either not known or didn’t exist.
For example, one of the two major Sanskrit epics of ancient India, the Mahabharata, contains several references to Vimanas, described as flying, fiery chariots in the sky, capable of traveling anywhere and of being invisible to enemies. They were both depicted in art and described in text, and taken literally, we would assume that actual chariots, pulled by horses, were seen crossing the sky. In a time when there were certainly no cars, let alone aircraft, it makes you wonder: What were ancient Indians seeing and trying to depict?
Accounts from the past can not only have gaps in their concepts, they can also be lost in translation, lose their present-day relevance or, as in the previous example, suffer a combination of both.
How Language Shift Dynamics May Explain AI Hallucinations
One possible explanation for these seemingly contradictory responses lies in the realm of language shift dynamics (LSD). LSD refers to the process by which language evolves over time, influenced by factors such as cultural shifts, technological advancements and societal changes. As language adapts to new contexts and new meanings emerge, so, too, does our understanding and interpretation of information.
In the case of AI-generated responses, it is possible that the underlying algorithms are not yet fully equipped to accurately interpret or generate text in a way that aligns with the expectations we have as humans. This discrepancy can lead to responses that may seem accurate on the surface but ultimately lack the depth or nuance required for true understanding.
This problem can only be handled by a future evolution of AI, one in which your knowledge of a subject, and the limits of that knowledge, are used to shape the answer you are expecting. Both what you are attempting to ask and your ability to understand a response would need to be taken into consideration to craft answers personally suited to your way of thinking.
To illustrate this concept further: imagine having a conversation with the latest GPT model about the concept of love. The user may use words like passion, devotion and commitment to describe their understanding of love, while the AI may respond with phrases such as “emotional attachment” or “romantic interest.” These responses may appear similar on the surface, but they represent fundamentally different understandings of the concept.
This discrepancy can be attributed to the fact that language is constantly evolving and adapting to new contexts. As we continue to refine our understanding of complex concepts like love, so, too, must AI systems evolve in order to accurately interpret and generate text that aligns with these shifting meanings.
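To make that gap concrete, here’s a minimal sketch, assuming the OpenAI Python SDK (v1+), an API key in the environment and an illustrative model name, of the same question asked with and without the user’s own vocabulary supplied as context. Nothing here is specific to love; the point is that the second call tells the model which framing the user actually means rather than leaving it to guess.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1+) and an API key in the
# environment. The model name and all of the wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Without context: the model falls back on its own generic framing of "love."
bare = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is love?"}],
)

# With context: the user's own vocabulary (passion, devotion, commitment) is
# supplied up front, so the answer is anchored to the user's framing instead.
framed = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "When I say 'love' I mean passion, devotion and commitment "
                "in a long-term relationship. Answer in those terms."
            ),
        },
        {"role": "user", "content": "What is love?"},
    ],
)

print(bare.choices[0].message.content)
print(framed.choices[0].message.content)
```

The first response leans on the model’s generic associations; the second is at least anchored to the user’s stated understanding of the word.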
How to Determine Whether It’s the AI or the User That’s Hallucinating
It’s essential for both developers and users of AI technology to keep dynamic meaning theory in mind when reading the responses they receive, and to stay aware of the dynamics of the language they use in their input.
When someone asks, “Did you see that show on Netflix?” what do you say? Have you ever guessed the show correctly?
If so, how can there possibly be an understanding of something so vague and generic? Out of thousands of shows, how is a person able to not only guess correctly, but speak with confidence and continue the conversation?
Remember, context is key. And, as humans, most of our context is understood through unspoken means, whether that be body language, societal trends or even our tone. We can assume which show it is because it’s what everyone is talking about, it’s what is being advertised and it’s probably what’s being recommended to us. Even if that’s not the case, our connection with the person asking, along with their likes and interests, is enough context to assume the show.
We humans are a funny bunch, and we, too, have the potential to hallucinate in response to questions like the one above. But with the current iteration of AI, our human-to-human way of building understanding isn’t so easily contextualized, so we need to be more critical of the context we provide in writing.
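As a rough illustration, the sketch below spells out in writing the kind of unspoken context a friend would already have. The helper function and the context fields are hypothetical; the point is that shared history, trends and tastes have to be written down before a vague question becomes answerable for a model.

```python
# A rough sketch of spelling out the "unspoken" context a friend would already
# have. The helper and its fields are hypothetical; the point is that shared
# history, trends and tastes must be made explicit for a model.
def build_prompt(question: str, shared_context: dict[str, str]) -> str:
    """Prepend explicit context lines to an otherwise vague question."""
    context_lines = [f"- {key}: {value}" for key, value in shared_context.items()]
    return "Context:\n" + "\n".join(context_lines) + f"\n\nQuestion: {question}"

prompt = build_prompt(
    "Did you see that show on Netflix?",
    {
        "what we watched together last month": "a slow-burn crime drama",
        "what I keep recommending to you": "the new sci-fi series everyone is posting about",
        "my usual taste": "documentaries and thrillers, never reality TV",
    },
)
print(prompt)
```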
When asking and expecting something of an AI, it’s important to consider the following:
- What is my expected outcome in this situation?
- Is how I’m framing my question the best way to get what I expect?
- Am I sure the words I’m using accurately depict my request?
By working backwards from our expectations, we can capture the context we expect from the conversation, whether it be with an AI or another person, and improve our own communication skills.
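Here’s one way to make that backwards-working habit concrete: a minimal sketch in which a hypothetical PromptPlan structure forces you to answer the three questions above before anything is sent to a model. The class and its field names are illustrative, not part of any particular library.

```python
# A minimal sketch of working backwards from the expected outcome. The
# PromptPlan class and its field names are hypothetical; they simply force an
# answer to the three questions before the prompt is ever sent.
from dataclasses import dataclass, field


@dataclass
class PromptPlan:
    expected_outcome: str  # What is my expected outcome in this situation?
    framing: str           # Is my framing the best way to get what I expect?
    key_terms: dict[str, str] = field(default_factory=dict)  # Do my words depict my request?

    def to_prompt(self) -> str:
        """Assemble a prompt that states the expectation and defines key terms."""
        definitions = "\n".join(
            f"- '{term}' means {meaning}" for term, meaning in self.key_terms.items()
        )
        return (
            f"I expect: {self.expected_outcome}\n"
            f"Terms as I use them:\n{definitions}\n\n"
            f"{self.framing}"
        )


plan = PromptPlan(
    expected_outcome="a three-bullet summary I can paste into a status update",
    framing="Summarize the attached meeting notes for a non-technical manager.",
    key_terms={"summary": "key decisions and owners only, no discussion detail"},
)
print(plan.to_prompt())
```

Stating the expected outcome first tends to expose vague framing and ambiguous terms before the model ever has a chance to “hallucinate” around them.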
Eventually, AI will be able to fill in these gaps for us, as the information we openly and carelessly distribute is collected and used to train systems that understand more about us, our interests and our goals than we probably know ourselves.
In the near future, biometric identification through voice will be the next major stepping stone toward this, with our voice inputs categorized and then analyzed for use as additional context. AI will soon be able to understand and identify common occurrences, such as emergency situations, moments when we’re unsure or even moments when we’re unwilling to answer truthfully, all in addition to identifying who we are.