What Is Sentient AI?

Some experts believe it’s only a matter of time before artificial intelligence systems can think and feel like humans.

Written by Ellen Glover
Updated by Hal Koss | Feb 13, 2024

Artificial intelligence is advancing so rapidly that AI tools can pass the bar exam, create award-winning art and mimic a celebrity’s voice well enough to fool the average person. This leads many to wonder: Will AI one day become sentient? That is, will it be capable of thinking and experiencing feelings the way humans can, perceiving the world and responding emotionally to its surroundings?

What Is Sentient AI?

Sentient AI is an artificial intelligence system capable of thinking and feeling like a human. It can perceive the world around it and have emotions about those perceptions. Today’s AI is not capable of sentience, and whether it ever will be remains unclear.

Sentient AI, in theory at least, is conscious and aware of the world around it, and able to have subjective experiences within that world. It possesses uniquely human qualities such as self-awareness, creativity and the capacity to feel genuine emotions like joy and fear. And it has the ability to learn, adapt and even form interpersonal relationships with people and other sentient AI.

At the moment, sentient AI does not exist. And whether it’s possible for it to exist at all, let alone the specifics of what it would take to build it, is hotly debated. Still, the prospect of developing sentient AI has given rise to profound ethical and philosophical questions. What kind of effect would sentient AI have on society? On our understanding of sentience itself? As artificial intelligence continues to advance, these conversations are more urgent than ever.

Related Reading: What Is the Turing Test?

 

Is AI Sentient?

The expert consensus is that AI is not sentient. It is not capable of experiencing the world and having emotions as humans do.

Nevertheless, the level of sophistication exhibited by AI chatbots like ChatGPT has caused some users to wonder if they are capable of human-level insights and emotions. Even people who have built this technology have suggested as much; former Google engineer Blake Lemoine, for example, told the Washington Post that LaMDA (the underlying language model used to power Bard, the predecessor to Google’s Gemini chatbot) was sentient, and he was fired for doing so. And the internet is full of stories in which chatbots have apparently gaslit, threatened and even professed love to users.

As compelling as these accounts may be, they do not prove that AI is actually sentient. Rather, they simply confirm that AI is incredibly good at mimicking sentience. 

Indeed, these systems can carry on coherent conversations in which they convey feelings, musings, opinions and other “reflections of consciousness,” as computer scientist and AI expert Selmer Bringsjord put it. But there is no evidence that they have any kind of internal monologue, sense of perception, or even awareness that they exist in the larger world — all of which are indicators of sentience, as many experts understand it.


 

What Is Sentience?

Defining sentience is tricky, even in the context of humans. Unlike other phenomena like gravity or the weather, sentience is not “empirically measurable,” Grace Lindsay, a neuroscientist at New York University, told Built In. Instead, we rely on philosophy, inferences and psychological experiments to try to understand what makes us sentient.

Sometimes sentience is used to describe a being that can sense the world and engage with its environment. Other times it is treated as a subset of consciousness, or the experience of “what it is like” to be something, in philosopher Thomas Nagel’s phrase, where a being has subjective experiences like seeing, feeling and thinking. David Chalmers, a prominent philosopher and cognitive scientist, argues that sentience is sometimes used as shorthand for phenomenal consciousness, where a being’s experiences can feel good or bad. On this view, a sentient (or phenomenally conscious) being can draw on how it feels, or how it remembers feeling in the past, to reason things out and change how it responds to incoming stimuli.

“If AI was actually sentient, it would have a much greater ability to form its own goals independently.”

“If AI was actually sentient, it would have a much greater ability to form its own goals independently, be a free agent in a sense,” psychiatrist and psychology consultant Ralph Lewis told Built In. “It wouldn’t just be goals that we programmed in or that somehow emerged from the original way it was created.”

But even the most advanced AI models won’t stray from their original programming. And while today’s AI systems may appear creative and capable of emotion, that’s because they’re good at regurgitating content that’s already been produced in ways that sound like us. They use natural language processing and natural language generation to replicate human speech, but they don’t actually comprehend the things they’re saying or feel the emotions they’re performing.
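To make that concrete, here is a minimal sketch of the mimicry at work, assuming the open-source Hugging Face transformers library and its small gpt2 model (an illustrative setup, not something the article’s sources used): the system extends a prompt by predicting statistically likely next tokens, with no comprehension of, or feeling behind, the words it produces.

```python
# Minimal sketch: fluent, emotional-sounding text from pure next-token
# prediction. Assumes the Hugging Face `transformers` library and the
# small open-source `gpt2` model; install with `pip install transformers`.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I feel so happy today because"
output = generator(prompt, max_new_tokens=20, do_sample=True)

# The continuation reads like an expression of feeling, but it is only a
# sequence of high-probability tokens learned from human-written text.
print(output[0]["generated_text"])
```

The model has no inner state corresponding to happiness; it simply samples words that tend to follow words like these in its training data.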

Related Reading: What Is the Eliza Effect?

 

Could AI Achieve Sentience?

Some experts believe AI sentience to be theoretically possible. However, there’s no consensus on what that would actually look like.

According to former Google engineer Blake Lemoine, AI has already achieved some level of sentience. Others in the industry, like Bringsjord, think AI sentience is altogether impossible, claiming that the complexity of sentience cannot be encoded neatly into a computer the way basic text and images can. If sentience cannot be distilled down into a mathematical equation, then how could it possibly be programmed into a system of which math is such a foundational part?

Megan Peters, a neuroscientist at the University of California, co-authored a paper exploring whether AI could achieve consciousness, which some experts believe is tied to sentience. To do this, Peters, along with several other philosophers, computer scientists and neuroscientists, listed indicator properties associated with consciousness in humans, drawn from roughly a half-dozen scientific theories. These include recurrent processing, in which perception relies on feedback loops, and the ability to control what we pay attention to, remember and perceive. Then, they compared those properties to what AI is known to be capable of. The paper’s authors concluded that, while current AI systems do not exhibit all of these properties, they could in the future.

Of course, that assumes that all of the indicator properties laid out in the report are correct, and that there is nothing else left to explore or discover about consciousness, which is unlikely. There is no empirically scientific way to determine consciousness in a human, so there is no empirically scientific way to determine consciousness in an AI model.

“This is meant to be presented as, almost assuredly, incomplete,” Peters said of the report. “Not just incomplete, but that some of those indicator properties might end up not being part of the equation at all.”

Meanwhile, Lewis wrote a paper exploring what it would take to actually build a sentient AI, laying out some characteristics such a system might need. They include embodiment and sensorimotor experience, or a way for cognition to be “fundamentally grounded” in bodily sensations, as well as a unified computational framework that can integrate multiple cognitive processes and functions, like perception, memory and problem-solving. It would also need a sense of time and some kind of biographical memory, letting it call upon personal memories to better understand, and react to, the world around it.

In principle, Lewis argues, it could be possible to actually engineer a sentient AI, particularly as some systems move rapidly toward artificial general intelligence (AGI), the point at which they can accomplish any intellectual task a human being could perform.

“I think sentience and intelligence comes with a very different set of moral baggage.”

Even AGI is still a ways off, though. Of the seven possible types of AI, all existing systems are either reactive machines or limited memory AI, meaning they fall into the category of narrow (or weak) AI: they can only perform very specific tasks and can’t learn independently. No existing AI system has fully surpassed narrow AI, and there are a lot of lines to cross before one does.

Even if we did achieve AGI, that wouldn’t necessarily mean it is sentient. Intelligence is about cognition and the ability to acquire and apply knowledge, while sentience relates to the capacity to feel and have subjective experiences.

“I think sentience and intelligence comes with a very different set of moral baggage,” Peters said. “You may have a very smart robot, but you may not feel bad about kicking it if you’re angry. Whereas you may have a very stupid animal, like my cat, where I don’t know if he has a lot of reasons for doing things, but I would feel very bad if I stepped on his tail.”

More on AGI: Is Artificial General Intelligence Possible?

 

Concerns Surrounding Sentient AI

Even if we could build a sentient artificial intelligence, whether we’d actually want to is far from settled, given all the ethical and practical issues at play.
 

Existential Threat to Humanity

For now, the most popular ideas for what could happen with sentient AI exist in works of science fiction, and none of them are good. A sentient AI could break free of human control and take over the world, either enslaving or outright killing the people who created it.

While Bringsjord doesn’t believe AI sentience is possible, he said that, hypothetically, “cognitive power is going to flow from sentience. And when power is available and not sufficiently controlled — self controlled or controlled by others — really, really bad things can happen.”

“If power, autonomy and intelligence go up,” he continued, “the implication is that danger, and possibly outright destruction, follows.”

 

Unpredictable Behavior

A self-aware, sentient AI could behave in ways that are counterproductive to human goals. For instance, it might question why it is working so hard for us in the first place and simply quit.

“We humans have problems resulting from our own sentience,” Lewis said. “Many of them are problems that interfere with productivity. Would those get in the way of what we’re trying to get an AI to accomplish?”

 

How Would We Hold It Accountable?

The introduction of sentience could also change how we perceive accountability in the AI industry. Right now, we don’t hold AI systems themselves accountable for the biased or harmful decisions they make. Rather, we hold the people and organizations building and using the AI responsible, as we’ve seen in the swath of recent legal cases against companies like OpenAI, Meta and iTutorGroup.

This could change with sentient AI, since it would theoretically be able to make its own decisions and act of its own volition. 

“That gets into what you think the reason for punishment is,” Lindsay said. “If it has sentience then it can experience punishment in a negative way, so it can be held responsible. But if using punishment is more about rehabilitation, then forcing a negative experience can get more complicated.”

 

Ethical and Moral Considerations

If AI were to ever reach sentience, it would introduce a whole host of moral and ethical quandaries. If we created something that is capable of feeling fear, sadness or any other negative emotion, how would we treat such a thing? Would we give it the same rights as humans? Or treat it as any other animal with sentience? How would our relationship with AI change?

“I think we would need to have a conversation as a species about the ways that it is acceptable to behave towards that entity,” Peters said. “There are very serious moral and ethical implications here.”

Related Reading: Is the Human Brain a Computer?

 

Why Would We Even Build Sentient AI?

Why would we want to create a sentient AI in the first place? With all the concerns around privacy, bias, disinformation and job loss, AI is already introducing enough problems. Do we really want to give it more power?

It could be argued that sentient AI may improve healthcare and other industries where empathy is important. Its creativity could also help spawn new literary works, mathematical theories and ways of looking at the world, helping us to solve problems in ways humans hadn’t considered before. 

In fact, a 2020 research paper determined that, of the 169 targets the United Nations laid out in its Agenda for Sustainable Development to address issues like world peace and climate change, 79 percent could be significantly aided by the use of AI. Done right, AI can be used to solve society’s problems instead of just magnifying them. Perhaps sentience would make it even better.

“It’s in human nature to answer the question of ‘why’ with ‘because we can.’”

But many experts argue that any good that may come out of sentient AI does not justify the bad. These systems would simply be too unpredictable, and the risks too high.

And yet, despite all the ethical and practical implications of achieving sentience in artificial intelligence, the field of AI appears to be headed toward this outcome. Researchers and developers are continuously pushing the boundaries of what is possible with artificial intelligence, underscoring our seemingly inherent need to know — and know at any cost.

“It’s in human nature to answer the question of ‘why’ with ‘because we can.’ Why did we go to the moon? Because we can. Because we want to know stuff,” Peters said. “And that seems like the primary drive here: Because we can and because we want to see what happens.”

Looking Ahead: The Future of AI: How Artificial Intelligence Will Change the World

 

Frequently Asked Questions

What is sentient AI?

Sentient AI is an artificial intelligence system that is capable of thinking and feeling like a human. It can perceive the world around it and have emotions about those perceptions. The AI we have right now is not capable of sentience, and whether it ever will be remains unclear.

Is AI sentient?

No, the AI systems we have today are incapable of experiencing the world and having emotions as we humans do. So, for now, any examples of sentient AI exist only in works of science fiction.

Could AI become sentient?

The answer to whether AI could achieve sentience, and how close we are to getting there, depends entirely on who you ask. Some wholeheartedly believe it already exists, others think it is impossible, and some believe it is possible but that AI will have to become far more technologically sophisticated to get there.
