What Is Cognitive Computing?

Cognitive computing aims to mimic the way the human brain works, pushing the limits of what people and machines can do together.

Written by Ellen Glover
Updated by Matthew Urwin | Jan 09, 2024

In 2011, factory floors across the country began welcoming a new employee: a six-foot-tall, 300-pound robot equipped with two long, dexterous arms and a pair of expressive digital eyes. Its name was Baxter.

Although Baxter’s success was short-lived, it helped usher in a new age of automation in which machines could work safely and harmoniously with humans. The collaborative robot pushed the limits of what humans and machines can accomplish together, thanks largely to cognitive computing — the use of computerized models to emulate the human brain.

By mimicking human thought and problem-solving, cognitive computing systems (like the AI used in Baxter) are designed to create a more “symbiotic relationship” between humans and technology, said JT Kostman, a leading expert in applied artificial intelligence and cognitive computing whose clients have ranged from Samsung to Barack Obama’s 2012 presidential campaign. He’s now the CEO of software startup ProtectedBy.AI.

“We are entering an age where cognitive computing in particular will unburden us and allow people to become more quintessentially human,” Kostman told Built In. “And I’m not talking about the far future. It’s already begun, and we’re going to see that accelerate rapidly.”

 

What Is Cognitive Computing?

Cognitive computing is the use of computerized models to not only process information in pre-programmed ways, but also look for new information, interpret it and take whatever actions they deem necessary. These systems are able to formulate responses on their own, rather than adhering to a prescribed set of responses.

This is meant to simulate the human thought process in complex situations, particularly where the answers may be ambiguous or uncertain, to provide decision-makers with the information they need to make better data-based decisions. It’s also used to build deeper relationships with people, whether they are customers, prospective employees or patients.

“Cognitive systems are probabilistic, meaning they are designed to adapt and make sense of the complexity and unpredictability of unstructured information,” John E. Kelly, a senior vice president and director of IBM Research, explained in a 2015 white paper. “They can ‘read’ text, ‘see’ images, and ‘hear’ natural speech. And they interpret that information, organize it, and offer explanations of what it means, along with the rationale of their conclusions.”

 

How Does Cognitive Computing Work?

Cognitive computing systems use artificial intelligence and its many underlying technologies, including neural networks, natural language processing, object recognition, robotics, machine learning and deep learning. By combining these processes with self-learning algorithms, data analysis and pattern recognition — and constantly ingesting new information in the form of vast amounts of data — computers can be taught and, by extension, “think” about problems and come up with plausible solutions.

Muddu Sudhakar, CEO of tech company Aisera, likens cognitive computing to the process of teaching a child. As children grow up, people teach them things with pictures and words. In cognitive computing, this is known as ontology, or the teaching of what is. People also use dictionaries and books to teach children not only what certain words mean, but the entire context of those words — a process known as taxonomy. For instance, “weather” relates to things like temperature, precipitation and seasons. People also teach children by exhibiting behavior they hope the child will replicate and deterring behavior they don’t like. In cognitive computing, that learning piece is called reinforcement learning.

“If you add all these three things — start with the basic information, ontology and taxonomy, and then add some aspect of learning with reinforcement — then you’ll have a pretty decent system, which can interact with humans,” Sudhakar told Built In.
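To make those three ingredients concrete, here is a minimal, hypothetical Python sketch (not any real Aisera code) in which a small ontology supplies basic definitions, a taxonomy supplies context, and a reinforcement signal nudges the system toward on-topic answers:

```python
import random

# Ontology: basic definitions, "the teaching of what is."
ontology = {
    "weather": "the state of the atmosphere at a place and time",
    "temperature": "how hot or cold something is",
}

# Taxonomy: the context that surrounds each concept.
taxonomy = {
    "weather": ["temperature", "precipitation", "seasons"],
}

# Candidate answers with preference weights the system will learn.
answers = {"It may rain today.": 1.0, "Bananas are yellow.": 1.0}

def respond() -> str:
    """Pick an answer, weighted by what reinforcement has taught so far."""
    options = list(answers)
    weights = [answers[option] for option in options]
    return random.choices(options, weights=weights)[0]

def reinforce(answer: str, reward: float) -> None:
    """Reward or penalize an answer, like praising or deterring a behavior."""
    answers[answer] = max(0.1, answers[answer] + reward)

# Teaching loop: on-topic replies are rewarded, off-topic ones penalized.
for _ in range(20):
    answer = respond()
    reinforce(answer, 0.5 if "rain" in answer else -0.5)

print(ontology["weather"])   # what "weather" is
print(taxonomy["weather"])   # the context the system "knows" around it
print(respond())             # now almost always the on-topic answer
```

Even in toy form, the pattern follows Sudhakar’s recipe: static knowledge comes first, and reinforcement shapes behavior over time.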

For instance, Sudhakar’s company Aisera has reportedly created the world’s first AI-driven platform to automate employee and customer experiences. Using cognitive computing, the platform can intuitively resolve tasks, actions and workflows across a variety of departments — automating what Sudhakar refers to as “mundane tasks.” It is also making advances in detecting emotions and responding with empathy, which can be especially useful in areas like HR, customer service and sales. If the platform detects that a customer is confused based on their voice and language use, for example, it can give the customer service agent specific prompts to help clear up the confusion.
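As a rough illustration of that routing idea, a confusion detector might look like the sketch below. The marker list and function name are hypothetical; a production system would rely on trained language and voice models rather than keywords.

```python
from typing import Optional

# Hypothetical phrases that, in this sketch, signal a confused customer.
CONFUSION_MARKERS = ("i don't understand", "confused", "what do you mean")

def suggest_prompt(message: str) -> Optional[str]:
    """Return a suggested prompt for the agent if the customer sounds confused."""
    text = message.lower()
    if any(marker in text for marker in CONFUSION_MARKERS):
        return "Restate the last step in plain terms and confirm each part."
    return None

print(suggest_prompt("Sorry, I'm confused about the second charge on my bill."))
```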


 

Cognitive Computing vs. Artificial Intelligence

If all of this sounds a lot like artificial intelligence, you’re not wrong. Cognitive computing and AI are often used interchangeably, but they are not the same thing.

AI is an umbrella term for technologies that rely on large amounts of data to model and automate tasks that typically require human intelligence. Classic examples are chatbots, self-driving cars, and smart assistants like Siri and Alexa. While artificial intelligence uses algorithms to make decisions on its own, cognitive computing requires human input to simulate human cognition.

This means systems have to be adaptive and adjust what they are doing as new information arises and their environments change. They also have to be able to retain information about situations that have already occurred, ask clarifying questions and grasp the context in which information is being used. AI is one of the building blocks to make all this possible.
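A minimal sketch of those behaviors, with an invented class and scenario for illustration, might retain context across turns and ask a clarifying question rather than guess:

```python
class CognitiveAssistant:
    """Toy assistant that remembers context and asks clarifying questions."""

    def __init__(self) -> None:
        self.memory: dict[str, str] = {}  # facts retained across turns

    def handle(self, utterance: str) -> str:
        if "book a flight" in utterance:
            if "city" not in self.memory:
                return "Which city are you flying to?"  # ask, don't guess
            return f"Booking a flight to {self.memory['city']}."
        if utterance.startswith("to "):
            self.memory["city"] = utterance[3:].strip()  # retain new context
            return self.handle("book a flight")
        return "Could you tell me more about what you need?"

bot = CognitiveAssistant()
print(bot.handle("book a flight"))  # asks a clarifying question
print(bot.handle("to Paris"))       # "Booking a flight to Paris."
```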

“The question with cognitive is, can it have its own intelligence? That’s where the AI comes in. What intelligence can we add to the system?” Sudhakar said.

Indeed, cognitive computing employs much of what makes up AI, including neural networks, natural language processing, machine learning and deep learning. But instead of using those technologies to automate a process or surface hidden patterns in large amounts of data, cognitive computing is meant to simulate the human thought process and assist humans in finding solutions to complex problems.

In other words: Cognitive computing does not automate human capabilities, it augments them.

“It’s simply a more human-centric and human-compatible tool, and it can be a better companion to humans in helping them achieve their goals,” Gadi Singer, a VP and director of emergent AI research at Intel Labs, told Built In. The goal of cognitive computing, he added, “is not to become sentient and replace a human mind, but rather to interact with human-centric concepts and priorities more successfully.”


 

Cognitive Computing Applications

Some of the most recognizable examples of cognitive computing come in the form of single-purpose demos. In 2011, IBM’s Watson computer won a game of Jeopardy! while running software called DeepQA, which had been fed billions of pages of information from encyclopedias and open-source projects. And, in 2015, Microsoft unveiled a viral age-guessing tool called how-old.net, which estimated a subject’s age and gender from an uploaded image.

Impressive as these one-off demos are, they don’t capture just how inextricably cognitive computing has become woven into our daily lives. Today, the technology is predominantly used for tasks that require parsing large amounts of data, making it useful in analysis-intensive industries such as healthcare, finance and manufacturing.

 

Cognitive Computing in Healthcare

Cognitive computing’s ability to process immense amounts of data has proven quite useful in the healthcare industry, particularly when it comes to diagnostics. Doctors can use the technology not only to make more informed diagnoses, but also to create more individualized treatment plans for their patients. Cognitive systems can also read patient images like X-rays and MRI scans and find abnormalities that human experts can miss.

One example of this is Merative, a data company formed from IBM’s healthcare analytics assets. Merative has a variety of uses, including data analytics, clinical development and medical imaging. Cognitive computing has also been used at leading oncology centers like Memorial Sloan Kettering in New York City and MD Anderson in Houston to help make diagnosis and treatment decisions for their patients.

 

Cognitive Computing in Finance

In finance, cognitive computing is used to capture client data so that companies can make more personal recommendations. And, by combining market trends with this client behavior data, cognitive computing can help finance companies assess investment risk. Finally, cognitive computing can help companies combat fraud by learning the parameters of past transactions and using them to flag suspicious ones.
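As a simple, illustrative example of that fraud-detection idea (real systems combine many learned signals, not one statistic), a transaction far outside a customer’s historical spending can be flagged with a z-score:

```python
from statistics import mean, stdev

def is_suspicious(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction far outside the customer's usual spending."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [42.0, 38.5, 51.0, 47.2, 40.9, 44.3]
print(is_suspicious(past, 45.0))   # False: in line with history
print(is_suspicious(past, 900.0))  # True: extreme outlier
```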

Though it is most famous for its Jeopardy! appearance, IBM’s Watson is part of IBM Cloud, which 42 of the top 50 Fortune 500 banks were using as of 2021. Another example is Expert System, which turns language into data for applications across virtually every facet of finance, including insurance and banking.

 

Cognitive Computing in Manufacturing

Manufacturers use cognitive computing technologies to maintain and repair machinery and equipment, as well as to shorten production times and streamline parts management. Once goods have been produced, cognitive computing can also help with the logistics of distributing them around the world, thanks to warehouse automation and management. Cognitive systems can likewise help employees across the supply chain analyze structured and unstructured data to identify patterns and trends.

IBM has dubbed this corner of cognitive computing “cognitive manufacturing” and offers a suite of solutions with its Watson computer, providing performance management, quality improvement and supply chain optimization. Meanwhile, Baxter’s one-armed successor Sawyer is continuing to redefine how people and machines can collaborate on the factory floor.


 

Benefits of Cognitive Computing

More Efficient Human Labor

Sudhakar said technological innovation does not mean humans are no longer needed. “When we invented tractors, we didn’t replace farmers. You still need farmers,” he explained. But tractors can do hard manual labor in a fraction of the time humans need, giving farmers the time and latitude to be more efficient elsewhere.

 

Improved Problem-Solving

The very nature of cognitive computing makes it suited to problems that people struggle to solve on their own.

“[Humans] can come up with broad strategies. But the discrete solutions of how we operationalize, how we implement, on some of these solutions, tends to be beyond the ken of people. People are not very good at solving nonlinear, dynamically complex problems,” Kostman said. “[Cognitive computing systems] were built for that.”

 

Greater Innovation

Cognitive computing systems are good at processing vast amounts of data from a variety of sources (images, videos, text, and so on), making them adaptable to a variety of industries. The ability of these systems to “explain” their conclusions is another exciting feature, said Intel Labs’ Singer, and one that can be essential to further innovation in this space down the road.

“Today’s AI often makes mistakes because it doesn’t understand common-sense concepts,” Singer added. But the knowledge base and neural networks of cognitive computing systems will usher in the “next wave of AI to provide higher quality and dependability.”

 

Risks of Cognitive Computing

Malicious Intentions  

For all the good cognitive computing is doing for innovation, ProtectedBy.AI CEO Kostman thinks it’s only a matter of time before bad actors take advantage of this technology as well. “Technology is morally agnostic. A hammer is a wonderful thing if you’re building a house. If you’re beating someone’s head in with it, not so much,” he said.

The same goes for cognitive computing systems. The amount of data collected by these systems presents a golden opportunity for malicious actors to do some damage.

 

Unconscious Biases

As with artificial intelligence, the potential for error and even bias is real in cognitive computing. Though these systems are designed for machine precision, they are still the product of humans, which means they are not immune to making erroneous or even discriminatory decisions. Fortunately, these issues are top of mind for many people working in this space.

 

Superior Intelligence

Perhaps the most widespread concern has to do with what this technology means for the future of humanity and its place in society. Even though it is still in its “early innings,” as Aisera CEO Sudhakar put it, cognitive computing is already challenging our perception of human intelligence and capabilities. And the development of a system that can mimic or surpass our own abilities can be a scary thought.

 

The Future of Cognitive Computing

Singer refers to this next wave as “cognitive AI.” By combining the abilities of cognitive computing with the tremendous power that comes from neural networks, deep learning and the other technologies powering AI, computers will improve their ability to understand events, potential consequences and common sense, he explained, which could be a real game changer for humanity’s relationship to technology.

He envisions the generic AI we have today evolving into more of a “personalized companion that continuously learns” and adapts to the changing context of the situation.

With cognitive computing as a backbone, these systems could take on a vast range of tasks. They’d be able to predict threats with more accuracy, based on abnormalities in data. They could use their natural language intelligence and sophisticated data analysis capabilities to create completely personalized diagnoses and treatments for patients. And entire smart cities could be developed to organize resources based on people’s movements and consumption patterns.

The possibilities are seemingly endless. Now it’s a matter of taking what we have and making it work for us.

“We are starting to move from the age of innovation to the age of implementation,” Kostman said, likening where we are to where electricity was when Thomas Edison invented the lightbulb. “The light has been invented, it’s accessible, and now we have to go past that threshold from innovation to implementation and really make all this very useful.”

 

Frequently Asked Questions

What is an example of cognitive computing?

An example of cognitive computing is Baxter, a robot that could learn by having humans grab its arms and show it how to do tasks. The latest iteration of this design is Sawyer, a one-armed successor that performs repetitive and potentially dangerous tasks in the workplace. A more famous example of cognitive computing is IBM Watson, which won Jeopardy! in 2011.

How is cognitive computing different from artificial intelligence?

AI refers more broadly to technologies that can process massive volumes of data and complete tasks on their own. While cognitive computing incorporates many of the same AI technologies, it is designed specifically to mimic human thought processes and help humans solve complex problems. Where AI often aims to automate human labor, cognitive computing complements human labor and thinking.
