Cognitive AI is a computer program (machine) that can think and learn.

What are Cognitive AI and Cognitive Computing?

Cognitive Artificial Intelligence (AI) is any computer program that can think, learn, and generally mimic human cognition. Cognitive computing is the process of understanding and building cognitive computer systems, including AI.

Cognitive computing uses machine learning (algorithms that allow machines to learn from past experience by detecting patterns) rather than explicitly programmed algorithms (algorithms with a pre-defined, pre-programmed set of rules). The ultimate goal of cognitive computing is to build fully cognitive Artificial Intelligence.

The concept of “smart machines” goes back to early science fiction, to Alan Turing’s theoretical machines and early work on AI, and to the emergence of the term cognitive computing in the 1950s. Despite that rich history, current cognitive AI is best understood by simply looking at recent projects like Google’s DeepMind and IBM’s Watson: two similar but different ways modern tech is going about creating machines that can think like humans.

Below we look at the different ways computers learn and how that relates to human memory and cognition.

Cognitive computing | Jerome Pesenti | TEDxBermuda.

Getting the terms right: “Machine learning” is a field of study related to creating algorithms that allow computers to learn from examples. “Cognitive computing” is all computing that mimics the way the human brain works. Cognitive AI is machine learning algorithms and cognitive computing applied to artificial intelligence. In other words, machine learning describes a field of science and its related algorithms, cognitive computing describes the process of creating cognitive machines, and cognitive AI is a type of program that mimics human cognition. All of those (along with human cognition) are types of cognitive science.

How Do Computers Learn?

A computer can “learn” in two general ways:

Traditional coding: We program a specific algorithm into a machine and it uses “if/then” commands (“if this, then do that”). We show a machine every type of apple, and then when it sees a picture of an apple, it can tell you it’s an apple. This can create machines that “seem” to think (weak AI).

Machine learning: We program a general algorithm that gives the machine a foundation for recognizing patterns and programming itself. We program an algorithm that allows a machine to understand what an apple is by looking at examples. It doesn’t have to see every type of apple to know an apple is an apple. This can create machines that can actually think (strong AI).
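To make the contrast concrete, here is a minimal sketch of the two approaches to the apple problem (my own toy example, not taken from either project): a hand-written if/then rule versus a tiny learner that generalizes from labeled examples. The “learning” half is a nearest-centroid classifier, one of the simplest possible machine learning algorithms, and the feature numbers are made up for illustration.

```python
# Illustrative sketch: the same "is this an apple?" task solved two ways.
# Fruits are toy (redness, roundness) feature pairs, values from 0 to 1.

# Traditional coding: an explicit, pre-programmed if/then rule.
def is_apple_rules(redness, roundness):
    if redness > 0.5 and roundness > 0.7:   # thresholds chosen by the programmer
        return True
    return False

# Machine learning: learn from labeled examples instead of fixed rules.
def train_centroids(examples):
    """Average the features of each class (a nearest-centroid learner)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Pick the class whose average example is closest to the new one."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# The machine never sees *every* apple; it generalizes from a few examples.
training = [
    ([0.9, 0.9], "apple"), ([0.8, 0.8], "apple"),
    ([0.2, 0.3], "banana"), ([0.1, 0.4], "banana"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.7, 0.85]))   # a new, never-seen apple-like fruit
```

The rule-based version only knows what the programmer told it; the learned version picks whichever class its past experience most resembles, so it can label apples it has never seen.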

Science Documentary: Cognitive Science, a documentary on mind processes and artificial intelligence.

Cognitive AI seeks to make a machine that runs almost purely on machine learning algorithms. Different levels of this have been accomplished so far, as illustrated by Google’s DeepMind and IBM’s Watson.

DeepMind Versus Watson

DeepMind and Watson are different, but both are cognitive AI.

Google’s DeepMind takes the strict machine learning approach to cognitive AI. It can sit down at a game it has never played before, learn the game through experience, and master it. It recently beat human experts at the game of Go. DeepMind also became very good at games like Breakout, teaching itself to play and win using nothing more than pixel and score information.

DeepMind, the computer that mastered Go.

“The algorithms we build are capable of learning for themselves directly from raw experience or data, and are general in that they can perform well across a wide variety of tasks straight out of the box. Our world-class team consists of many renowned experts in their respective fields, including but not limited to deep neural networks, reinforcement learning and systems neuroscience-inspired models.” –Google’s DeepMind

Watch DeepMind get better at Breakout. Notice how it becomes a pro after enough plays: it starts finding exploits and stops “slipping up”.
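DeepMind’s actual Breakout agent was a deep Q-network (DQN), which is far beyond a blog snippet, but the core idea of reinforcement learning, improving purely from state and score, can be sketched with tabular Q-learning on a made-up toy “game” (my illustrative example, not DeepMind code; the game, constants, and reward are all assumptions):

```python
# A minimal Q-learning sketch: the agent must walk right along a 1-D board
# to score. It is told nothing but the current state and the score signal,
# yet after enough plays it learns the winning strategy, much like DeepMind
# "becoming a pro" at Breakout (which swaps this table for a neural network).
import random

random.seed(0)
N = 5                      # states 0..4; reaching state 4 scores a point
actions = [-1, +1]         # move left or move right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N - 1:
        # explore sometimes; otherwise act greedily on what was learned
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0   # score information, nothing else
        # Q-learning update: learn from raw (state, action, reward) experience
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned strategy: which way to move in each non-terminal state.
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

Early episodes are long, clumsy random walks; later ones head straight for the goal, which mirrors the “slipping up, then pro” arc visible in the Breakout footage.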

IBM’s Watson uses more pre-programmed information to accomplish a similar thing. Watson has been in the works for far longer; it tried, failed, and finally won at Jeopardy! against Jeopardy! pros. Watson analyzes structured and unstructured data (like the IoT) and can make sense of it as it learns, but Watson requires more data input than DeepMind.

IBM Watson: How it Works.

“IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data.” – IBM’s Watson
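Watson’s real pipeline is proprietary and vastly more sophisticated, but the basic move of pulling structured signals out of unstructured text can be sketched in a few lines (a crude, hypothetical illustration using word counts and a naive capitalization rule as a stand-in for real natural language processing):

```python
# Illustrative sketch only: turn free-form text into structured, queryable
# counts. Real NLP systems like Watson use parsing, entity linking, and
# machine-learned models; this toy version just counts tokens.
import re
from collections import Counter

def extract_signals(documents):
    """Pull crude structured signals out of free text: lowercase term
    frequencies, plus capitalized tokens as a naive 'entity' guess."""
    terms, entities = Counter(), Counter()
    for doc in documents:
        tokens = re.findall(r"[A-Za-z']+", doc)
        terms.update(t.lower() for t in tokens)
        # naive "entity" heuristic: any capitalized token (crude on purpose)
        entities.update(t for t in tokens if t[0].isupper())
    return terms, entities

docs = [
    "Watson analyzes unstructured data.",
    "IBM built Watson to answer Jeopardy questions about Watson's data.",
]
terms, entities = extract_signals(docs)
print(terms.most_common(2))     # most frequent terms across all documents
print(entities.most_common(1))  # most frequently mentioned "entity"
```

Even this toy version turns a pile of sentences into numbers you can sort and query, which is the essence of “revealing insights from unstructured data”, just several orders of magnitude simpler.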

IBM’s Watson Supercomputer beats Humans in Jeopardy.

Both cognitive AIs can learn and think, but perhaps due to its newness, DeepMind is a little closer to a true cognitive AI. That being said, they have different applications. Watson is useful right now in the business world, whereas DeepMind is still just sitting at home playing Atari games in its parents’ basement (joke).

FACT: Breakout was conceptualized by Nolan Bushnell and Steve Bristow (Atari), and built by Steve Wozniak aided by Steve Jobs.

FACT: The term “Atari” is from the game Go. It roughly means that the opposing player is surrounded and almost out of options.


What is Human Learning?

To understand cognitive computing we have to not only understand how machines learn, we have to understand how humans learn.

Human learning, AKA cognition, is a complex process involving language, perception, memory, attention, reasoning, and emotion. All of these are key for computers and for humans. I’ll do a quick recap of each; we cover each extensively on our site, so click the links for more info:

Language: Be it computer language or human language, it’s the system we use to communicate ideas. A machine can think, but if it can’t act or speak and it can’t understand us, its uses are limited.

Perception: The ability to perceive. Humans glean data from our senses, so a machine that wants to learn needs to mimic the senses. DeepMind can “see” the game screen; Watson can “hear” the Jeopardy! questions.

Attention: The ability to focus. Humans have a very limited attention span; machines are only limited by their storage and computing power.

Memory: A complex process of our brains: how we encode, store, and retrieve information. “Memory” essentially encompasses everything a machine needs to mimic to be cognitive AI and everything we seek to understand in cognitive science. Big, big subject.

Reasoning: Our ability to use logic. Computer logic is typically pre-programmed. Machine learning seeks to teach a computer how to use logic to recognize patterns and learn.

Emotion: “No, Skynet, don’t blow up the world; that would make me sad.” Emotion is a helpful but elusive tool that humans have and machines lack. Can we teach emotion to machines?

FACT: Cognitive AI is a subsection of cognitive science that seeks to understand human thinking.


Cognitive AI describes a machine that learns like a human. The ultimate goal is to mimic, and then improve on, human cognition in order to build smart machines and better understand ourselves. The key thing that makes it cognitive is the ability to learn from past experience.



