Researched by Thomas DeMichele | Published: June 27, 2016 | Last Updated: December 28, 2016
Can Machines Think?
Whether or not machines can think depends on our definition of “think.” Generally, we can say machines can think, but they think differently than humans do. Thus the question becomes, “does thinking differently still count as thinking?”
Is Watson thinking when it engages a doctor on a detailed healthcare question? Is DeepMind thinking when it considers a move in a game of Go? Can we imagine a machine that could pass as a human in conversation (thus winning Alan Turing’s Imitation Game)?
This line of reasoning can be applied to any computer, whether theoretical, analog, or modern digital: everything from a rudimentary Babbage machine, to a theoretical Turing machine, to modern-day cognitive AI like IBM’s Watson and Google’s DeepMind.
If we accept the logic running from Alan Turing’s theoretical work through to modern AI, then we can make a strong case for the idea that “machines can think” (theoretical machines in theory, and modern AI in practice in some ways; older machines qualify only under very broad definitions of “think”).
If we are more skeptical, and define thinking as the chemical, electrical, and organic process that humans experience directly, then we can argue that machines cannot think. Likewise, if we consider the way in which information is processed and stored, we may consider the differences too vast to count what a machine does as thinking.
TIP: To restate our findings: whether or not machines can think depends on our definition of “think.” We may not be able to answer the open-ended “can machines think?” question for you… but the data below can help show you how to think about it.
Computers That Think Like Humans. A video describing the differences between human thought and computer thinking, including the differences in parallel processing ability (something we do much better than machines).
In other words, thinking is (from input to output): receiving sensory data (input), cognition (processing information and looking for patterns and answers), learning (storing bulk or organized information and refining the organization of that information over time), retrieving (getting the right bits of data ready for output), and finally responding (outputting the data).
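The five stages above can be sketched as a toy program. This is a minimal illustration under loose assumptions, not a real AI; the class and method names (`TinyThinker`, `sense`, `cognize`, and so on) are invented here to mirror the input-to-output pipeline.

```python
# A minimal sketch of the five-stage "thinking" pipeline described above.
# All names are illustrative, not from any real AI library.

class TinyThinker:
    def __init__(self):
        self.memory = {}  # topic -> list of facts (the "learning" store)

    def sense(self, raw_text):
        """Input: receive sensory data as plain text."""
        return raw_text.strip().lower()

    def cognize(self, data):
        """Cognition: look for a simple pattern -- split 'topic: fact'."""
        topic, _, fact = data.partition(":")
        return topic.strip(), fact.strip()

    def learn(self, topic, fact):
        """Learning: store the fact, organized under its topic."""
        self.memory.setdefault(topic, []).append(fact)

    def retrieve(self, topic):
        """Retrieval: get the right bits of data ready for output."""
        return self.memory.get(topic, [])

    def respond(self, topic):
        """Output: recite what was learned, in context."""
        facts = self.retrieve(topic)
        return f"{topic}: " + "; ".join(facts) if facts else f"{topic}: unknown"

t = TinyThinker()
topic, fact = t.cognize(t.sense("Go: a board game mastered by DeepMind"))
t.learn(topic, fact)
print(t.respond("go"))  # go: a board game mastered by deepmind
```

The point is not that this toy “thinks,” but that each stage in the article’s definition can be named and stepped through, which is exactly where the skeptic’s objection (symbol manipulation versus cognition) gets its footing.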
For example, as a human: I read a line of text, I contemplate it, I store it by connecting it with other like information in my neural network, later I retrieve it, and finally I recite it back in context with other information. I have thunk, but I haven’t explicitly done anything a machine can’t do (although my process was very different).
We know modern AI can receive sensory data (non-electronic inputs that can be converted to electronic form, like audio, as well as purely electronic inputs, like an uploaded database), it can process that data (organize it and connect it with other data it has), and it can learn (store and organize the data). It can also retrieve and output the data in response to an inquiry.
TIP: Google’s DeepMind can master games by learning from its mistakes. This is how it mastered the Atari game Breakout.
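“Learning from mistakes” can be illustrated with a toy trial-and-error learner. To be clear, this is a simple two-option bandit sketch, not DeepMind’s actual deep reinforcement learning; the payoff numbers and variable names are invented. The core loop is the same idea, though: try a move, observe the reward, and come to prefer the moves that worked.

```python
import random

# Toy "learning from mistakes": an epsilon-greedy two-armed bandit.
# Hidden payoffs are assumed for illustration; move "b" is usually better.
random.seed(0)
payoffs = {"a": 0.2, "b": 0.8}   # hidden from the learner
value = {"a": 0.0, "b": 0.0}     # learner's running estimate per move
counts = {"a": 0, "b": 0}

for trial in range(2000):
    # Explore 10% of the time; otherwise exploit the best-known move.
    if random.random() < 0.1:
        move = random.choice(["a", "b"])
    else:
        move = max(value, key=value.get)
    # A "mistake" scores 0; a success scores 1.
    reward = 1.0 if random.random() < payoffs[move] else 0.0
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]  # incremental average

best = max(value, key=value.get)
print(best)  # the learner settles on the higher-payoff move "b"
```

The design choice worth noting is the exploration step: without occasionally trying the “wrong” move, the learner could lock in on an early mediocre choice and never discover the better one, which is why mistake-making is part of the learning, not a flaw in it.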
What Does it Mean for a Computer to Think?
We can equate the above to machine learning: a computer’s ability to learn by example, including by sensing its environment, and the ability of modern machines to consider multiple complex variables and then produce a satisfactory result.
Since a machine can mimic human cognition and learning, we may decide that it can think.
The above would be the basics of Alan Turing’s view on what it means for a machine to think. The skeptic’s view picks apart both the logic and the broad definitions of consciousness and thinking found in this argument.
Turing’s View: If we ask, “can digital computers mimic human cognition and learn?” as Turing did, then we can look at IBM’s Watson or Google’s DeepMind: both can “think,” “learn,” and mimic human cognition by any loose definition. Consider just the field of machine learning paired with Watson on Jeopardy! and DeepMind on AlphaGo, and we have “proof” of cognition and learning. If we want to put them to “the test” to prove this, then we can use “the Turing Test,” better known as the “Imitation Game,” as discussed in his famous 1950 paper. In that paper he says, essentially, that we should consider that machines can “think” if a human interrogator cannot tell a machine apart, through conversation, from a human being.
Skeptic’s View: If we look at things more strictly, we can pick apart Turing, Watson, and DeepMind on technicalities: by pointing to epistemological arguments surrounding the term “think,” by pointing out the differences between human and computer cognition, or by demonstrating that a non-thinking computer can manipulate symbols to mimic cognition without actually performing a cognitive function (we can say that something like a Babbage machine, being purely mechanical, is always “being a calculator” and never “thinking”). These are only some simple versions of the skeptic’s viewpoint.
TIP: There is also the possibility that we live in a computer simulation, which raises an extra question: “are we technically a type of machine?”… and if so, can we build a machine that is human in every way except that we built it?… and if so, can it think?
Can Digital Computers Do Well in the Imitation Game?
As noted above, Alan Mathison Turing (A. M. Turing) lays out his exploration of the question “can machines think?” in his paper COMPUTING MACHINERY AND INTELLIGENCE. In the paper, you’ll find Turing’s famous Imitation Game and, despite the intricate concepts, some rather plain language describing his theories.
As far as answers to the question “can machines think?” go, one can’t do much better than Turing. To be fair, Turing knew even then that the term “thinking” was difficult to define, and instead simply asked, “are there imaginable digital computers which would do well in the imitation game?”
In other words, the Turing test only seeks to show that a machine can theoretically pass as a human in a game of Q&A.
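The structure of the game can be mocked up in a few lines. This is purely an illustration of the protocol, with invented names and canned scripts; a real test uses free-form conversation, not a fixed question.

```python
import random

# A minimal mock of Turing's Imitation Game. A hidden respondent --
# sometimes a scripted "human", sometimes a scripted "machine" -- answers
# the judge's question; the judge must guess which kind it was.
random.seed(1)

SCRIPT = {"favorite food?": "fresh bread, I suppose",
          "write me a sonnet": "count me out on this one"}

def human_answer(question):
    return SCRIPT.get(question, "hmm, let me think")

def bot_answer(question):
    # Best case for the bot: it mimics the human script perfectly.
    return SCRIPT.get(question, "hmm, let me think")

def play_round(judge):
    """One round: the judge interrogates a hidden respondent, then guesses."""
    respondent, label = random.choice([(human_answer, "human"),
                                       (bot_answer, "machine")])
    guess = judge("favorite food?", respondent("favorite food?"))
    return guess == label  # True if the judge identified the respondent

# With identical transcripts, even a careful judge is reduced to guessing.
coin_flip_judge = lambda question, answer: random.choice(["human", "machine"])
wins = sum(play_round(coin_flip_judge) for _ in range(10000))
print(wins / 10000)  # hovers around 0.5: the machine "wins" the game
```

The point of the sketch is Turing’s criterion itself: when the machine’s answers are indistinguishable from the human’s, the judge’s success rate falls to chance, and by the rules of the game the machine passes.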
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. – Alan Turing, father of computer science and AI, explaining that asking whether machines can think is an unnecessarily semantic question in his 1950 paper theorizing AI and the potential of digital computers (consider that the first computers were built in the late 1940s, including the Manchester Mark 1, which Turing helped develop; he also inspired the very first computers with his 1936–1937 theoretical Universal Turing Machine).
Can a computer beat a human at the Imitation Game?: On 7 June 2014, a Turing test competition was held at the Royal Society in London and was won by the Russian chatbot Eugene Goostman. The bot, during a series of five-minute-long text conversations, convinced 33% of the contest’s judges that it was human. The competition’s organizers believed that the Turing test had been “passed for the first time” at the event, saying that “The event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations.” NOTE: The bot that fooled the judges was presented as a 13-year-old child, so some criticize the idea that a machine has passed the Turing test.
Can We Ever Prove for Certain that Machines Can Think?
It is likely that we won’t be able to prove that machines can think in a way that wins over the skeptics for some time (if ever, given the semantics of the argument). Likewise, it is doubtful we will have a definitive answer to questions about the meaning of consciousness anytime soon. The reason is that these are largely semantic questions of philosophy rather than questions of pure logic or pure natural science.
Perhaps an advanced machine can help us to answer these questions, or perhaps they will hit the same sticking points we do.
It could be that machines and humans are fundamentally different, one made of chemicals and electricity, the other made of on-off switches (quantum or binary). Or, it could be that consciousness is simply a manifestation that arises equally in all energetic forms, regardless of whether they are made of synthetic compounds, stardust, or pure electromagnetic energy.
Advances in AI and machine learning, and new technology like IBM’s Watson, are constantly calling into question old ideas about the limits of a machine’s ability to mimic human cognition.
Is cognitive AI thinking when it processes data? Or do we define thinking as a process that can never be broken down into 1’s and 0’s governed by a deterministic computerized algorithm? These, and others, are the questions which those like Charles Babbage, Ada Lovelace, and A. M. Turing started asking long before the modern day.
FACT: Alan Turing created one of the first computer games, Turochamp. He began work on it in 1948 and completed it in 1952. It was a version of chess; notably, another very early computer game was a version of tic-tac-toe. I like to say, “Turing was working on chess while his peers were working on tic-tac-toe.”
Putting aside skepticism and philosophical arguments, machines can mimic human cognition in practice rather well (see Watson), and they can think in theory (as per the rules of Turing’s imitation game).
Thus, it is reasonable to say “machines can think,” as long as we are honest about the limits of our understanding and current technology.
With that said, it is just as reasonable to say “machines cannot think” and to point to all the differences between human and computer cognition.
Ultimately this question, like “two things can never touch,” is subject to opinion despite the facts. Some day it might have a more solid answer, but that day is not today.
Author: Thomas DeMichele
Thomas DeMichele is the content creator behind ObamaCareFacts.com, FactMyth.com, CryptocurrencyFacts.com, and other DogMediaSolutions.com and Massive Dog properties. He also contributes to MakerDAO and other cryptocurrency-based projects. Tom's focus in all...