As the father of computer science and one of the greatest mathematicians, cryptographers, logicians, and code-breakers of the 20th century, Alan Turing was nothing short of a technological superhero. Sure, he’s been credited with conceiving one of the first universal computers and helping the British intelligence service break the German “Enigma” code during WWII, as retold in the 2014 movie The Imitation Game. But did you know Turing was also responsible for creating the most famous experiment in artificial intelligence (AI) to date? In fact, many admirers of the iconic scientist would say his greatest breakthroughs were not mechanical but theoretical: he essentially proved that certain technological advancements could happen, even if the engineering needed to build them had yet to arrive.

Why was Alan Turing special?

Born outside London in 1912, Turing grew up as a curious and ambitious young man who showed early signs of considerable genius. But like many gifted thinkers before him, Turing struggled with acceptance, both from his peers and from educators who often dismissed his devotion to math and science as a “waste of time.” Despite these challenges, he continued to show tremendous ability in these areas, working through advanced calculus and Einstein’s theories at just 16 years old. And as he matured, his ideas about technology became more nuanced and complex, focusing primarily on how a machine could be built to compute anything a human could, using binary code, much like modern-day software.

In 1950, Turing decided to do more than just build machines that could do things; he wanted to test whether a machine’s behavior could be intelligent enough to pass for human, a process he aptly dubbed “the imitation game.” And so, like any good scientist, Turing designed an experiment to test his hypothesis: could machines simulate thought and conversation well enough to be mistaken for a person? It’s a question that continues to delight and fascinate the computing world today.

What did the Turing Test prove?

The test itself was simple in design. It involved two humans and one computer, and here’s how it went: a flesh-and-blood person sits in front of a computer screen and chats with two unidentified interlocutors, one human and one with circuits for brains. The human judge must then assess the responses of both entities and decide which one is the actual person and which one is just a really, really smart machine. So in essence, the computer being questioned is A, the person being questioned is B, and the human judge is C. Because C cannot physically see either responder, he must assess the overall “human-ness” of both A and B based solely on their typed responses in the chat interface.
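To make the setup concrete, here is a minimal sketch of those three roles in Python. The responder functions, sample questions, and the random-guessing judge are all illustrative assumptions; nothing here comes from Turing’s paper beyond the A/B/C labels above.

```python
import random

# Toy sketch of the imitation game's three roles (A: machine, B: human, C: judge).
# The canned replies and questions below are made up purely for illustration.

def machine_reply(question: str) -> str:
    """Role A: the machine under test. Here, just a canned responder."""
    canned = {
        "What did you have for breakfast?": "Just coffee. I was running late.",
        "Write me a short poem about rain.": "Grey skies, wet shoes, a bus that never comes.",
    }
    return canned.get(question, "Good question. Let me think about that one.")

def human_reply(question: str) -> str:
    """Role B: the human foil, who is trying to help the judge."""
    if "breakfast" in question:
        return "Honestly, toast and eggs, same as most days."
    return "Rain falls soft on quiet streets, and I forgot my umbrella again."

def imitation_game(questions, judge):
    """Role C (the judge) reads both transcripts over text and must name the machine."""
    # Randomly hide the machine behind label "X" or "Y" so ordering gives nothing away.
    assignment = {"X": machine_reply, "Y": human_reply}
    if random.random() < 0.5:
        assignment = {"X": human_reply, "Y": machine_reply}

    transcripts = {label: [fn(q) for q in questions] for label, fn in assignment.items()}
    guess = judge(transcripts)                 # judge returns "X" or "Y"
    return assignment[guess] is machine_reply  # True if the judge spotted the machine

questions = ["What did you have for breakfast?", "Write me a short poem about rain."]
# A naive judge that guesses at random is right about half the time.
print(imitation_game(questions, judge=lambda t: random.choice(list(t))))
```

A judge who guesses at random names the machine correctly about half the time, and that coin-flip baseline is exactly what Turing’s hypothesis plays against.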

Turing’s hypothesis was also pretty simple: if the human judge made the correct identification less than 50% of the time, meaning the computer was able to successfully imitate a real person more than half the time, logic would suggest machines can be considered “intelligent” because they are able to effectively pass for human. The computer was smart enough to know what to say and how to say it. So while the human in the test was trying to help the questioner, the computer was aiming to fool him.
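Scored over many rounds, that criterion reduces to a single comparison. The sketch below uses the 50% reading described above, which is this article’s framing rather than a figure Turing himself fixed, and the sample data is invented for illustration.

```python
# Minimal scoring sketch for repeated rounds of the imitation game.
# judge_was_correct[i] is True if the judge named the machine correctly in round i.

def machine_passes(judge_was_correct: list) -> bool:
    """Return True if the judge identified the machine in fewer than half the rounds."""
    accuracy = sum(judge_was_correct) / len(judge_was_correct)
    return accuracy < 0.5  # the machine fooled the judge more often than not

# Example: the judge was right in only 3 of 10 rounds, so the machine "passes".
rounds = [True, False, False, True, False, False, False, True, False, False]
print(machine_passes(rounds))  # True
```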

If you are thinking this process sounds more like parroting than real, authentic intelligence, you’ve got a point. But it’s not quite that simple. During his life, Turing sidestepped this objection, avoiding any attempt to define the essence of intelligence and arguing instead that behavior is what counts: if a digital computer or robot acts, reacts, and interacts like a human, it should be regarded as a thinking entity, capable of forming decisions and completing tasks typically reserved for people. In other words, any system able to engage in intellectual processes such as reasoning, assuming, generalizing, defining, or learning from past experience is essentially exhibiting the type of cerebral power once found only in real brain tissue. After all, isn’t this how we assess the intelligence of other humans, through similar external observations? Any machine capable of such activity could surely, in time, be programmed to rival human intelligence, couldn’t it?

Has Turing stood the test of time?

Turing’s paper on the subject is still frequently cited by people in the industry, many of whom use it to fuel academic discussions for and against the use of AI. Although no computer ever really passed the test during Turing’s lifetime, the legacy of his thinking has had a profound impact on what we know about computers and their ability to make decisions. In his reflections on the imitation game, Turing predicted, “in about 50 years’ time (by the year 2000) it will be possible to program computers… to make them play the imitation game so well that an average interrogator will have no more than 70% chance of making the correct identification after five minutes of questioning.” In other words, he expected a machine to fool an average judge at least 30% of the time after a short conversation.

So, was Turing right? Basically, yes. Even though his timeline was slightly off, many pioneers of machine learning now suggest human-level AI will be achieved around the year 2030 or sooner. And at that point, machines will not only match human abilities, they will surpass them. This type of forward thinking took root in the 1960s and ’70s and has laid the intellectual groundwork for others to expand on futurist possibilities, ultimately leading to an intellectual movement known as transhumanism. It’s worth looking up.

And while some people may find the notion of super-smart computers unsettling, the truth is that this kind of technological advancement will do wonders for our own intelligence. Rather than becoming underlings to all-powerful machines, human beings will likely continue to advance in unison with computers, eventually creating a virtual world that is seamlessly embedded in the real one.

Related Resources:

Protecting The Best Machine Of All: The Human Brain

Will Machine Learning Win The Battle Of The Bots?
