
When Will Computers Out Think You?

By Carl Weiss 


Courtesy of pxhere.com
As I sit here thinking about artificial intelligence, I am reminded of all the amazing motion pictures that predict the evolution of computers into thinking, self-replicating machines. Some of my favorites are "The Terminator," "I, Robot," "WarGames," and "Battlestar Galactica." In all of these movies, the computers become more powerful than the people who created them. Some have happy endings; others, not so much.

Computers Have Changed the World

Face it, since the 1980s the personal computer has changed the world as we know it. Before Apple and IBM started offering computers to the masses, the world was a much different place. We weren't as connected. Life ran at a slower pace. The world seemed bigger. However, the advent of the microchip and everything that goes along with it has forever changed the ways in which we communicate, educate, shop, do business and entertain ourselves. In fact, just about the only thing that hasn't changed in the past 25 years is that we still control the machines that reside at the heart of every PC, tablet, smartphone, automobile, airplane and power plant. Without us, they would just be die-cast doorstops about as smart as an anvil.

The first developers of IBM PC computers neglected audio capabilities (first IBM model, 1981). (Photo credit: Wikipedia)

While science fiction novels and movies galore speak of the wonder and the horror of thinking machines, the fact is that there still aren’t any machines on the planet that can self-program or learn from their mistakes.  That’s the good news.  The bad news is that the day is coming sooner than you think when computers will be able to think for themselves. 

Computer Games Have Been the Springboard to AI

Computers are really good at games. The reason is that games have rules. Programming the rules into a computer is fairly straightforward. Once programmed, a game-playing computer has a distinct advantage over a human, because computers can perform hundreds of millions of calculations per second. This was first brought to light in a big way when, in 1997, the IBM computer "Deep Blue" beat world chess champion Garry Kasparov.
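To see why rules plus raw speed is such a potent combination, here is a minimal sketch of game-tree search, the same basic principle Deep Blue exploited, applied to the toy game of Nim (players alternate taking 1 to 3 stones; whoever takes the last stone wins). The game, function names and parameters here are my own illustration, not Deep Blue's actual engine, which ran a far deeper search on dedicated chess hardware:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def wins(stones, max_take=3):
        # True if the player to move can force a win. With zero stones
        # left, the previous player took the last stone, so the player
        # to move has already lost (any() over no moves is False).
        return any(not wins(stones - t, max_take)
                   for t in range(1, min(max_take, stones) + 1))

    def best_move(stones, max_take=3):
        # Play any move that leaves the opponent in a losing position.
        for t in range(1, min(max_take, stones) + 1):
            if not wins(stones - t, max_take):
                return t
        return None  # every move loses against perfect play

    print(best_move(21))  # prints 1, leaving a multiple of 4 stones

The program never "understands" Nim; it simply checks every possible continuation faster than any human could, which is exactly the advantage Deep Blue pressed home at the chessboard.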

Deep Blue (Photo credit: James the photographer)
"Deep Blue, with its capability of evaluating 200 million positions per second, was the fastest computer that ever faced a world chess champion. Today, in computer chess research and matches of world class players against computers, the focus of play has often shifted to software chess programs, rather than using dedicated chess hardware. Modern chess programs such as Rybka, Deep Fritz or Deep Junior are more efficient than the programs during Deep Blue's era. In a recent match, Deep Fritz vs. world chess champion Vladimir Kramnik in November 2006, the program ran on a personal computer containing two Intel Core 2 Duo CPUs." http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

While an impressive feat, "Deep Blue" and its successors are extremely limited in what they can accomplish and how they can interact with humans. All they do is play chess. Not only are they unable to hold a conversation about the nuances of the game, they don't understand what the word nuance means. However, all that began to change around 2010 with the creation of the computer known as "Watson."

IBM's Watson computer, Yorktown Heights, NY (Photo credit: Wikipedia)
Designed and built by IBM, "Watson" was created to answer questions on the TV game show Jeopardy! Unlike Deep Blue, Watson could not only understand the sometimes arcane clues posed on the show, but could also deliver its answers verbally. Relying on an extensive database of some 200 million pages of content held in about four terabytes of disk storage, Watson competed on the air in February 2011 against two of the most successful human Jeopardy! competitors of all time, Ken Jennings and Brad Rutter, beating them both to claim the one-million-dollar first prize.
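Watson's actual DeepQA pipeline layered hundreds of language-analysis and evidence-scoring components, but the kernel of the idea, retrieving candidate passages from a large corpus and scoring them against the question, fits in a few lines. The tiny corpus, scoring rule and function names below are purely illustrative and bear no relation to IBM's code:

    import re

    def tokenize(text):
        # Lowercase the text and split it into simple word tokens.
        return re.findall(r"[a-z']+", text.lower())

    def best_answer(question, corpus):
        # Score each passage by how many question words it shares --
        # a toy stand-in for Watson's evidence-scoring stage.
        q = set(tokenize(question))
        return max(corpus, key=lambda p: len(q & set(tokenize(p))))

    corpus = [
        "Deep Blue was the chess computer that beat Garry Kasparov in 1997.",
        "Watson won Jeopardy! against Ken Jennings and Brad Rutter in 2011.",
    ]
    print(best_answer("Which computer won at Jeopardy?", corpus))

Scale that up from two sentences to 200 million pages, with far subtler scoring, and you have the flavor of what Watson was doing under the studio lights.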

“Since Deep Blue's victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus: Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy! Nearly the entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge of playing Jeopardy! with an IBM system.  Eventually David Ferrucci took him up on the offer.
In initial tests run during 2006, Watson was given 500 clues from past Jeopardy programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems. By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions. By February 2010, Watson could beat human Jeopardy! contestants on a regular basis." http://en.wikipedia.org/wiki/Watson_(computer)

Doctor Watson Has a PhD?


More incredible still, after retiring from television, "Watson" was repurposed in 2013 to help make management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center. IBM Watson's business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance. IBM is also looking at the possibility of using "Watson" for legal research.

Watson, Ken Jennings, and Brad Rutter in their Jeopardy! exhibition match. (Photo credit: Wikipedia)

While the software upon which Watson is based is available to large corporations and research centers (such as Rensselaer Polytechnic Institute), a system that meets the minimum requirements currently costs more than one million dollars. However, as computer chips become faster and less costly, it won't be long before this kind of technology makes it to the masses. When you realize that the computing power available in today's smartphones is superior to that used to fly the Space Shuttle, this claim is hardly beyond the realm of possibility.

According to IBM, "The goal is to have computers start to interact in natural human terms across a range of applications and processes, understanding the questions that humans ask and providing answers that humans can understand and justify."

Moore’s Law Still Means More!

Moore's Law states that computing power doubles approximately every two years. This little gem was coined by Intel co-founder Gordon Moore back in 1965, when he published a paper noting that the number of components in integrated circuits had doubled every year since their invention in 1958. While this trend has slowed slightly over the intervening forty-eight years, this exponential growth has directly influenced every aspect of the electronics industry and brought us closer to the point where computers will be able to think for themselves.
Moore's Law, The Fifth Paradigm. (Photo credit: Wikipedia)
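The arithmetic behind the law is simple exponential growth, easy to check in a few lines of Python. The starting point below, Intel's 4004 from 1971 with roughly 2,300 transistors, is just a convenient benchmark, and the result is a back-of-the-envelope projection rather than a real chip count:

    def moores_law(start_count, start_year, end_year, doubling_years=2.0):
        # Each doubling period multiplies the transistor count by two.
        doublings = (end_year - start_year) / doubling_years
        return start_count * 2 ** doublings

    # Project the 4004's 2,300 transistors forward to 2013:
    print(f"{moores_law(2300, 1971, 2013):,.0f}")  # about 4.8 billion

That projection lands in the same ballpark as the multi-billion-transistor processors actually shipping today, which is why the law has held up so remarkably well.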
"Peter Van Der Made, former IBM chief scientist, has spent over a decade studying the human brain and understanding how to replicate it in computer form. His new book, Higher Intelligence, tells the story of a 10-year breakthrough R&D project to build an 'artificial brain chip' that will help computers learn like the human brain."

"By producing computer chips that allow computers to learn for themselves, we have unlocked the next generation of computers and artificial intelligence," Mr Van Der Made says.  “We are on the brink of a revolution now where the computers of tomorrow will be built to do more than we ever imagined.  Current computers are great tools for number crunching, statistical analysis, or surfing the Internet. But their usefulness is limited when it comes to being able to think for themselves and develop new skills," he says.
“The synthetic brain chip of tomorrow can evolve through learning, rather than being programmed.”

Peter goes on to say in his book that he and his colleagues have already been able to simulate many of the functions of the human brain and convert them into hardware that learns without the intervention of a programmer. If he is correct, the next few years could see a paradigm shift more earth-shattering than the advent of the silicon chip. Already we are seeing autonomous aerial vehicles flying the friendly skies and driverless cars plying the highways of Southern California. With a few more iterations of Moore's Law and a bit of tinkering, will we shortly be on the verge of intelligent systems, everyday robotics and machines that can outthink their makers?
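What "learning rather than being programmed" means is easiest to see in miniature. Here is a single perceptron, a decades-old ancestor of today's brain-inspired hardware, picking up the logical AND function purely from examples. This is a generic textbook illustration, not Van Der Made's design:

    def train_perceptron(samples, epochs=20, lr=0.1):
        # Learn two input weights plus a bias from (inputs, target) pairs.
        w = [0.0, 0.0, 0.0]
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0]*x1 + w[1]*x2 + w[2] > 0 else 0
                err = target - out  # nudge weights toward the right answer
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                w[2] += lr * err
        return w

    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = train_perceptron(and_samples)
    for (x1, x2), target in and_samples:
        print(x1, x2, "->", 1 if w[0]*x1 + w[1]*x2 + w[2] > 0 else 0)

Nobody tells the program the rule for AND; it adjusts its own weights from feedback until its answers match the examples. A chip that learns applies the same principle on an enormously larger scale.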

If this isn’t quite the case yet, all I can say is, “Game on!”

If you like this article, you can find more by typing "robots" or "artificial intelligence" in the search box at the top left of this blog. If you found this article useful, share it with your friends, family and co-workers. If you have a comment related to this article, leave it in the comments section below. If you would like a free copy of our book, "Internet Marketing Tips for the 21st Century," fill out the form below.

Thanks for sharing your time with me.





Since 1995, Carl Weiss has been helping clients succeed online.  He owns and operates several online marketing businesses, including Working the Web to Win and Jacksonville Video Production. He also co-hosts the weekly radio show, "Working the Web to Win," every Tuesday at 4 p.m. Eastern on BlogTalkRadio.com.



1 comment:

  1. Computers outsmarted me back in the days of Lotus 123 on a DOS platform. Since then it's been paddling as hard as I can to keep my head above water!
