Can Google Cure Death, Disease and Aging?

Grim reaper crossed out with red X (Photo credit: Wikipedia)

Everyone knows that there are only two things in life of which we can all be assured: death and taxes.  Now, one of the billionaires behind the world’s most popular search engine wants to take a crack at eliminating the first of these woes.  That’s right: Larry Page of Google has made it his stated goal to cure death.  With a war chest in the billions, I guess if anyone can take a legitimate crack at the Grim Reaper, he can.

The Search for Immortality Dates Way Back

The search for an alternative to death is nothing new.  Before he died in 210 BC, the first sovereign emperor of China, Zhao Zheng, spent much of his time and considerable wealth searching for the elixir of life.  So did the legendary Spanish explorer Ponce de Leon, who scoured Florida in 1513 for the mythical Fountain of Youth.  While these and other historical notables had an almost obsessive desire to cheat death, none of them had access to modern medicine, DNA research, or medical miracles such as transplantation, bionics or bioprinting that are reshaping the very notion of what it is to be human.  Page, on the other hand, does have these and other technologies at his disposal, and he is incorporating them into a new medical technology company called Calico.

Enter Calico


Short for “California Life Company,” Calico sounds more like a shelter for cats than a research company.  However, when you think about it, cats are supposed to have nine lives. 
Google Ventures (Photo credit: Wikipedia)
Maybe Page’s latest venture does, too.  Calico is considered the brainchild of Bill Maris, managing partner of Google Ventures, who became the catalyst for the company when he noted that hundreds of firms were focused on curing a variety of medical conditions and diseases, yet none were focused on the root cause of disease, or on what causes the body to progressively fail over time.  The issue was never whether we understood the mechanisms involved in death, which stem largely from progressive genetic degradation as the body ages.  What was at issue was whether it was possible not only to identify the specific causes of aging, but also to develop treatments that would effectively slow, stop or even reverse them.

It was with this stated goal in mind that Arthur Levinson, chairman and ex-CEO of the biotechnology company Genentech, was tapped to head Calico.  Levinson, who also currently chairs Apple’s board of directors, received his blessing from Apple CEO Tim Cook, who recently stated, “For too many of our friends and family, life has been cut short or the quality of their life is too often lacking. Art is one of the crazy ones who think it doesn’t have to be this way. There is no one better suited to lead this mission and I am excited to see the results.”

Curing Cancer is Not as Big of an Advance as You Might Think

Larry Page (Photo credit: Wikipedia)
Obviously, some people believe that trying to “cure death” is a topic best relegated to the lunatic fringe, along with such things as Bigfoot research.  It did not help matters that Larry Page was quoted in a Time magazine interview as saying, “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.”

This caused other journalists to respond in kind, such as Digital Trends’ Andrew Couts, who wrote, “Okay, so Page doesn’t think curing cancer would be that big of a deal – a notion that I’m certain offended a great many people, researchers and victims alike. For me, however, the fact that the tech world’s elite wants to cure death – and think that they can do it – comes as little surprise: Of course they want to live forever – they’re super successful rich people!”
(Source: http://www.digitaltrends.com/opinion/google-calico-disrupting-death/)

Nevertheless, all sarcasm aside, some of the serious questions posed by Calico that need to be addressed are:
1. Is it possible to reprogram or reset the body’s biological clock?
2. If not, can the next generation of medical technology significantly extend life?
3. Does bionics represent the ultimate solution?
4. If it is ultimately possible to truly cheat death, what are the consequences?

Can We Live Forever? That is the Real Question

To start with, we have to look at the strides in longevity that humanity has made over the centuries.  People born in the year 1800 had an average life expectancy of 35, while those born today have a life expectancy of 75-80 years.  So what has to happen for this trend to continue?  Will we live to be 150-160 years of age by the year 2200?  The reason we live longer than our ancestors has mostly to do with the fact that we have learned how to combat disease, treat life-threatening conditions such as diabetes, heart disease and cancer, and better understand the nutritional requirements of the human body.
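As a back-of-the-envelope check on that 150-160 guess, here is a minimal Python sketch that extrapolates the article’s own figures of 35 years in 1800 and roughly 77.5 years today; the constant compound-growth assumption is purely illustrative, not a demographic forecast:

```python
# Naive extrapolation of historical life-expectancy growth.
# The data points are the article's own; constant compound growth
# is an illustrative assumption, not a demographic model.

def implied_annual_growth(e0, e1, years):
    """Compound annual growth rate implied by two data points."""
    return (e1 / e0) ** (1.0 / years) - 1.0

rate = implied_annual_growth(35.0, 77.5, 2013 - 1800)
projection_2200 = 77.5 * (1.0 + rate) ** (2200 - 2013)

print(f"Implied growth: {rate:.2%} per year")            # ~0.37% per year
print(f"Naive 2200 projection: {projection_2200:.0f}")   # ~156 years
```

Interestingly, the naive projection lands right in the article’s 150-160 range, though only if the trend keeps compounding.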

The problem is that even if we were to cure all disease and, through a combination of radical surgical procedures, transplantation and artificial organs, effectively treat all known physical maladies, this would not be sufficient to keep us alive indefinitely.  Like it or not, inside every one of us is a ticking time bomb that ensures cell death.
The Human Life Span, Twenty-Five Years Ago vs. Today (1939) (Photo credit: yorkd)

When you get right down to the basis of life, be it a human or a simple bacterium, everything revolves around the simple act of cell division.  Research has shown that healthy cells are programmed to reproduce a limited number of times before they die.  In fact, if you take cells from something old and transplant them into something young, the older cells will still die at their preordained time.  This is why, as people age, their bones become brittle, their skin wrinkles, their hair thins, and their bodies become less able to ward off disease.  We are all, in a sense, programmed to self-destruct.

The phenomenon known as the Hayflick Limit has been recognized since 1961, when Dr. Leonard Hayflick, Professor of Medical Microbiology at Stanford University, first discovered that human cells divide only a limited number of times in vitro.  Author of the book “How and Why We Age,” first published in 1994, Hayflick demonstrated that during continued mitosis the ends of the chromosomes, called telomeres, progressively degrade.  With each division, the machinery that duplicates DNA fails to fully copy these chromosome ends, and the accumulated shortening ultimately shuts down cell replication.  He also showed that when this mechanism breaks down, the result is either cell death or malignancy.  That’s right, sports fans: cancer cells, while deadly, can live forever.
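To make that mechanism concrete, here is a toy simulation of the idea: every division trims a bit off the telomere, and once the budget is spent the cell line stops dividing.  The constants are illustrative assumptions, chosen only so the limit lands near the roughly 50 divisions Hayflick observed in vitro:

```python
# Toy model of the Hayflick Limit. All constants are assumptions
# picked for illustration, not measured biology.

TELOMERE_START = 10_000    # starting telomere length in base pairs (assumed)
LOSS_PER_DIVISION = 200    # base pairs lost each division (assumed)

def divisions_until_senescence(telomere=TELOMERE_START):
    """Count how many times a cell line can divide before senescing."""
    divisions = 0
    while telomere >= LOSS_PER_DIVISION:
        telomere -= LOSS_PER_DIVISION  # the end-replication problem in action
        divisions += 1
    return divisions

print(divisions_until_senescence())    # -> 50, the line's built-in limit
```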

So-Called Methuselah Mice Live Twice as Long as Their Brethren


Laboratory mice, Children's Hospital Los Angeles Research Institute (Photo credit: Wikipedia)
Of course, that didn’t stop other researchers from looking for ways to slow the process down.  While many have espoused the ingestion of compounds known to halt the production of cell-damaging free radicals, others think that telomerase, an enzyme that rebuilds the protective caps on the ends of chromosomes, could be the answer.  The problem is that, to date, no studies have proven that either approach significantly increases the lifespan of mammals.

However, a series of experiments with mice begun in 1986 by Roy Walford and Richard Weindruch reported that mice whose diet was restricted by 30 percent could live up to twice as long as those fed a normal diet.   Before you start pulling in your belt and breaking out your Atkins Diet plan, let me also point out that a similar experiment with rhesus monkeys, begun in 1987 by the National Institute on Aging, reported health benefits but did not demonstrate an increased lifespan.

And Then There’s Robo-Sapien


Crop and moustache from Androide pic (Photo credit: Wikipedia)
What immortality boils down to, in essence, is correcting the built-in genetic copy error while preventing immortal cells from replicating uncontrollably into cancer.  When you consider that until recently such things as gene splicing and gene therapy were the stuff of science fiction, it is entirely conceivable that we will eventually be able to program our genes to turn off the self-destruct mechanism.  If that does not work, there are other paths to immortality.  Take entrepreneur and author Ray Kurzweil, for instance.

Described as “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes, Kurzweil was ranked #8 among entrepreneurs in the United States by Inc. magazine, which called him the “rightful heir to Thomas Edison.”  Ray was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition software, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition software.  Ray is the recipient of the $500,000 MIT-Lemelson Prize, the world’s largest prize for innovation.  In 1999, he received the National Medal of Technology, the nation’s highest honor in technology, from President Clinton in a White House ceremony.  And in 2002, he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office.

Among other claims, Ray has also postulated that by 2045 an event known as “the singularity” will occur, allowing humans to fully integrate their psyches with machines.  Were that to happen, all any of us who could afford it would have to do to cheat death is upload what we call our personality into a robot and, voila, instant immortality.

Russian Billionaire is Taking the Plunge

Russian billionaire Dmitry Itskov is so enamored with this concept that he has already begun construction of a robotic replica of himself, so that once the technology of mind transfer has been worked out, he can be the first billionaire on the block to merge with a machine.  Since he is currently 32 years old, this would make him only 64 in 2045, which gives him a certain amount of wiggle room should the technology take a bit longer to become a reality.


What if These Guys are Right?

Just as the Wright Brothers were the first to achieve winged flight and Neil Armstrong stood on the surface of the Moon, when humanity puts its mind to a problem, there is a high probability we can solve it.  The real issue, as I see it, is not a matter of technology.  It is a matter of practicality.  In other words, what happens if we get it right?

What are the Ramifications?

Population of the world and its regions, in millions; solid line: medium variant; shaded region: low to high variant; dashed line: constant-fertility variant; data from http://esa.un.org/unpp/ (Photo credit: Wikipedia)
Think about the ramifications of immortality.  Currently there are more than seven billion people living on Earth.  Even taking into consideration such things as accidental death and homicide, if we all woke up tomorrow morning with the realization that we would never die, how long would it be before we all starved to death?  Considering that it currently takes our global population of mere mortals less than 40 years on average to double, it would not take long for humans to run out of resources.  This growth rate would ensure that within a generation or two we would all face starvation.  I do not know of any technology that would enable us to keep 20 billion people fed, and we have not yet invented starships capable of interstellar travel.
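For what it’s worth, the arithmetic backs this up.  A quick sketch using the figures above (7 billion people and an assumed 40-year doubling time) shows how soon we would pass 20 billion:

```python
import math

# Years for the population to grow from 7 billion to 20 billion,
# assuming the 40-year doubling time cited above holds steady.

population = 7e9
target = 20e9
doubling_time = 40.0  # years, per the article's figure

years = doubling_time * math.log2(target / population)
print(f"About {years:.0f} years")  # ~61 years, i.e. a generation or two
```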

Of course, like most other technologies, it will in all likelihood take a number of years for the benefits of immortality to reach the masses, which means that at first only the wealthy will be able to afford such a luxury.  This could prove problematic, since it would enable the super-rich to consolidate their power, and that is usually how revolutions have been fomented since human civilization began.

Would Immortality be a Gift or a Curse?

Even for those in possession of immortality, it could prove more of a curse than a gift.  A number of fictional works have been penned over the years about the perils associated with eternal life.  In “The Picture of Dorian Gray,” the main character barters his soul for eternal youth, only to pay the price in depravity and despair down the road as everyone around him ages and dies.

As usual with human knowledge, wisdom in many cases takes a back seat.  If the Wright Brothers had foreseen what would become of their invention, with entire cities being carpet-bombed and millions of civilians killed forty short years later, would they have stuck to building bicycles?  Who knows, but for what it’s worth, if it came down to a choice between curing death and eliminating taxes, I for one would rather see all of us a little richer rather than a whole lot older.

In this article, I discussed Calico’s grandiose goals and current efforts to thwart death, disease and aging, along with efforts by science to cheat death and disease by medical, medicinal and/or robotic means.  The article also covered prior attempts by humanity to achieve immortality, explained why science is getting closer to making headway toward this goal, and discussed the ramifications of actually reaching it.  It has been my pleasure making this journey with you.


If you like this article, you can find more by typing “Google” in the search box at the top left of this blog. If you found this article useful, share it with your friends, family and co-workers. If you have a comment related to this article, leave it in the comments section below. If you would like a free copy of our book, "Internet Marketing Tips for the 21st Century", fill out the form below.

Thanks for sharing your time with me.



Since 1995, Carl Weiss has been helping clients succeed online.  He owns and operates several online marketing businesses, including Working the Web to Win and Jacksonville Video Production. He also co-hosts the weekly radio show, "Working the Web to Win," every Tuesday at 4 p.m. Eastern on BlogTalkRadio.com.


The Smartest Guys in the Room



By Carl Weiss
The Internet has without a doubt been one of the most important inventions ever created. When it comes to revolutionizing the way in which we perceive and operate in the world, it has created a major paradigm shift in all things human.  And like many inventions, the web was the result of the labor of many talented people.  Some of those people wound up being relegated to footnote status while others went on to become Internet icons.

And So it Begins


The Internet was actually an outgrowth of the ARPANET, a Cold War project designed to safeguard government communications in case of nuclear attack by the Soviet Union.  Developed in the sixties and seventies, the ARPANET program led to a number of internet protocols, such as TCP/IP, that formed the foundation of the internet we all use today.

The only problem was that back in the sixties and through much of the seventies, the only entities with access to computers were governments, Fortune 500 companies and universities.  Then, in 1976, a couple of guys named Steve invented and began selling one of the first personal computers, called the Apple.  The problem back then was that these fledgling computers were not very powerful, and they were expensive.  The first Apples sold for $666.66, at a time when many compact cars sold for a couple of thousand dollars.


Steve and Bill - Best Buddies?

Add to this the fact that their computer didn’t do a lot, and it was obvious that they would have to woo a number of software developers.  Some of these included Fred Gibbons, Mitch Kapor and Bill Gates.  While people today think of Microsoft as a mortal enemy of Apple, back then the two were kissing cousins.
Steve Jobs and Bill Gates at the fifth D: All Things Digital conference (D5) in 2007 (Photo credit: Wikipedia)
It wasn’t until a few years later, with the advent of the IBM PC, that the lines were drawn in the sand and Apple and Microsoft wound up in opposing camps.  In the early days, not only did every Apple computer come equipped with Gates’ version of BASIC, but in a 1983 interview Bill stated that he expected to generate half of Microsoft’s revenues in 1984 from Macintosh software.

While many people today don’t instantly recall Fred Gibbons and Mitch Kapor, their contributions kept Apple’s train on the tracks during the early half of the '80s.  Fred’s firm, Software Publishing Corporation, created a number of software products, chief among them Harvard Graphics, which brought on-screen graphics capabilities to early personal computers.

First Killer App


Mitch Kapor, developer of Lotus 1-2-3, was one of the rock stars of software development in the early days of personal computing.  In fact, Lotus 1-2-3 became one of computing’s first killer apps, one which would eventually contribute to the popularity of Apple’s nemesis, the IBM PC.  That doesn’t mean there weren’t other developers on the hunt for killer apps of their own.  In fact, Lotus 1-2-3 was itself an offshoot of an earlier popular spreadsheet program called VisiCalc.

1-2-3 boingmash... Mark Frauenfelder, Xeni Jardin, Cory Doctorow and Mitch Kapor (Photo credit: Wikipedia)
Of course, competition among software developers was not the only place where the microcomputer war was heating up.  Back then there were a number of companies trying to muscle in on the hardware game as well.  Chief among them was Tandy/Radio Shack, whose TRS-80 was a popular brand at the time (although it didn’t help that detractors of the brand used to refer to it as the “Trash-80”).  The Commodore 64 was another successful brand, selling some 17 million units.

IBM Gets Down to Business


By 1980, more than 50 microcomputer systems were on the market, and this is when IBM, the titan of computer manufacturing, decided it was time to join the fray with a micro of its own.  Before 1981, IBM was the dominant player in mainframe computers; it had never even considered creating a microcomputer.  In fact, so sudden was its entry into this burgeoning market that it did something unprecedented: instead of taking the time to create an entirely proprietary product, it cobbled together the IBM PC from off-the-shelf technology.  (IBM didn’t even have an operating system for a PC, which opened the door for Bill Gates, whose MS-DOS quickly became the de facto operating system for the IBM PC and an eventual swarm of clones.)
Apple I at the Smithsonian Museum (Photo credit: Wikipedia)

So many competitors began to pile on that even those who had enjoyed hegemony during the early going (such as Apple) began to feel the heat.  By 1982, the marketplace had reached critical mass, and a shakeout was inevitable.  Not wanting to wind up kicked to the curb, Apple’s Steve Jobs decided it was time to reinvent the company by once again looking into the crystal ball to determine where personal computing was headed.  Taking $50 million of the company’s money, Steve assembled a team of the best and brightest at Apple and created what he thought would be the next leap forward in personal computing technology.  Called the Lisa, the computer was released in January of 1983 at a price of $9,995.  Even though the system was well ahead of its time, its launch was a commercial failure, one that would ultimately cost Jobs his job.  (The release of the Macintosh the following year, which was both faster and cheaper, saved Apple from joining the technological scrap heap where many of its competitors wound up.)
Apple Lisa with a ProFile hard drive stacked on top of it (Photo credit: Wikipedia)

Steve Jobs' Other Job


This failure did not deter Jobs, who, along with several other ousted Apple employees, went on to start NeXT Computer, Inc. in 1985.  While NeXT only sold around 50,000 units and was ultimately absorbed by Apple for $429 million, several of the concepts developed at NeXT were incorporated into later Apple systems, including parts of the OS X and iOS operating systems.  During his hiatus from Apple, Steve Jobs also dabbled with another company called Pixar, one that even George Lucas had lost faith in.  Pixar would later go on to produce a string of animated features, many of which received Academy Awards.  Jobs also clearly had a bead on the NeXT big trend of the 1990s, which he referred to as “interpersonal computing” and which would soon appear under a different moniker: the Internet.

The term “Internet” was actually coined back in 1974 as an abbreviation of “internetworking.”  Of course, at that time the only entities internetworking were military sites and universities.  By the 1980s, however, NASA had joined the fray by developing the NASA Science Network, which allowed scientists to share data on a global basis.  This coalesced into the NASA Science Internet, which eventually connected more than 20,000 scientists worldwide.  As interest in worldwide networking grew, the technology spawned other early adopters, such as Usenet, UUCPNet, FidoNet, JUNET and NSFNET.   During the late 1980s the first Internet service providers were formed, starting with the first commercial dialup service in the US, called "The World."

The Internet Becomes the World Wide Web


By the early '90s, search tools such as Archie and Gopher had sprung up, along with the first web browsers, such as ViolaWWW.  A far cry from the multimedia offerings we have come to know and expect today, these early search engines and browsers offered little more than text-only listings, more reminiscent of Craigslist than Google.  That did not mean there wasn’t market share to be had.

Yahoo!Xtra homepage (Photo credit: Wikipedia)
As the World Wide Web grew, search engines and web directories began to spring up like weeds, with names such as WebCrawler, Lycos, HotBot, Excite, Yahoo (founded in 1994) and AltaVista.  Several new web browsers also hit the market, the most notable of which was Netscape, which appeared in 1994.  Clearly a cut above the competition, Netscape Navigator was the gold standard for surfing the web in the 1990s.  In fact, the product was such a hit that only 16 months after its inception the company went public, with its stock soaring from $28 to $58 on opening day.  It also began the feeding frenzy later called the Tech Bubble, which would run rampant for the next six years, creating a number of billionaires in the process.

Lots of Billionaires are Created


Netscape Communicator (Photo credit: Wikipedia)
One such billionaire definitely took notice of Netscape in a big way.  Bill Gates at Microsoft knew a good thing when he saw it, and he knew that Microsoft had to stake a claim on the Internet.  By August of 1995, Microsoft announced the introduction of Internet Explorer 1.0.  While it took Microsoft more than five years to gain the upper hand over Netscape, its dominance in the operating-system market opened the door for it to chip away at Netscape’s market share, so much so that by June of 2006 netscape.com went dark for good.  That would seem like good news for Bill at Microsoft.  It probably would have been, had it not been for one other upstart named Google.

Google began in March of 1996 as a research project by two Stanford students, Larry Page and Sergey Brin.  Ironically, their original idea was not to design a web browser or search engine; what they were out to develop was the world’s first digital library.  In search of a dissertation theme, Page considered exploring the mathematical properties of the Internet by turning its link structure into a gigantic graph.  Page’s web-crawling software, called BackRub, began exploring the web autonomously, with Page’s own website serving as ground zero.  His research, with Brin’s help, eventually morphed into an algorithm the pair named PageRank, which became the nexus of their Google search engine; the google.com domain was registered on September 15, 1997.
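For the technically curious, the core idea behind PageRank, that a page is important if other important pages link to it, fits in a few lines.  Here is a minimal sketch of the textbook power-iteration formulation; the toy link graph and the 0.85 damping factor are illustrative assumptions, not Google’s production algorithm:

```python
# Minimal PageRank sketch: power iteration on a tiny link graph.
# The graph and damping factor are illustrative assumptions.

DAMPING = 0.85

def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline, then collects a share of
        # the rank of each page that links to it.
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += DAMPING * rank[page] / len(outlinks)
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

On this three-page graph, page C scores highest because both A and B link to it, which is the whole insight in miniature.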

The Birth of the 800-Pound Gorilla


Google's homepage, 1998-1999 (Photo credit: Wikipedia)
By December 1998, Google had an index of about 60 million pages, and already a number of Internet gurus were arguing that Google’s search results were superior to those of its competitors.  In March 1999, the company moved to Palo Alto, and in June of that same year it secured $25 million in equity capital from the venture capital firms Kleiner Perkins Caufield & Byers and Sequoia Capital.  While Brin and Page were hesitant to take the company public, it was Google’s IPO on August 19, 2004 that shook the pillars of the Internet.  The company would ultimately become not only the dominant search engine online but also the owner of the world’s most popular web browser (Chrome), the world’s most popular video portal (YouTube) and the world’s most popular cellphone operating system (Android).

(Photo credit: tomsguide.com)
Of course, like many innovators who came before them, the latest successors to the Internet crown are not always omniscient.  Several of Google’s latest tech forays have yet to pay off, including a computer worn on your face (Google Glass), a pair of mysterious barges berthed in San Francisco Bay and Maine, and communication balloons designed to bring the Internet to sub-Saharan Africa.  Not to be outdone, Apple announced a product a year ago, the iWatch, that while intriguing has yet to hit the market.

In this article, I have laid out a portrait of the players and events that had a significant impact on the creation of the Internet: from the infancy of the ARPANET and the early days of Apple and IBM personal computing, through killer apps like Lotus 1-2-3, the creation of the World Wide Web, Netscape, Microsoft's Internet Explorer and the search engine boom, to the birth of Google.

Love them or hate them, one thing you can say about the smartest guys in the room is that they will never leave you bored. I can’t wait to see what comes "NeXT." If you liked this article, pass it on to your friends and co-workers. If you have a comment, enter it below.  


Until next time, keep watching the WWW evolve. 

If you like this article, you can find more by typing “inventors” or “high tech” in the search box at the top left of this blog. If you found this article useful, share it with your friends, family and co-workers. If you have a comment related to this article, leave it in the comments section below.

If you would like a free copy of our book, "Internet Marketing Tips for the 21st Century", fill out the form below. 




Since 1995, Carl Weiss has been helping clients succeed online.  He owns and operates several online marketing businesses, including Working the Web to Win and Jacksonville Video Production. He also co-hosts the weekly radio show, "Working the Web to Win," every Tuesday at 4 p.m. Eastern on BlogTalkRadio.com.


