The Singularity Is Near. Ray Kurzweil published this seminal book in 2005, capturing the attention of technologists with his predictions, and even of philosophers contending with what those predictions meant for the human condition. In 2006, the tech talk across the industry was all about the coming Web 3.0. We were fixated on the internet, but Kurzweil went far beyond it, describing the many intersecting paths of all the sciences. When Humans Transcend Biology: that part of the subtitle is what caught my attention. I read through its 600-plus pages with fascination, and I will say he was optimistic, or at least fatalistic.
Kurzweil describes the technology trendline. Technological advancement, he points out, is not on a linear path but on an exponential one; it has been accelerating and is assuredly moving toward a point, the singularity. In his view, that point is not far off, the inflection point, like the heel of the hockey stick. He explains further that it is not one science but all of them, advancing synergistically with the aid of computing, that make this possible: physics, chemistry, genomics, robotics, and biology, to name just a few. It takes intelligence to bring them together. We have to close the man-machine interface until no interface is needed at all: no keyboard, no 60-inch display.
We will need an accelerated kind of biological evolution, driven by artificial intelligence (AI), to bring the advances in science together. Humans transcend biology when the power of AI and all the other technology becomes part of our own biology: the kind that can repair the body at the cellular level and that gives our brains a new CPU architecture, a parallel computing architecture. Our own evolution will need to accelerate if we hope to keep pace with all the other technology we are creating.
Many have warned about the dangers of AI, contemplating being left behind by our own creations or, worse, our creations turning against us. When it is technology making technology, humans no longer needed, the sci-fi movies become far too real. Kurzweil argues that we will soon have molecular-sized (nano) AI-enabled compute technology that can integrate into the human brain cell. This is the next “giant leap for mankind.” Call it techevolution: humans sharing the apex with our other AI tech, able to keep up with a little help from our tech-enabled biology.
Science fiction? Let’s consider the possibility that Kurzweil is right. Is it in our programming, in our brains, to make better and better tools without end, reckless even to the potential that they can come back to harm us? Is there anything we can do to keep us humans in control of our own technology, to make it secure for us? Or is it beyond our capacity to influence this programming, like trying to hold back the tides?
My dread is that at least part of Kurzweil’s argument is wishful thinking, the part about how “it will all turn out okay.” We have had nuclear technology since the 1940s, both the good kind, nuclear power, and the bad, ICBMs. No one can say with certainty that the bad ones won’t be used again and that it will all turn out okay in the future. For a while we made treaties to stop the nuclear arms race; the treaties were violated before the ink was dry. There are lessons from our creation of nuclear technology to consider as we think through the question, “Can we make the AI of the singularity secure for us?”
I am convinced by Kurzweil’s treatise: technology will reach the singularity; it is inexorably coming, the inflection point near, technical progress carrying its own momentum like a runaway train, with we humans driving, for now. Yet my experience in all my past roles, as a part of this technological trend, is that we don’t know what we are doing. I read his arguments and came away thinking that Kurzweil trivializes the many wrong paths all of this can take. I see it instead through the old axiom, Sod’s Law, also known as Murphy’s Law: “If something can go wrong, it will.” There is another law, or at least it should be a law, attributed to Paul Ehrlich: “To err is human, to really foul [use your own replacement word] things up you need a computer.” We should be greatly alarmed. It may not turn out okay at all.
What should we, who have made cybersecurity our life’s work, be doing to provide counsel and action on the uncontrolled use of AI by those blinded by its progress and oblivious to its consequences? In the next posting, we start to dive into the more practical parts of this series, seeking answers with an eye on what Kurzweil is telling us.