So there’s this debate raging between Mark Zuckerberg and Elon Musk about the implications of AI for the future: Is AI going to be dangerous for humankind as Elon Musk says, or is that opinion only “nay-saying” as Mark Zuckerberg believes?
I have two thoughts on this.
First, the human urge to explore, to find out, to invent, can never be curbed, even if it means living dangerously. This powerful human desire to go beyond the known is irrepressible. We resonate in our innermost beings with Tennyson’s Ulysses who, despite being old and feeble, calls himself a
“… grey spirit yearning in desire
To follow knowledge like a sinking star,
Beyond the utmost bound of human thought.”
Second, the potential of human intelligence should never be underestimated. As Yuval Noah Harari, author of the critically acclaimed New York Times bestseller and international phenomenon Sapiens, has pointed out, it is the power of human intelligence that has helped us survive through the millennia of evolution. Intelligence is the very thing that gives our species its name, Homo sapiens. Sapience. Knowledge. Intelligence. Wisdom. Just substitute “human intelligence” for the word “life” in the statement that the character Dr. Ian Malcolm (Jeff Goldblum) makes in Jurassic Park:
“If there is one thing the history of evolution has taught us, it’s that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously, but, uh… well, there it is.”
Icarus and his wings, Prometheus and his fire: we have seared the burning desire to explore, even to explore dangerously, into the myths that make us. So whether we like it or not, our innate human-ness will not leave AI unexplored. We should proceed in our quest for AI-driven solutions, accompanied by both courage and caution.
But it will also be our human-ness that will rescue us from calamity.
What is your opinion?