If you’re like me, you feared a biological disaster more than an AI disaster.
Many people have feared AI in the past few years. There was a particular moment when Elon Musk, Mark Zuckerberg, Andrew Ng, and others publicly voiced extreme opinions on AI.
It was around that time, in 2018, that the German inspect magazine reached out to me for an interview Q&A. They asked the question “Where do you think the future will lead us regarding Deep Learning?”, to which I answered:
“In my opinion, people fear Deep Learning too much. We have much more to fear from biological engineering (e.g.: people crafting viruses and bacteria that would kill us). […] I expect to see lots of new productivity tools emerging from Deep Learning technologies […]”
They even went further and titled the article “No Fear of Deep Learning” (“Keine Angst vor Deep Learning”). You can also read more of my notes on the future of Deep Learning here.
Do you still hear people fearing an AI-driven end of the world? Give them a real reason to worry and point them to virus editing instead. The end of the world by AI won’t happen anytime soon.
I’ve recently learned that Bill Gates feared a global pandemic too. Heck.
Here is the magazine in which the article was published: www.inspect-online.com.
Note: I am posting the photo of the article with their written permission. Click here to open the article in HD.
See Appendix A below for my answers to common questions.
Appendix A: My Answers to Common Questions.
I still worry. How should we be careful?
If you still worry, please work on creating the brain-computer interface (BCI) to save us from the inevitable. Here is some text I extracted from this source:
SPOILER ALERT: you may want to read the source article before reading the punchline below. Okay, go:
“But then, one night while working on the post, I was rereading some of Elon’s quotes about this, and it suddenly clicked. The AI would be me. Fully. I got it.
[…]
Elon sees communication bandwidth as the key factor in determining our level of integration with AI, and he sees that level of integration as the key factor in how we’ll fare in the AI world of our future: ‘We’re going to have the choice of either being left behind and being effectively useless or like a pet—you know, like a house cat or something—or eventually figuring out some way to be symbiotic and merge with AI.’
[…]
But time is of the essence here—something Elon emphasized: ‘The pace of progress in this direction matters a lot. We don’t want to develop digital superintelligence too far before being able to do a merged brain-computer interface.’”
To sum up, AI should be an extension of ourselves, much as the car is an extension of our own body while we drive. This can be achieved by building the BCI so we are not left behind by the inevitable rise of AI / AGI.
Why Separate Deep Learning from Biological Engineering?
Both AI and biological engineering can be used for harmful purposes, like nuclear technology and many other innovative technologies before them.
Those rad technologies, when they first emerge, often go through phases where they can become uncontrollable, or else very hard to control.
So why pass the puck to biological engineering? The distinction I draw here is aimed at the alarmists who want to focus on the threat that will arrive first. Back in 2018, the public’s fears seemed mainly focused on AI, amplified far beyond what was then a rather low risk. Some people will worry about anything, and AI seemed to be a hyped, misunderstood scapegoat.
More details:
Deep learning is good at solving very specific tasks at the moment, but I believe that for it to have free will and become a threat by over-optimizing its given goal (a.k.a. be uncontrollable), it would need much, much more maturity, much, much more computing power, and it would need to be combined with Reinforcement Learning (RL).
There are also the possible upcoming limits of Moore’s Law, and Quantum Computing may arrive before AGI (I’d welcome an opinion on that from anyone who knows hardware and Moore’s Law well enough).
Your Question Here
Ask your own question on the original LinkedIn post.