Deep Learning isn’t like the human brain. What is the future of research in deep learning and optimization techniques? What innovations should we expect?
In the mind, thoughts unfold over time, whereas in most deep learning architectures information flows just once, from lower layers to higher layers, in a single forward sweep, with no recurrent connectivity or loops through time. This is a problem for deep learning if we ever want to unlock more powerful algorithms.
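The contrast above can be made concrete with a toy sketch. The dimensions and random weights below are purely illustrative: a feedforward layer computes its output in one sweep, while a recurrent layer loops its own state back in over several time steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, purely illustrative.
n_in, n_hidden = 4, 3
x = rng.normal(size=n_in)

# Feedforward: one sweep from lower layer to higher layer, no time.
W = rng.normal(size=(n_hidden, n_in))
h_feedforward = np.tanh(W @ x)

# Recurrent: the hidden state evolves over time steps, feeding its
# own previous value back in -- closer to activity unfolding in time.
W_in = rng.normal(size=(n_hidden, n_in))
W_rec = rng.normal(size=(n_hidden, n_hidden))
h = np.zeros(n_hidden)
for t in range(5):  # five time steps of recurrent dynamics
    h = np.tanh(W_in @ x + W_rec @ h)

print(h_feedforward.shape, h.shape)  # both (3,), but h depends on time
```

The outputs have the same shape, but only the recurrent state depends on how many time steps were run, which is the extra expressive dimension the feedforward sweep lacks.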
What is the future of research in deep learning and optimization techniques?
You may also like my article on the future of research in deep learning, which explains the Spiking Neural Network (SNN) deep learning algorithm.
Deep Learning in the brain
Elon Musk expects that the Brain-Computer Interface (BCI) will revolutionize how humans communicate. I have a good article on the BCI if you want to learn more, and there is also an awesome blog post by waitbutwhy discussing the subject in depth. Getting there will require research on SNNs.
Innovations to expect
We can expect to see awesome speeds of communication between brains, new apps and tools, and shareable mental models and knowledge bases: lots of productivity and leisure apps, along with the deep learning projects that enable them. A need for open-source deep learning projects will be felt, and as knowledge becomes a commodity, the type of intelligence humans need to develop will change.
Open-Source Software as a way to regain our power and privacy in the digital world
I believe that the operating system of any BCI will need to be open-source, like Linux, to earn the public's trust. Anyone should be able to audit the system themselves (if they want to) in order to trust it. There should be no backdoor into your mind, and no leak of information.
Allowing communication between the brain and machines will likely require interfaces to a common communication format (a shared neural data structure). This converter will need a deep learning algorithm trained only on your own mind, which learns to map your neural activity to and from the universal information-sharing format.
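No such format exists today, so the following is only a hypothetical sketch of the idea: each user owns a personal converter into a fixed-size shared format, and two users with differently-shaped signals exchange a message through it. All names, dimensions, and the linear maps (standing in for trained deep networks) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

SHARED_DIM = 16  # size of the assumed universal interchange format


class PersonalConverter:
    """Maps one user's raw signal into the shared format and back.

    In the article's scenario this would be a deep network trained only
    on that user's data; a random linear map stands in for it here.
    """

    def __init__(self, user_dim):
        self.encode_w = rng.normal(size=(SHARED_DIM, user_dim)) / np.sqrt(user_dim)
        # Pseudo-inverse gives an approximate decoder for this sketch.
        self.decode_w = np.linalg.pinv(self.encode_w)

    def encode(self, signal):
        return self.encode_w @ signal

    def decode(self, shared):
        return self.decode_w @ shared


# Two users with differently-shaped "neural" signals exchange a message
# through the shared format without knowing each other's wiring.
alice, bob = PersonalConverter(32), PersonalConverter(24)
thought = rng.normal(size=32)        # alice-shaped signal
received = bob.decode(alice.encode(thought))
print(received.shape)                # bob-shaped signal: (24,)
```

The point of the design is that only the shared format is standardized; each converter stays private and personal, which matches the local-training argument below.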
The safest approach would be for people to train their own deep learning models locally, to avoid being subject to adversarial attacks, a very important topic. Put briefly: adversarial attacks could trick you into believing anything. I speculate that adversarial attacks could be mounted on the human brain, but mostly only if its interneural connections are exposed, so it is important to keep such models secure and local, and to limit their reach if that ever happens.
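To show why exposed model internals are dangerous, here is a minimal sketch of a gradient-sign (FGSM-style) adversarial perturbation on a toy linear model. Real attacks target trained deep networks; the weights and epsilon here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

w = rng.normal(size=8)   # toy model weights (the "exposed connections")
x = rng.normal(size=8)   # clean input
score = w @ x            # model output (logit)

# For a linear model, the gradient of the score w.r.t. the input is
# simply w. Knowing it, an attacker nudges every feature by at most
# eps in the direction that pushes the score the other way.
eps = 0.5
x_adv = x - eps * np.sign(w) * np.sign(score)

score_adv = w @ x_adv
print(score, score_adv)  # small per-feature change, large score change
```

The perturbation is bounded by eps per feature, yet it shifts the score by eps times the sum of the weight magnitudes, which is exactly why access to a model's internals makes it easy to fool; keeping the model local denies the attacker that gradient.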