The limits of Deep Learning, and its future

Deep Learning isn’t like the human brain. What is the future of research in deep learning and optimization techniques? What innovations are to be expected? In the mind, thoughts happen over time, whereas in Deep Learning the signal is flashed once from lower-placed neurons to higher-placed neurons in a single sweep, without any neural recurrence […]
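To make that contrast concrete, here is a tiny hypothetical PyTorch sketch (not code from the article): a feedforward network consumes its input in one bottom-up sweep, while a recurrent cell keeps a hidden state that evolves across timesteps.

```python
import torch
import torch.nn as nn

# One bottom-up sweep: the input is seen exactly once, no state is kept.
feedforward = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
y = feedforward(torch.randn(1, 8))

# Recurrence: the same cell is applied over time, carrying a hidden state.
rnn_cell = nn.RNNCell(input_size=8, hidden_size=16)
h = torch.zeros(1, 16)
for t in range(5):
    h = rnn_cell(torch.randn(1, 8), h)  # state unfolding across timesteps
```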

Spiking Neural Network (SNN) with PyTorch: towards bridging the gap between deep learning and the human brain

I think I’ve discovered something amazing: Hebbian learning naturally takes place during the backpropagation of Spiking Neural Networks (SNNs). Backpropagation in SNNs engenders Spike-Timing-Dependent Plasticity (STDP)-like Hebbian learning behavior. So: – At first I simply thought “hey, what about coding a Spiking Neural Network using an automatic differentiation framework?” Here it is. – Then I […]
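As a rough illustration of that first step (a hypothetical sketch, not the article's actual model): a leaky integrate-and-fire layer where a smooth sigmoid stands in for the hard spike threshold, so membrane potentials, firing, and resets all stay differentiable end to end under autograd.

```python
import torch
import torch.nn as nn

class SpikingLayer(nn.Module):
    """Toy leaky integrate-and-fire layer; a sigmoid surrogate replaces
    the hard spike so gradients can flow through time."""
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):  # x_seq: (time, batch, in_features)
        potential = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x in x_seq:
            potential = self.decay * potential + self.fc(x)  # leaky integration
            out = torch.sigmoid(potential - self.threshold)  # soft "spike"
            potential = potential * (1.0 - out)              # soft reset after firing
            spikes.append(out)
        return torch.stack(spikes)

layer = SpikingLayer(8, 4)
out = layer(torch.randn(10, 2, 8))  # 10 timesteps, batch of 2
out.sum().backward()                # backprop flows through the time loop
```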

Random Thoughts on Brain-Computer Interfaces, Productivity, and Privacy

I think that in the future, Brain-Computer Interfaces (BCIs) will be absolutely awesome. As much as the thought of using them is attractive, I think they might also be dangerous to us if misused. We can expect to see the emergence of awesome speeds of communication, new apps and tools, shareable mental models and knowledge […]

How to Grow Neat Software Architecture out of Jupyter Notebooks

Have you ever been in the situation where you’ve got Jupyter notebooks (iPython notebooks) so huge that you were feeling stuck in your code? Or even worse: have you ever found yourself duplicating your notebook to make changes, and then ending up with lots of badly named notebooks? Well, we’ve all been there after using notebooks long enough. So how should we code with notebooks?
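One concrete escape hatch (a hypothetical example, not necessarily the article's exact recipe) is to move stable code out of cells into an importable module, so the notebook shrinks to a thin driver instead of being duplicated:

```python
# data_pipeline.py — hypothetical module extracted from a notebook cell
def clean(rows):
    """Drop empty rows; logic that used to live, duplicated, in several notebooks."""
    return [r for r in rows if r]

# Back in the notebook, the cell becomes an import plus a call:
# from data_pipeline import clean
# cleaned = clean(raw_rows)
```

Once extracted, the module can be unit-tested and versioned like any other code, while every notebook that needs it imports the same single source of truth.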

LSTMs for Human Activity Recognition

See the GitHub project here. Human Activity Recognition (HAR) using the smartphones dataset and an LSTM RNN. Classifying the type of movement amongst six categories: – WALKING, – WALKING_UPSTAIRS, – WALKING_DOWNSTAIRS, – SITTING, – STANDING, – LAYING. Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells requires no […]
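Implementation details of the linked repo aside, a minimal sketch of the idea could look like this (PyTorch chosen here; the window length and channel count are assumptions): an LSTM reads raw sensor windows and a linear head maps the last hidden state to the six activity classes.

```python
import torch
import torch.nn as nn

class HARClassifier(nn.Module):
    """LSTM over raw sensor windows; last hidden state -> 6 activity classes."""
    def __init__(self, n_channels=9, hidden=32, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, timesteps, channels)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits over the six movement classes

model = HARClassifier()
logits = model(torch.randn(4, 128, 9))  # e.g. 128-step windows, 9 inertial channels
```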