How can neuroscience inform Artificial Intelligence research and practice, and what are the benefits and challenges involved?

Short Article: Branching into brains – Shai and Larkum 2017.
Long Article: Neuroscience-Inspired Artificial Intelligence – Hassabis, Kumaran, Summerfield and Botvinick 2017

Neuroscience has a long history of influencing Artificial Intelligence. One of the primary examples is Reinforcement Learning, where a body of research originally developed to explain animal learning grew into an active field of Artificial Intelligence research. Similarly, modern deep learning approaches such as Neural Networks were inspired by the biological brain, especially the way specific neurons are activated. However, some aspects of current deep learning illustrate how the field is diverging from neuroscience: altering a network based on how accurately it finds the solution has no clear counterpart in the brain, where modification of the network by an external source is rare. While research is ongoing into neural models that better mimic the biological brain, these limitations do not spell the end of neuroscience's influence on modern Artificial Intelligence; for example, current ideas about memory in neuroscience are showing promise in furthering Artificial Intelligence research.

Reinforcement Learning is a prime example of how the fields of neuroscience and Artificial Intelligence can learn from each other. Hassabis et al. (2017) describe the two pillars of AI as deep learning and reinforcement learning. Although the two are intertwined, and it would be an oversimplification to treat them as entirely separate pillars, the influence neuroscience has had on Reinforcement Learning is unquestionable. The idea of maximising future rewards was born out of research into animal behaviour and was transposed directly into Reinforcement Learning in Artificial Intelligence. This exemplifies how neuroscience has successfully influenced Artificial Intelligence in the past and foreshadows how it can continue to do so.
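The idea of maximising future rewards can be made concrete with a toy example. The sketch below is a minimal tabular Q-learning agent (a standard reinforcement-learning algorithm, not drawn from either article) that learns to reach a reward at the end of a hypothetical five-state chain; all sizes and parameters are illustrative.

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain: the agent starts in state 0
# and receives a reward of 1.0 only upon reaching the terminal state 4.
# Action 0 moves left, action 1 moves right (clipped at the chain ends).
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # temporal-difference update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

policy = [int(np.argmax(Q[s])) for s in range(4)]  # greedy action per state
```

After training, the greedy policy moves right in every state, towards the future reward, mirroring how reward-maximising behaviour was originally modelled in animals.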

Modern Neural Networks have likewise taken inspiration from the brain and its makeup of firing neurons. Shai and Larkum (2017), in "Branching into brains", imply that this is where the similarity ends and that there may simply be a fundamental difference between the two. Hassabis et al., however, take a more nuanced approach: while they point out key differences and limitations in neuroscience's impact on artificial intelligence research, they frame these as further challenges in which neuroscience can continue to play a vital role.

Correspondingly, neural networks frequently rely on back-propagation during their training phase to achieve a model that provides robust and reliable predictions. In essence, this consists of fine-tuning the weights inside the neural net based on the error rate from the previous iteration or epoch (Hecht-Nielsen, 1992). A model is built, and a metric such as error rate or loss is used to determine how accurately it currently predicts the outcome, often measured on a test set where the outcome is known. The loss is then fed backwards to tune the weights and parameters of the model. Herein lies the difficulty in comparing backpropagation in Artificial Intelligence with neuroscience. In a biological brain there is no switch between a learning stage and a production stage, nor is there the credit assignment of backpropagation, whereby parameters that contribute more to the correct solution are strengthened and others are discarded. On the traditional understanding of how the brain operates, no credit signal is fed to neurons from an external source: they are connected only locally to other neurons and receive no extra information about the errors they may be making.
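As a rough illustration of the credit-assignment step described above, the following sketch trains a small network on XOR with hand-written backpropagation. The architecture, learning rate and epoch count are arbitrary choices for illustration, not a recommendation.

```python
import numpy as np

# One-hidden-layer network trained on XOR by backpropagation.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # forward pass: compute the current model's predictions
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # backward pass: the output error travels back through the SAME
    # weights (W2) to assign credit to each hidden unit -- the step
    # with no obvious counterpart in a biological brain
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ delta2; b2 -= delta2.sum(axis=0)
    W1 -= X.T @ delta1; b1 -= delta1.sum(axis=0)

mse = float(np.mean((y_hat - y) ** 2))  # final training error
```

The key line is `delta2 @ W2.T`: the error signal is transported backwards through the transpose of the forward weights, which is exactly the operation that biological neurons, connected only locally, would seem unable to perform.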

This issue of credit assignment is addressed both by Hassabis et al. in Neuroscience-Inspired Artificial Intelligence and by Shai and Larkum in Branching into brains. Hassabis et al. try to close the gap by suggesting that adjusting the forward weights can allow backward projections to transmit useful teaching signals (Lillicrap et al., 2016). Lillicrap et al. show that a second, fixed set of random connections can feed error information back into the original network and act as a form of backpropagation. Another suggestion is that plasticity in biological synapses based on local information is itself a form of backpropagation, which Artificial Intelligence researchers are implementing in models such as hierarchical auto-encoder networks and energy-based networks that update their weights on local information only, providing a better simulation of a biological brain.
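The Lillicrap et al. (2016) idea, often called feedback alignment, can be sketched roughly as follows: the backward pass uses a fixed random matrix B in place of the transpose of the forward weights, so the error pathway is separate from the forward synapses. This is a simplified toy version on an invented regression task, with illustrative sizes and learning rates, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 10))
T = rng.normal(0, 1, (10, 3))    # hidden target mapping the network must learn
y = X @ T

W1 = rng.normal(0, 0.1, (10, 20))
W2 = rng.normal(0, 0.1, (20, 3))
B = rng.normal(0, 0.1, (20, 3))  # fixed random feedback matrix (never trained)

losses = []
for step in range(500):
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - y
    losses.append(float(np.mean(e ** 2)))
    # exact backprop would use W2.T here; feedback alignment uses B instead,
    # so no error needs to travel back through the forward weights themselves
    delta_h = (e @ B.T) * (1 - h ** 2)
    W2 -= 0.05 * h.T @ e / len(X)
    W1 -= 0.05 * X.T @ delta_h / len(X)
```

Despite the feedback weights being random and fixed, the loss still falls, because the forward weights gradually come into alignment with B.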

Shai and Larkum, however, take a blunter approach, first confronting the fact that deep learning and biological brains may after all be fundamentally different. They describe how Guerguiev, Lillicrap and Richards propose that the pyramidal structure of cortical neurons could resolve this issue. The concept is that the long branches of pyramidal neurons keep error signals separate from sensory signals, so that the two can be brought together at the right point to drive learning. Guerguiev et al. note that the bottoms of cortical pyramidal neurons are located just where they need to be to receive sensory input, while the tops are well positioned to receive feedback error, and that nearby neurons can act as gates controlling when these two streams of information are combined. This pyramidal-neuron proposal is a more biologically conservative way of bridging the gap between the two fields than some of Hassabis et al.'s suggestions, such as having a second network feed back information in place of direct backpropagation.
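The separation of sensory and error streams can be caricatured in code. The sketch below is not the Guerguiev et al. model, only an illustration of the core idea: each unit holds its feedforward ("basal") and feedback ("apical") signals in separate variables, and a gate controls whether the feedback is allowed to influence the unit. All class and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

class PyramidalLayer:
    """Toy layer whose units keep two segregated compartments."""

    def __init__(self, n_in, n_out, n_top):
        self.W_basal = rng.normal(0, 0.1, (n_in, n_out))    # sensory input weights
        self.W_apical = rng.normal(0, 0.1, (n_top, n_out))  # top-down feedback weights

    def forward(self, x):
        # basal compartment: feedforward sensory drive only
        self.basal = np.tanh(x @ self.W_basal)
        return self.basal

    def integrate_feedback(self, top_down, gate):
        # apical compartment: feedback arrives separately and is only
        # combined with the basal signal when the gate is open
        self.apical = top_down @ self.W_apical
        return self.basal + gate * self.apical

layer = PyramidalLayer(n_in=4, n_out=8, n_top=3)
x = rng.normal(0, 1, (1, 4))     # "sensory" input
fb = rng.normal(0, 1, (1, 3))    # "error" feedback from above

h = layer.forward(x)
closed = layer.integrate_feedback(fb, gate=0.0)  # gate shut: feedback ignored
opened = layer.integrate_feedback(fb, gate=1.0)  # gate open: feedback shapes output
```

With the gate shut the unit's output is purely sensory; opening the gate lets the separately stored error signal modulate it, which is the timing-controlled combination the proposal relies on.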

Additionally, Hassabis et al. illustrate many more areas where neuroscience can, has and will continue to influence Artificial Intelligence research. For example, current ideas about memory in neuroscience can be carried into Artificial Intelligence research, such as episodic memory, which allows an algorithm to learn from stored past experiences; this has been particularly successful in training agents to play Atari video games (Mnih et al., 2013). Further possibilities include imagination and simulation-based planning, where deep generative models have shown some promise in approaching human-like imagination. Humans, for instance, plan hierarchically, considering in parallel the terminal goal, interim choices and piecemeal steps towards it. Current Artificial Intelligence research aimed at mimicking such behaviour has shown significant promise in providing more optimal and smarter solutions.
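The episodic-memory mechanism behind the Atari results is experience replay: past transitions are stored in a buffer and replayed in random mini-batches, so the agent learns from each experience many times rather than once, in order. A minimal sketch follows, with an illustrative capacity and interface rather than the DQN authors' code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Episodic store of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories evicted first

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling breaks the temporal correlation
        # between consecutive experiences
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):                  # overfill to show the oldest being forgotten
    buf.store(t, 0, 0.0, t + 1, False)
batch = buf.sample(4)                 # a random mini-batch of past experiences
```

The bounded `deque` plays the role of a finite episodic memory: once full, each new experience displaces the oldest one.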

Moreover, using biological processes for inspiration, and even as a blueprint, offers a number of advantages for developing Artificial Intelligence. A design that has evolved naturally under Darwinian survival pressures is a particularly attractive starting point: biological organisms, and human brains in particular, have developed into remarkably energy-efficient and versatile organic computers (Moravec, 1998). Bringing this into modern Artificial Intelligence would ostensibly yield high-performance solutions, as much of the work has already been done at the biological level and what remains is to recreate it in silicon.

There are, however, many challenges associated with using neuroscience as a blueprint, or even as a general influence, in Artificial Intelligence. For one, we are still unsure whether the two fields are fully comparable: the fundamental differences identified above have yet to be shown to have a clear solution. Perhaps fields that have influenced each other in the past will be unable to resolve these differences, leaving permanent limits on what neuroscience can provide Artificial Intelligence. Furthermore, if such a fundamental difference exists, following the brain too closely could hinder Artificial Intelligence research precisely where it needs to take a fundamentally different approach to achieve a truly general, "human-like" intelligence. Persisting with neuroscience-influenced research where the solutions lie elsewhere could prove to be the biggest obstacle to improving results in Artificial Intelligence research.

In conclusion, neuroscience has influenced and can continue to influence Artificial Intelligence research, as discussed from Reinforcement Learning to Neural Networks. The credit assignment implemented in neural networks, where the error is propagated backwards to fine-tune the weights and parameters, at first appears to conflict with current neuroscientific knowledge. Nonetheless, a number of approaches currently being researched show promise in bridging this gap: from adding a second network that feeds information back into the original, to exploiting a pyramidal structure in which different streams of information are kept separate until the optimal moment, promising solutions are emerging from neuroscience and being applied to Artificial Intelligence research. Ultimately, as Shai and Larkum point out, there may be fundamental differences between the two fields that will never be reconciled, a challenge that future Artificial Intelligence researchers will continue to grapple with.

References

Guerguiev, J., Lillicrap, T.P. and Richards, B.A., 2017. Towards deep learning with segregated dendrites. eLife, 6, p.e22901.

Hassabis, D., Kumaran, D., Summerfield, C. and Botvinick, M., 2017. Neuroscience-inspired artificial intelligence. Neuron, 95(2), pp.245-258.

Hecht-Nielsen, R., 1992. Theory of the backpropagation neural network. In Neural networks for perception (pp. 65-93). Academic Press.

Lillicrap, T.P., Cownden, D., Tweed, D.B., and Akerman, C.J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 7, 13276.

Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. and Riedmiller, M., 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.

Moravec, H., 1998. When will computer hardware match the human brain. Journal of Evolution and Technology, 1(1), p.10.

Shai, A. and Larkum, M.E., 2017. Deep Learning: Branching into brains. eLife, 6, p.e33066.