Attractor Dynamics in the Energy-Accuracy Space in an Artificial Neurovascular Network
Feedforward neural networks, when trained optimally, perform classification and function approximation tasks with reasonably high accuracy. Biological neural networks, unlike their artificial counterparts, require energy, which is supplied by a network of blood vessels. This leads us to ask: is the cerebrovascular network capable of ensuring maximum performance of the neural network while supplying minimum energy? Should the cerebrovascular network also be trained, like the neural network, to achieve such an optimum? To answer these questions, we constructed an Artificial Neurovascular Network (ANVN) comprising a multilayered perceptron (MLP) connected to a vascular tree structure.
The root node of the vascular tree is an energy source, and the terminal nodes of the tree supply energy to the hidden neurons of the MLP. The energy delivered by the terminal vascular nodes to the hidden neurons determines the biases of those neurons. The "weights" on the branches of the vascular tree represent how energy is distributed from a parent node to its child nodes. The vascular weights are updated by a kind of "backpropagation" of the energy demand error that arises in the hidden neurons.
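The energy-distribution scheme above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the paper's exact formulation: we assume a full binary vascular tree, softmax-normalized branch weights so each parent's energy is conserved across its children, and a heuristic update rule that sums the per-neuron energy demand error upward through the tree and shifts each branch weight toward the under-supplied child.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaf_energies(weights, source_energy):
    """Propagate energy from the root down a full binary tree.
    weights[level] has shape (2**level, 2): raw branch weights per parent,
    softmax-normalized so each parent's energy is conserved."""
    energy = np.array([source_energy])
    for w in weights:
        frac = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)  # per-parent softmax
        energy = (energy[:, None] * frac).ravel()                # split to children
    return energy

def update_weights(weights, demand, energy, lr=0.2):
    """One 'energy backpropagation' step (a heuristic sketch): the demand
    error at each terminal node is aggregated upward, and each raw branch
    weight is nudged toward the child whose subtree demands more energy."""
    err = demand - energy                    # per-leaf energy demand error
    for w in reversed(weights):
        child_err = err.reshape(-1, 2)       # errors of each parent's two children
        w += lr * child_err                  # shift share toward under-supplied child
        err = child_err.sum(axis=1)          # aggregate error for the level above
    return weights

depth = 3                                    # 2**3 = 8 terminal nodes / hidden neurons
weights = [rng.normal(size=(2**l, 2)) for l in range(depth)]

e = leaf_energies(weights, source_energy=1.0)
assert np.isclose(e.sum(), 1.0)              # total energy is conserved
```

With this rule, each split converges toward the demand fractions of its two subtrees, so the terminal energies track the hidden neurons' demands while the total supplied energy stays fixed; in the full model, the demand error would come from the MLP's training signal rather than a given demand vector.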
Analyzing the test performance of the ANVN with trained and untrained vascular networks, we found that higher performance was achieved at lower energy levels when the vascular network was trained. This indicates that the vascular network is trained to ensure efficient neural performance. We also found that the energy efficiency of an ANVN reaches a maximum at an optimal hidden-layer size. In addition, for a smaller number of hidden neurons, the per capita energy consumption versus accuracy of the network approaches a fixed-point attractor. Once the number of hidden neurons increases beyond a threshold, the fixed point appears to vanish, giving way to a line of attractors.