With increasing hidden layer size, a neural network shows improving training performance, while test performance peaks at an optimal hidden layer size. In this study, the “energy efficiency” of an Artificial Neurovascular Network also reaches an optimum at a finite hidden layer size, because the gain in training accuracy is accompanied by an increase in total energy consumed. Can we find the optimal hidden layer size by observing the relation between energy consumption and accuracy across different hidden layer sizes? In this study, an MLP is connected bidirectionally to a simple vascular tree structure. An energy source supplies the energy demanded by the vascular tree. The biases of the hidden layer neurons are determined by the energy available at the leaf nodes near each neuron, and the vascular weights are updated according to the energy demand of each neuron. For smaller hidden layer sizes, the network approaches a stable fixed point in the plot of per capita energy consumption versus accuracy, whereas once the hidden layer size crosses a threshold, the fixed point appears to vanish, giving way to a line of attractors.
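The coupling described above (hidden-neuron biases set by energy at vascular leaf nodes, vascular weights adapting to per-neuron demand) can be sketched in a toy form. Everything below is an illustrative assumption — the class name, the bias mapping (bias = leaf energy minus a fixed threshold), and the demand-proportional vascular update are placeholders, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeurovascularMLP:
    """Toy sketch: an MLP with one vascular leaf node per hidden neuron.
    The energy delivered to each leaf sets that neuron's bias, and the
    vascular weights are redistributed toward neurons with higher demand.
    All update rules here are assumptions for illustration only."""

    def __init__(self, n_in, n_hidden, n_out, source_energy=1.0):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        # Vascular weights: fraction of source energy routed to each leaf,
        # initialized uniformly and kept normalized to conserve energy.
        self.v = np.full(n_hidden, 1.0 / n_hidden)
        self.source_energy = source_energy

    def forward(self, x):
        # Energy at each leaf node determines the bias of the nearby
        # neuron (assumed mapping: bias = leaf energy - 0.5 threshold).
        leaf_energy = self.source_energy * self.v
        self.h = np.tanh(self.W1 @ x + (leaf_energy - 0.5))
        return self.W2 @ self.h

    def update_vasculature(self, lr=0.1):
        # A neuron's demand is taken as its activation magnitude; the
        # vascular weights relax toward the normalized demand profile.
        demand = np.abs(self.h)
        demand /= demand.sum() + 1e-12
        self.v = (1.0 - lr) * self.v + lr * demand
        self.v /= self.v.sum()

net = NeurovascularMLP(n_in=4, n_hidden=8, n_out=2)
x = rng.normal(size=4)
for _ in range(50):
    y = net.forward(x)
    net.update_vasculature()
```

Iterating forward passes and vascular updates on a fixed input lets one track where the (energy per neuron, accuracy) trajectory settles, which is the kind of fixed-point observation the abstract describes.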