Large scale processing

Last night there was a blow somewhere south with big breakers rolling in this morning. A couple of us braved these for a rousing swim before breakfast.

This Monday morning, we welcomed the newly arrived participants: Emre Neftci ("Neurons and synapses learn to learn"), Richard George, Christian Mayr ("Neurobiology that serves a purpose in hardware"), Georg Keller ("For sale, cortical theory, well worn"), Stefano Ambrogio ("Emerging memory for machine learning"), Bruno Averbeck ("Neural basis of reinforcement learning"), Oded Ilan ("Event based cameras can bring new products to life"), and ?? from Graz ("I'm not a poet", "Structural plasticity in synapse models").

Then Matt Cook introduced the "Large scale processing" discussion. We wish we knew what processing meant for neural systems. We know what machine processing is, but we don't know how to think about brains. But, you say, we have neural networks. But we don't really know how these work either. There are many models of processing, which he listed:
  1. DNNs
  2. Network dynamics (recurrent networks)
  3. Relational networks
  4. Predictive processing (coding)
  5. Liquid state machines
His main point, which he reemphasized several times, is that there are TOO FEW models now, and that we need more ideas about how computation in brains could take place.

Rodney pointed out that evolution developed certain nuclei in the brain and a topology of their connections, and that these solutions should not be ignored for the sake of mathematical modeling.

Sepp also asked: if we want to understand processing, how can we ignore the influence of emotion on behavior? Yulia pointed out that there are many cognitive models that do include valence.

Both Sergio and Christian then pointed out that these models should also not be detached from how they are configured or trained.

Sepp pointed out that these models lack diversity, unlike, e.g., society, whose diversity is a large part of what gives it its power.

Then Georg Keller took over. He works on mouse cortex. Since the dawn of neuroscience there have been two ideas. The older one, going back to Sherrington and Barlow, is layered deep networks: the brain processes sensory information, makes a decision, and takes an action. It was demonstrated by the feature detectors measured from brain cells. The other view is based on ??

Barry Richmond pointed out that memory, whether genetic, developmental, or learned, is missing from this view.

I pointed out that DNNs are very good at predicting as long as they are trained on huge labeled datasets; they can do NLP and end-to-end speech recognition, and nowadays do not even use RNNs but rather CNNs to make these predictions. Georg agreed, but pointed out that they were discussing a different type of prediction.

Then Matt went on to explain the Hopfield net: how it uses saturating units connected all-to-all with symmetrical connections to store binary patterns (up to about 14% as many patterns as there are neurons), provided the patterns are roughly orthogonal. He then introduced relational networks, which are based in part on population coding. In a population code, neurons encode an input by a set of responses, e.g. for orientation as in the sketch below:

[Sketch: a population of orientation-tuned neurons, each unit responding most strongly to its preferred orientation, so the stimulus is encoded by the pattern of responses across the whole population.]
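
Stepping back to the Hopfield part for a moment: here is a minimal sketch of the construction Matt described, under assumptions of my own (bipolar +1/-1 patterns, the standard Hebbian outer-product rule, asynchronous sign-unit updates); it is an illustration, not his exact formulation.

    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product rule: symmetric weights, no self-connections."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:                 # each p is a bipolar (+1/-1) vector
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / n

    def recall(W, state, sweeps=20):
        """Asynchronous updates of saturating (sign) units until they settle."""
        state = state.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(state)):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    rng = np.random.default_rng(0)
    n_units, n_patterns = 100, 8           # 8 patterns in 100 units, below the ~14% limit
    patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
    W = train_hopfield(patterns)

    cue = patterns[0].copy()
    cue[rng.choice(n_units, 10, replace=False)] *= -1   # corrupt 10% of the bits
    print("pattern recovered:", np.array_equal(recall(W, cue), patterns[0]))

Pushing the number of stored random patterns much past roughly 14% of the number of units is where recall starts to break down, which is the capacity limit quoted above.
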
Then what is the whole population doing? Matt showed how relational networks can represent multidimensional data and how these interconnected populations can pass messages to each other bidirectionally to come to a consensus about the situation. The connections between populations generally are topographic for that dimension, e.g. the C units in the left ABC population are connected most strongly to the C units in the middle CDE population.

[Sketch: several interconnected populations, e.g. ABC on the left and CDE in the middle, with topographic connections between the dimensions they share.]

If this model reflects brain anatomy, then somehow the precision of the messages is reflected by the numbers of axons connecting them. Long range connections are sparse and therefore lower resolution. 
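
To make the message-passing picture concrete, here is a toy illustration of my own (not Matt's actual relational-network formulation): two populations each hold a noisy, population-coded and slightly disagreeing estimate of the same dimension, exchange messages through a topographic weight matrix, and multiplicatively combine the incoming message with their own evidence until they agree.

    import numpy as np

    def topographic_weights(n, sigma=1.5):
        """Banded connection matrix: unit i projects most strongly to unit i."""
        idx = np.arange(n)
        return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)

    def coded_estimate(n, center, width=2.0, noise=0.1, rng=None):
        """Noisy population-coded bump of activity around `center`."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.arange(n)
        return np.exp(-0.5 * ((x - center) / width) ** 2) + noise * rng.random(n)

    n = 32
    rng = np.random.default_rng(0)
    W = topographic_weights(n)

    pop_a = coded_estimate(n, center=10, rng=rng)   # population A thinks the value is ~10
    pop_b = coded_estimate(n, center=13, rng=rng)   # population B thinks it is ~13

    for _ in range(10):
        msg_to_b = W @ pop_a            # what A tells B, blurred by the topographic fan-out
        msg_to_a = W.T @ pop_b          # what B tells A
        pop_a = pop_a * msg_to_a        # combine own evidence with the incoming message
        pop_b = pop_b * msg_to_b
        pop_a /= pop_a.sum()            # keep activity bounded
        pop_b /= pop_b.sum()

    print("consensus peaks:", int(pop_a.argmax()), int(pop_b.argmax()))

Thinning out or blurring W in this sketch would stand in for the sparse, lower-resolution long-range connections just described.
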

That took us to the coffee break.

After the break, Georg Keller started with the receptive field. He recalled Helmholtz's observation from long ago that if you push on your eye, the world appears to move. And a patient whose horizontal eye muscle had been cut saw the world fly opposite to the direction he willed the eye to move. Even Sherrington called this a "muscular sense", so there is no need to measure the motor output; this became the basis for the whole subsequent feedforward view of processing.

In Marr's representation theory, the response is R = v(stim).

Predictive processing goes a step further and postulates that R = v(stim) + p(internal state, i.e. intention).

He then summarized a remarkable experiment showing that the responses of a neuron in a freely viewing monkey, with only a fixation point and a stimulus present (the rest being a blank white background), could not be used to recover the cell's fixation-based receptive field. This obscure paper from 1995 by Livingstone, Freeman and Hubel (CSHRL) has not been replicated, perhaps because it was so weird or because of the obscure source.

So the measured response can be either
R = V - P or R = P - V,
i.e. there is either more stimulus than predicted or less than predicted.

Now suppose you have areas V1 and M1: M1 generates a prediction of the visual feedback given a movement. This requires a very complex nonlinear coordinate transform from motor to sensory coordinates. E.g., if you hold a pen in front of you and move it to the left, the prediction from these muscle commands is that the visual image should move left. In turn the visual system generates the error signal, i.e. the pen is where it should be or it is not where it should be.
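
As a very schematic sketch of that loop (my own toy version, not Georg's model: the coordinate transform is a made-up fixed matrix, and the "more/less than predicted" split from above is implemented as two rectified error channels):

    import numpy as np

    rng = np.random.default_rng(1)

    def motor_to_visual(motor_cmd, T):
        """Stand-in for the learned nonlinear motor-to-sensory coordinate transform."""
        return np.tanh(T @ motor_cmd)

    T = rng.normal(size=(8, 3))               # illustrative transform; the brain has to learn this

    motor_cmd = np.array([0.5, -0.2, 0.1])    # e.g. "move the pen to the left"
    predicted = motor_to_visual(motor_cmd, T)            # P: what M1 predicts V1 will see
    actual = predicted + rng.normal(scale=0.05, size=8)  # V: the world roughly obeys the prediction

    # Two rectified error channels, as in R = V - P and R = P - V above.
    more_than_predicted = np.maximum(actual - predicted, 0.0)
    less_than_predicted = np.maximum(predicted - actual, 0.0)
    print("residual prediction error:", float((more_than_predicted + less_than_predicted).sum()))
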

They also looked for auditory responses in area V1 of the mouse. They see that 90% of the auditory neurons in V1 are driven by vision, so there is still a mystery.

He concluded with a brief summary of recent experiments in which they put mice in a reversed visual environment. They find that as the mice learn to deal with this, the motor-area input to visual cortex gradually flips its direction selectivity.

Finally, Christian Mayr talked about the influence of biology on DNNs. He thinks neuromorphic people and biologists are not doing a good job of influencing deep learning; the DNN people are left to fumble around on their own rather than being intelligently guided by better understanding. He emphasized again that SNNs are mostly dead currently, given current memory technology; they are 3 orders of magnitude less efficient than ANNs computed on current SOA accelerators, because of the point made earlier in the workshop about the nature of DRAM and how these expensive accesses can be efficiently reused in synchronous ANN accelerators.
He then gave a useful summary of some features of the SpiNNaker2 system that is now nearing tapeout, including true random number generation, sparsity, and instantaneous voltage and clock-frequency ramping, which will lead to quite efficient hardware for many types of event-driven simulation.
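
To put rough numbers on the efficiency argument: the back-of-envelope below uses energy figures I am assuming for illustration (they were not quoted in the talk). The point is simply that an off-chip DRAM weight fetch costs far more than a multiply-accumulate, a synchronous ANN accelerator amortizes each fetch over a whole batch, and a naive event-driven SNN fetches the weight again for every spike it processes.

    # Illustrative energy model (assumed numbers, order-of-magnitude only).
    E_MAC  = 1.0       # energy of one multiply-accumulate, arbitrary units
    E_DRAM = 200.0     # energy of one off-chip weight fetch, roughly 100-1000x a MAC

    n_synapses = 1_000_000
    batch = 256                # inputs over which a synchronous accelerator reuses each weight
    spikes_per_synapse = 10    # spike events an event-driven simulation handles per synapse per input

    # Synchronous ANN: fetch each weight once, reuse it across the whole batch.
    ann_energy_per_input = n_synapses * E_DRAM / batch + n_synapses * E_MAC

    # Naive event-driven SNN: fetch the weight again for every spike.
    snn_energy_per_input = spikes_per_synapse * n_synapses * (E_DRAM + E_MAC)

    print(f"SNN / ANN energy ratio ~ {snn_energy_per_input / ann_energy_per_input:.0f}x")

With these assumed numbers the ratio comes out around three orders of magnitude, which is roughly the gap Christian quoted.
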

That led to the lunch break at 12:57 pm, followed by many workgroup meetings in the afternoon and evening.







