This morning after either breakfast or tennis or both, the participants welcomed local students from Sassari brought in by Sergio Solinas to hear workgroup progress reports and to plan the afternoon's demos.
There are 7 planned demos: TBD
Then Moritz Milde MC'ed a presentation of workgroup progress reports.
RETCODE was reported by Tetsu Yagi and Tobi Delbruck. The main goal is to try to implement a form of signal-to-noise control to maximize information flow from the Osaka retina, by controlling parameters of the bipolar cells or the Izhikevich neuron parameters. There is still an open question about how to measure this SNR, perhaps by entropy as explored last summer in Telluride, or by filtering the retina output digitally according to the ideal spatiotemporal filtering properties of the retina electronic circuits.
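One way to make the entropy idea concrete is to measure the entropy of the distribution of output spike counts; this is a hypothetical sketch of that notion, not the measure actually explored in Telluride:

```python
import numpy as np

def spike_count_entropy(spike_counts, n_bins=16):
    """Entropy (bits) of the distribution of per-neuron spike counts.

    A hypothetical stand-in for an 'entropy' SNR measure: higher
    entropy suggests the output carries more information, assuming
    noise is not the dominant source of variability.
    """
    hist, _ = np.histogram(spike_counts, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

# Toy comparison: a broad spread of counts has more entropy than a
# nearly constant one.
rng = np.random.default_rng(0)
broad = rng.integers(0, 100, size=1000)                      # varied responses
narrow = np.full(1000, 50) + rng.integers(0, 2, size=1000)   # nearly flat
print(spike_count_entropy(broad) > spike_count_entropy(narrow))  # True
```

A real measure would have to condition on the stimulus to separate signal entropy from noise entropy, which is where the open question lies.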
Charlotte Frenkel (ICTEAM Institute, Université catholique de Louvain) showed progress with ODIN, the online-learning digital neuromorphic chip.
Alpha Renner (INI, UZH) showed results from WTAB, in particular how a WTA network with local inhibition can result in stable local bumps of activity. Rodney suggested using the pointer architecture to steer the activity. They are trying to implement it on DYNAP and also in Brian and in continuous differential-equation simulations.
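The bump idea can be illustrated with a minimal rate-based ring network: local excitation plus broad inhibition lets a cue-seeded bump of activity persist after the cue is removed. This is a sketch of the principle with illustrative parameters, not the DYNAP or Brian implementation:

```python
import numpy as np

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
diff = np.abs(theta[:, None] - theta[None, :])
dist = np.minimum(diff, 2 * np.pi - diff)        # distance on the ring
W = 0.4 * np.exp(-(dist / 0.4) ** 2) - 0.1       # local excite, global inhibit

r = np.zeros(N)
cue = 0.5 * np.exp(-((theta - np.pi) / 0.3) ** 2)  # transient cue at pi
for t in range(600):
    inp = W @ r + (cue if t < 100 else 0.0)        # cue removed after t=100
    r += 0.1 * (-r + np.clip(inp, 0.0, 1.0))       # Euler step, saturating rate

# After the cue is gone, a localized bump of activity remains near pi.
print(theta[np.argmax(r)])  # near pi
```

The global inhibitory term is what keeps the bump from spreading over the whole ring, which is the WTA part of the story.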
Yulia Sandamirskaya reported that the NSLAM group was so far held back mostly by struggles with understanding the software. Map formation is the main goal.
Alan Stokes (Manchester) reported about the group SPINN (SpiNNaker crash course); people are starting to use it, but are still working to install it and to learn the software framework. Alan pointed out that installing software before the workshop would save days of workshop time. Yulia pointed out that there are many things occurring in parallel and it is hard to find workshop time to work on this workgroup. PyNN users have it easy, but others have a much harder time. Still, even kindergarten progress is useful to take home.
There was a discussion about the incompatibility of different file formats, e.g. PyNN and AEDAT, and the strengths and weaknesses of different containers: bag files used in ROS, HDF5 (and the fact that its files are easily corrupted), and YARP used with the iCub developments at IIT.
Jamie Knight (Sussex) reported about GPUEYE. They brought small wheeled robots with GPUs that allow rather fast simulation of SNNs. They plan to collect driving data under variable lighting and to try to train simple SNNs, along with cost functions for training. They have an 88-neuron path integration network, and they hope to make it even smaller to enable running it on BS2 32x32-neuron networks. They are also teaching how to use their robots with Brian. They hope to develop an analog network for insect localization that maintains retinotopy and invariance. The usual pixel-difference cost functions blur out large features too much for indoor use, so the correct cost functions are still unclear.
Dongchen Liang (INI, UZH) reported about the NOISE group, which is trying to filter out DVS spike noise with DYNAP. He reported how the simple idea of just setting an adjustable threshold for DYNAP neurons doesn't work well because of DYNAP mismatch. Another approach is to use WTA inhibition to let through only strong correlated activity while filtering out weak uncorrelated events. Carsten Nielson reported that the FPGA already does a correlation filter, but somehow the DVS-to-DYNAP path is still not working very well; why was not clear.
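The correlation-filter idea rests on the observation that real signal is spatiotemporally correlated while noise events are isolated. A simple software sketch of this kind of background-activity filter (an illustration of the idea, not the FPGA implementation) passes an event only if a neighboring pixel fired recently:

```python
import numpy as np

def background_activity_filter(events, width, height, dt_us=10000):
    """Pass a DVS event only if a pixel in its 8-neighborhood fired
    within the last dt_us microseconds.

    `events` is a list of (timestamp_us, x, y) tuples in time order.
    """
    last_ts = np.full((height + 2, width + 2), -np.inf)  # padded border
    kept = []
    for t, x, y in events:
        # 3x3 neighborhood around (x, y) in padded coordinates,
        # excluding the pixel itself
        patch = last_ts[y:y + 3, x:x + 3].copy()
        patch[1, 1] = -np.inf
        if t - patch.max() <= dt_us:
            kept.append((t, x, y))
        last_ts[y + 1, x + 1] = t
    return kept

# Two correlated events pass the second one; an isolated event is dropped.
events = [(0, 10, 10), (100, 11, 10), (5000, 40, 40)]
print(background_activity_filter(events, 128, 128))  # [(100, 11, 10)]
```

Doing the same thing with DYNAP neurons is harder precisely because each neuron's effective threshold varies with mismatch, which was Dongchen's point.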
Dmitrii Zendrikov (INI, UZH) reported about the RNGOAL relational network group. They are trying to get a network of three populations of cells, representing two eye positions and a head position, to encode gaze direction. They got a simulation working, and are now mapping it onto DYNAP and getting it balanced. Rodney asked about the combinatorics problem. Dmitrii answered that they use 1D retinas and a 2D population. Sepp pointed out that there is a Sophie Denève paper from about 10 years ago addressing this problem. RNGOAL is working towards a basic implementation of this architecture on DYNAP. Matthew pointed out that the theory is very old, but the goal is to implement it on DYNAP and to learn basic principles of making such systems work on analog mixed-signal hardware.
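The relational idea can be sketched in a few lines: with gaze = head + eye, two 1D population codes drive a 2D relational population, and summing the 2D population along diagonals of constant i + j reads out the sum. This is a plain NumPy illustration of the principle, not the DYNAP network:

```python
import numpy as np

N = 32

def encode(value, n=N, sigma=1.5):
    """Gaussian population code peaked at `value` over units 0..n-1."""
    idx = np.arange(n)
    return np.exp(-((idx - value) / sigma) ** 2)

head, eye = 10, 7
relation = np.outer(encode(head), encode(eye))  # 2D relational population

# Read out gaze by summing cells with i + j = s (the diagonals).
gaze = np.array([
    sum(relation[i, s - i] for i in range(N) if 0 <= s - i < N)
    for s in range(2 * N - 1)
])
print(np.argmax(gaze))  # peaks at head + eye = 17
```

The appeal of the 2D population is that the same structure answers any "given two, find the third" query: clamping the gaze and head populations instead recovers the eye position.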
Next came BrainScaleS2 (BRSC2). Sebastian Billaudelle (Heidelberg) talked about structural plasticity goals and setup. They can set an address for each synapse determining which address it listens to, thus supplying a 3rd dimension. Each neuron can have 32 presynaptic partners, but only out of 32x64 possible inputs. They came up with a specific scenario where they have a row of teacher neurons at the top and want to learn a diagonal set of weights, as shown in the sketch below. They are using STDP, pruning, and homeostasis to try to learn the connections. They will demo it this afternoon. The analog parameters are easy to set, but the compiler is really a hack now and painful to use.
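The rewiring loop can be caricatured in software: each postsynaptic neuron has only a few synapse slots out of many possible presynaptic partners, and Hebbian potentiation plus pruning and random re-seeding of dead synapses lets each neuron discover its "diagonal" teacher input. All constants here are illustrative, not the BRSC2 mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 16, 4
addr = rng.integers(0, N, size=(N, K))  # which input line each slot hears
w = np.full((N, K), 0.5)

for step in range(6000):
    pre = rng.random(N) < 0.2   # random input spikes
    post = pre.copy()           # hypothetical teacher: post i fires with input i
    for i in range(N):
        active = pre[addr[i]]   # slots whose input line fired this step
        # Hebbian: potentiate coincident pre/post, depress pre-only
        w[i] += 0.05 * (active & post[i]) - 0.02 * active
    w = np.clip(w, 0.0, 1.0)
    dead = w < 0.05             # prune weak synapses...
    addr[dead] = rng.integers(0, N, size=int(dead.sum()))  # ...re-seed elsewhere
    w[dead] = 0.5

# Fraction of neurons with at least one slot wired to their own input line.
frac = float(np.mean([(addr[i] == i).any() for i in range(N)]))
print(frac)
```

The point of the caricature is the division of labor: STDP-like correlation decides which synapses survive, while pruning and re-seeding explore the much larger space of possible addresses.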
After the coffee break, BRSC2 went on with Yannick Stradmann (Heidelberg) and Christian Jarvers (Ulm). Yannick reported about structured neurons, motivated by dendritic tree structures with NMDA and calcium spikes. They want to make use of these compartments for feature classification, because the tree structure provides additional computational power, as described in the arXiv paper on the sketch. To make it work, they need to configure the chip in such a binary tree structure. The branches are labeled at top right in the sketch, and the bottom shows how the two rows of switches can wire this up, using a combination of connection or conductance. In the big chip to come, they will have proximal and distal parts of the tree. While the neuron is below threshold it acts like a LIF neuron, but once it goes above threshold, they can make an NMDA conductance that can trigger plasticity by STDP. They are working on configuring all of this.
Next Christian (who was also really great as a source of a USB mini cable and a spare Arduino for the RoShamBo hand rework) talked about neurorobotics work with BRSC2. He talked about a pantograph arm system that has 2 motors driving 2 rods to control a pointer in 2D like a pen plotter, shown in this sketch; the system outputs photodiode and motor positions in analog and is controlled by spikes:
They do this because the BRSC2 system runs 1000X faster than real time, so this robot "cannot do much, but it is really fast", which the audience liked. He proposed that they could build a super-fast maze follower using the LED and photodiode at the tip of the arm. They can currently achieve about 2-4 ms per cycle but could go faster with FPGA development. They have only 32 neurons on the BRSC2 prototype, and they have made a Google spreadsheet that collects references to tiny networks that might run on it.
Karla Burela (INI, UZH) reported about NSLAM developments. She showed the setup with the omnibot and the head-direction neurons representing 1D landmark position in the robot frame(?). They have a simple object recognition system for DYNAP that can distinguish a sphere from a cone (objects designed to appear the same from all directions).
SSTN was reported by Carsten Nielson (aiCtx and INI, UZH). They are working on how to map networks to the limited input address space of 10 bits, given a larger address space of inputs. So far they have discussed it and have a plan based on graph partitioning.
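The constraint itself is easy to state: a 10-bit input address space means each core can listen to at most 2**10 = 1024 distinct presynaptic sources, so a bigger network must be split so that no core's set of sources exceeds that limit. A naive greedy packer illustrates the constraint (this is not the group's actual partitioning algorithm):

```python
def greedy_partition(fan_in, max_sources=2 ** 10):
    """fan_in: dict mapping target neuron -> set of source neuron ids.
    Greedily pack targets into cores so that each core's union of
    source ids stays within the input address space."""
    cores = []  # each core: [list_of_targets, set_of_sources]
    for tgt, srcs in fan_in.items():
        for core in cores:
            if len(core[1] | srcs) <= max_sources:
                core[0].append(tgt)
                core[1] |= srcs
                break
        else:  # no existing core has room: open a new one
            cores.append([[tgt], set(srcs)])
    return cores

# Toy network: 6 targets with overlapping 600-source receptive fields.
fan_in = {t: set(range(t * 400, t * 400 + 600)) for t in range(6)}
cores = greedy_partition(fan_in)
print(len(cores))  # 3 cores, each within the 1024-source limit
```

Because neighboring targets share most of their sources, packing them together wastes little address space; a real graph-partitioning approach would minimize exactly this kind of cross-core source duplication.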
BIOPLAS was reported by Melika Payvand (INI, UZH). They have a dataset of EMG signals from EPFL and worked on using SNNs to classify these movement signals. They decided to record their own dataset and, inspired by RoShamBo, chose rock-paper-scissors gestures. They have collected the dataset and may make it public. Hopefully by the end they will be able to classify these signals and beat the DVS solution. Camillo has a Python interface to the Myo, which would allow easy use by typical workshop participants. Moritz asked what the challenges for classification are; Melika answered that these are currently not known. Sepp will help with seeing where the information is in the different EMG channels.
TOUCH was reported by Benjamin Ward-Cherrier (Bristol). They are using a DAVIS to report touch from what others have termed a "DVS condom", which allows tracking of the markers on the inside of the flexible rubber cover at high bandwidth. The markers are strongly coupled, and the aim is to maximize the precision and speed of the update, given the physical constraints, using the DVS events, which are bootstrapped by a DAVIS frame. They are aiming to put this in a learning-to-learn framework for the marker correlations and physical constraints, maybe with some spring model where the spring parameters could be learned.
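The spring-model idea can be sketched simply: marker positions updated by DVS events are pulled back by springs to their neighbors, so a noisy single-marker update cannot violate the physical coupling. Here the spring constants and rest lengths are fixed by hand; the group's aim would be to learn them:

```python
import numpy as np

def relax(markers, edges, rest_len, k=0.1, iters=50):
    """Iteratively pull connected markers toward their rest lengths."""
    m = markers.astype(float)  # work on a copy
    for _ in range(iters):
        for (i, j), L in zip(edges, rest_len):
            d = m[j] - m[i]
            dist = np.linalg.norm(d)
            if dist == 0.0:
                continue
            corr = k * (dist - L) * d / dist  # force along the spring
            m[i] += corr
            m[j] -= corr
    return m

# Three markers in a chain; the middle one was displaced by a noisy update.
markers = np.array([[0.0, 0.0], [1.0, 0.8], [2.0, 0.0]])
edges = [(0, 1), (1, 2)]
rest = [1.0, 1.0]
relaxed = relax(markers, edges, rest)
```

After relaxation both springs return to their rest lengths and the displaced middle marker is pulled back toward the chain, which is exactly the constraint the event-driven tracker would exploit.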
Jacques Kaiser (FZI Forschungszentrum Informatik) reported on the HBP neurorobotics platform. They met yesterday for the first time, and participants got the platform software that lets them run simulations in NEST. Thanks to Alan Stokes, they could run the network on a SpiNNaker platform yesterday for the first time. They now have a ROS implementation that allows multi-threaded, asynchronous, bidirectional updates between sensors and PyNN:
NMPM, the neuromorphic power management group, was reported by Johannes Partzsch (Dresden). The aim is to dynamically control the clock frequencies and supply voltages of the SpiNNaker2 processing cores to save lots of power. They can run at 500 MHz at 1 V, or 125 MHz at 0.7 V. In the low-power state they burn only half the energy for the same task (drawing only about 15% of the power, but taking 4X longer). How can this flexibility be used? The goal is a simple project: divide the image into parts and run only the part of the image following the object of interest at high frequency.
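These numbers check out on the back of an envelope, assuming dynamic power scales as P ∝ f·V² (the usual CMOS approximation):

```python
# DVFS arithmetic for the two SpiNNaker2 operating points above.
f_hi, v_hi = 500e6, 1.0   # high-performance point: 500 MHz at 1 V
f_lo, v_lo = 125e6, 0.7   # low-power point: 125 MHz at 0.7 V

power_ratio = (f_lo * v_lo**2) / (f_hi * v_hi**2)  # P ~ f * V^2
time_ratio = f_hi / f_lo                           # same task takes 4x longer
energy_ratio = power_ratio * time_ratio            # energy = power * time

print(f"power:  {power_ratio:.0%}")   # 12%
print(f"energy: {energy_ratio:.0%}")  # 49%: about half the energy
```

The simple f·V² model gives ~12% power, close to the reported ~15% (leakage and other static contributions would push the measured number up), and the roughly-half energy figure follows from the 4X longer runtime.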
That concluded the workgroup progress reports. There was a break for lunch, with live demos to be shown at 14:00 in the disco lab room.