Caltech CNS origins (Hopfield), the edge-inference memory-energy gap (Cong group, UCLA), and NullHop


From Monday's session, some links:
  1. John Hopfield's personal history of forming Caltech's Computation and Neural Systems (CNS) program: https://drive.google.com/open?id=0BzvXOhBHjRheQmNPcUJuVzB4dHM
  2. Very recent paper on the widening gap between memory and compute for DNN "edge" inference, with useful trend plots: https://drive.google.com/open?id=1DvlRMU-xIMhIOLzx2uozkIwH5oxrmhLU. Xu, X., Ding, Y., Hu, S. X., Niemier, M., Cong, J., Hu, Y., et al. (2018). Scaling for edge inference of deep neural networks. Nature Electronics 1, 216–222. doi:10.1038/s41928-018-0059-3.
  3. Our paper on the NullHop CNN accelerator, accepted to IEEE TNNLS: https://drive.google.com/open?id=10FdVx-VRZ4Q26st0vVQz8k-Mw95f2hCR. Aimar, A., Mostafa, H., Calabrese, E., Rios-Navarro, A., Tapiador-Morales, R., Lungu, I.-A., et al. (2018). NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), accepted. A toy sketch of the zero-skipping idea follows below.
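
NullHop's memory saving comes from never storing the zero activations that ReLU layers produce: each feature map is kept as a binary sparsity map (one bit per activation) plus a packed list of the non-zero values. Here is a minimal Python sketch of that style of encoding; it is illustrative only, and the names encode_sparse/decode_sparse are ours, not the paper's.

  import numpy as np

  def encode_sparse(feature_map):
      """NullHop-style zero-skipping encoding (illustrative sketch).

      Keeps one mask bit per activation plus a packed list of the
      non-zero values, so the many zeros a ReLU layer produces cost
      one bit each instead of a full 16-bit word.
      """
      flat = feature_map.ravel()
      sparsity_map = flat != 0             # 1 bit per activation
      nonzero_values = flat[sparsity_map]  # packed non-zero activations
      return sparsity_map, nonzero_values

  def decode_sparse(sparsity_map, nonzero_values, shape):
      """Invert encode_sparse back to the dense feature map."""
      flat = np.zeros(sparsity_map.size, dtype=nonzero_values.dtype)
      flat[sparsity_map] = nonzero_values
      return flat.reshape(shape)

  # Example: a ReLU'd feature map that is roughly 70% zeros.
  fm = np.maximum(np.random.randn(8, 8).astype(np.float16) - 0.5, 0)
  sm, nzv = encode_sparse(fm)
  dense_bits = fm.size * 16
  encoded_bits = sm.size * 1 + nzv.size * 16
  print(f"dense: {dense_bits} bits, encoded: {encoded_bits} bits")
  assert np.array_equal(decode_sparse(sm, nzv, fm.shape), fm)

The saving grows with sparsity: a map that is 70% zeros costs about one third of the dense storage under this scheme, and the accelerator can additionally skip the multiplications for the masked-off zeros.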
