RI Seminar

Guido Bugmann, Associate Professor, University of Plymouth, UK
Friday, October 21
3:30 pm to 4:30 pm
A deep spiking network model for fast stimulus-response association learning

Event Location: NSH 1305
Bio: Guido Bugmann is an Associate Professor (Reader) in Intelligent Systems at the University of Plymouth’s School of Computing and Mathematics, where he develops human-robot dialogue systems and vision-based navigation systems for wheeled and humanoid robots, and investigates the computational properties of biological vision and decision making. He previously worked at the Swiss Federal Institute of Technology in Lausanne, NEC’s Fundamental Research Laboratories in Japan, and King’s College London. He holds three patents and has more than 100 publications. Bugmann studied physics at the University of Geneva and received his PhD in physics from the Swiss Federal Institute of Technology in Lausanne. He is a member of the Swiss Physical Society, the British Machine Vision Association, AISB, and the EPSRC peer review college, and served on the board of EURON (2004-2008).

Abstract: On the basis of instructions, humans are able to set up, within a few seconds, associations between sensory and motor areas of the brain that are separated by several neuronal relays. This paper proposes a model of fast learning in a multilayer network of Leaky Integrate-and-Fire (LIF) spiking neurons, in which cortical feedback connections convey a top-down learning-enabling signal that guides bottom-up learning in “hidden” neurons not directly exposed to input or output activity. A new synaptic learning rule is proposed in which synaptic efficacies converge rapidly towards a specific value determined by the number of active inputs of a neuron, respecting a principle of resource limitation on the total input synaptic efficacy available to a neuron. These efficacies are stable with regard to the repeated arrival of spikes in a spike train, and the rule reproduces the inverse relationship between initial and final synaptic efficacy observed in long-term potentiation (LTP) experiments. Simulations of repeated presentation of the same stimulus-response pair show that, under conditions of fast learning with probabilistic synaptic transmission, learning tends to recruit a new sub-network at each presentation rather than re-use a previously trained one. This increasing allocation of neural resources results in progressively shorter execution times, in line with the experimentally observed reduction in response time with practice.
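For readers unfamiliar with the ingredients, a minimal sketch of the two mechanisms named in the abstract may help: a Leaky Integrate-and-Fire neuron, and a synaptic rule whose efficacies converge toward a value set by the number of active inputs under a fixed total efficacy budget. The function names, parameter values, and the specific relaxation toward `total_budget / n_active` below are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def lif_step(v, i_syn, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a Leaky Integrate-and-Fire neuron.

    Returns (new membrane voltage, spiked). Parameters are
    illustrative, not taken from the paper.
    """
    v = v + (dt / tau) * (v_rest - v + i_syn)
    if v >= v_thresh:
        return v_rest, True   # spike and reset to rest
    return v, False

def update_efficacies(w, active, total_budget=1.0, rate=0.5):
    """Sketch of a resource-limited learning rule (hypothetical).

    Efficacies of the currently active synapses relax toward
    total_budget / n_active, so the summed efficacy of active inputs
    is bounded by the neuron's budget. Because all active synapses
    converge to the same target, a synapse starting with a low
    efficacy changes more than one starting high, mimicking the
    inverse initial/final relationship mentioned in the abstract.
    """
    active = np.asarray(active, dtype=bool)
    n_active = int(active.sum())
    if n_active == 0:
        return w
    target = total_budget / n_active
    w = w.copy()
    w[active] += rate * (target - w[active])  # move toward shared target
    return w
```

For example, with two active synapses starting at efficacies 0.1 and 0.9 and a budget of 1.0, repeated updates drive both toward 0.5, so the initially weak synapse gains the most, while inactive synapses are untouched.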