Lecturer in Computing
Office: 14BB02
Student Office Hours: Monday 14:00-15:00 and Tuesday 10:00-11:00
Research Day: Thursday
Department of Computing, University of Surrey, Guildford, Surrey GU2 7XH, United Kingdom
Phone: +44-1483-68-2648
Fax: +44-1483-68-6051
PGP Public Key: download here
I am a member of the Nature-Inspired Computing and Engineering group within the Dept. of Computing.
My research interests include:
Language comes in discrete chunks (words); however, it is ultimately processed on the neural hardware of the brain, which is analogue in nature (firing rates and spike times take on continuous values). I am therefore interested in how discrete symbolic structures can be represented and processed in continuous neural dynamics.
The hand-coded dynamic representations used in the now classical proofs of Turing equivalence for recurrent neural networks are quite different from the representations that emerge when a recurrent network is actually trained on a particular formal language. This led me to an interest in learning algorithms for artificial neural networks that are both efficient and neurally or cognitively realistic.
Gradient-descent algorithms, for example, are quite powerful, but they are not biologically or cognitively realistic for two main reasons: they are fully supervised, and they use synapses antidromically. For reinforcement learning algorithms it is the opposite: they are cognitively more plausible, and even neurally plausible implementations exist; however, many problems that can be learnt with gradient descent cannot be learnt with reinforcement learning (Grüning 2007).
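As a purely illustrative sketch of this contrast (a toy single-weight task of my own devising, not taken from the papers above), the following compares a fully supervised delta-rule update, which needs the signed error, with a reward-modulated weight-perturbation update that only ever sees a scalar reward:

```python
import random

random.seed(0)

# Toy task: learn a single weight w so that y = w * x matches t = 2 * x.
DATA = [(1.0, 2.0), (2.0, 4.0)]

def train_supervised(lr=0.1, epochs=200):
    """Delta rule: the update uses the signed error (t - y) directly."""
    w = 0.0
    for _ in range(epochs):
        for x, t in DATA:
            w += lr * (t - w * x) * x  # explicit error gradient
    return w

def train_reinforcement(lr=0.02, sigma=0.1, epochs=3000):
    """Weight perturbation: only a scalar reward (negative squared error)
    is available; the gradient is estimated via a random perturbation."""
    w = 0.0
    for _ in range(epochs):
        for x, t in DATA:
            n = random.gauss(0.0, sigma)
            r_base = -(t - w * x) ** 2        # reward without perturbation
            r_pert = -(t - (w + n) * x) ** 2  # reward with perturbation
            # correlate reward change with the perturbation that caused it
            w += lr * (r_pert - r_base) * n / sigma ** 2
    return w
```

Both learners converge towards w = 2 here, but the supervised rule does so far more directly; the reinforcement learner needs many more trials and only ever exploits the correlation between its own noise and the resulting reward.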
I am also interested in cognitive science, and especially in modelling language processing. Collaborators and I are exploring the domain (in)dependence of statistical learning strategies in human subjects, using correlations in visual stimulus sequences that are otherwise typical of language processing.
In the Spring Semester 2013 I will be 0.6 FTE on a MILES mini-sabbatical to do research on robust microbial communities.
PhD, MSc
Artificial neural networks are both biologically inspired models of the nervous system and trainable computational devices. They can be trained with gradient-descent learning algorithms such as back-propagation. However, back-propagation is not biologically realistic because it requires extensive circuitry to calculate error gradients. So-called weight-perturbation methods have been suggested that do without this circuitry: they estimate only an approximation to the true error gradient by making clever use of "noise". "Noise", i.e. random perturbation, is also used in genetic training algorithms, which do not follow a gradient but proceed by evolutionary trial and error.
One aim of this project is to compare the performance of weight-perturbation and genetic algorithms and to develop ideas for their improvement.
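For comparison, here is a minimal, hypothetical sketch of the genetic trial-and-error approach on the same kind of single-weight toy task: a population of candidate weights is evolved by mutation and truncation selection rather than by following a gradient. The task and all parameters are illustrative assumptions, not part of the project specification.

```python
import random

random.seed(1)

# Toy task: find a weight w such that y = w * x is close to t = 2 * x.
DATA = [(1.0, 2.0), (2.0, 4.0)]

def fitness(w):
    """Higher is better: negative summed squared error over the data."""
    return -sum((t - w * x) ** 2 for x, t in DATA)

def evolve(pop_size=20, generations=100, sigma=0.3):
    """Evolve a population of scalar weights by mutation and selection."""
    pop = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection: keep best half
        children = [p + random.gauss(0.0, sigma) for p in parents]  # mutation
        pop = parents + children  # parents survive unchanged (elitism)
    return max(pop, key=fitness)
```

Because the parents are carried over unchanged, the best fitness in the population never decreases; unlike weight perturbation, no gradient estimate is ever formed, only selection over random variants.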
A. Grüning and A. Vinayak PG. The accumulation theory of ageing. Preprint.
I. Sporea and A. Grüning. Supervised learning in multilayer spiking neural networks. Neural Computation, 2012. In Press, Preprint.
A. Grüning and I. Sporea. Supervised learning of logical operations in layered spiking neural networks with spike train encoding. Neural Processing Letters, 36(2), 117--134, 2012. Preprint, ca. 22 pages.
S. Notley and A. Grüning. Improved spike-timed mappings using a tri-phasic spike timing-dependent plasticity rule. In Proceedings of the International Joint Conference on Neural Networks. 2012. Preprint.
J. Chrol-Cannon, A. Grüning and Y. Jin. The emergence of polychronous groups under varying input patterns, plasticity rules and network connectivities. In Proceedings of the International Joint Conference on Neural Networks. 2012. Preprint.
I. Sporea and A. Grüning. Classification of distorted patterns by feed-forward spiking neural networks. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.
N. Yusoff, A. Grüning and S. Notley. Pair-associate learning with modulated spike-time dependent plasticity. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.
P. Ioannou, M. Casey and A. Grüning. Evaluating the effect of spiking network parameters on polychronization. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.
N. Yusoff and A. Grüning. Biologically inspired sequence learning. In International Symposium on Robotics and Intelligent Sensors (IRIS), Procedia Engineering. Elsevier, 2012. Preprint.
N. Yusoff and A. Grüning. Learning anticipation through priming in spatio-temporal neural networks. In Proceedings of the ICONIP 2012, vol. Part I, 7663 of Lecture Notes in Computer Science, pp. 168 sqq. 2012. Preprint.
I. Sporea and A. Grüning. Reference time in SpikeProp. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE, San Jose, CA, August 2011. Preprint.
N. Yusoff, I. Sporea and A. Grüning. Neural networks in cognitive science -- an introduction. In P. Lio and D. Verma (eds.), Biologically Inspired Networking and Sensing: Algorithms and Architectures. IGI Global, Hershey, PA, 2011.
N. Yusoff and A. Grüning. Supervised associative learning in spiking neural network. In K. Diamantaras, W. Duch and L. Iliadis (eds.), ICANN (1), vol. 6352 of Lecture Notes in Computer Science, pp. 224--229. Springer, 2010.
I. Sporea and A. Grüning. Modelling the McGurk effect. In ESANN 2010 proceedings, European Symposium on Artificial Neural Networks - Computational Intelligence and Machine Learning. Brugge, 2010. Preprint.
I. Sporea and A. Grüning. A distributed model of memory for the McGurk effect. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE, Barcelona, 2010. Preprint.
N. Yusoff, A. Grüning and A. Browne. Modelling the Stroop Effect: Dynamics in inhibition of automatic stimuli processing. In Proceedings of the 2nd International Conference in Cognitive Neurodynamics (ICCN 2009), Lecture Notes in Computer Science. Springer, 2009. Preprint.
I. Sporea and A. Grüning. Modelling of the McGurk effect. In Frontiers in Behavioral Neuroscience. Conference Abstract: 41st European Brain and Behaviour Society Meeting. 2009.
N. Yusoff, A. Grüning and T. Browne. Competition and cooperation in colour-word Stroop Effect: An association approach. In Frontiers in Behavioral Neuroscience. Conference Abstract: 41st European Brain and Behaviour Society Meeting. 2009.
A. Grüning. Elman backpropagation as reinforcement for simple recurrent networks. Neural Computation, 19(11), 3108--3131, 2007, Preprint.
A. Grüning. Stack- and queue-like dynamics in recurrent neural networks. Connection Science, 18(1), 23--42, 2006, Preprint.
A. Grüning and A. Treves. Distributed neural blackboards could be more attractive. Behavioral and Brain Sciences, 29(1), 79--80, 2006, Preprint.
A. Grüning. Back-propagation as reinforcement in prediction tasks. In W. Duch, J. Kacprzyk, E. Oja and S. Zadrozny (eds.), Proceedings of the International Conference on Artificial Neural Networks (ICANN'05), vol. 3697 of LNCS, pp. 547--552. Springer, Berlin, Heidelberg, 2005. Preprint.
A. Grüning. Dynamic representations of stack- and queue-like syntactic structures. In A. Cangelosi, G. Bugmann and R. Borisyuk (eds.), Proceedings of the Ninth Neural Computation and Psychology Workshop Modelling Language, Cognition, and Action (NCPW9). World Scientific, New Jersey, 2005.
A. Grüning and A. A. Kibrik. Modeling referential choice in discourse: A cognitive calculative approach and a neural network approach. In Antonio Branco, T. McEnery and R. Mitkov (eds.), Anaphora Processing: Linguistic, Cognitive and Computational Modelling. John Benjamins, Amsterdam, 2004. Preprint.
A. Grüning and A. A. Kibrik. A neural network approach to referential choice. In I. Kobozeva, N. Laufer and V. Selegey (eds.), Computational Linguistics and Intellectual Technologies -- Proceedings of the Dialogue 2003 International Conference, Protvino, pp. 260--266. Nauka, Moscow, 2003.
A. Grüning and A. A. Kibrik. Referential choice and activation factors: A neural network approach. In A. Branco, T. McEnery and R. Mitkov (eds.), Proceedings of the 4th Discourse Anaphora and Anaphor Resolution Colloqium (DAARC 2002). Edições Colibri, Lisbon, 2002. Preprint.
A. Grüning. Neural Networks and the Complexity of Languages. Doctoral dissertation, School of Mathematics and Computer Science, University of Leipzig, 2004. Abstract.
A. Grüning. Ladungssektoren positiver Energie im Hochenergielimes des Schwinger-Modells [Charge sectors of positive energy in the high energy limit of the Schwinger model]. Diploma (Master's) thesis, Institute for Theoretical Physics, University of Göttingen, 1999.