We don’t so much see space as feel it. This insight follows from the 1971 experiments of John O’Keefe and Jonathan Dostrovsky, who found place cells in a rat’s hippocampus, far from the visual cortex. These cells fired whenever the rat was at certain places in the experimental box. Their discovery inspired many new experiments on how we navigate in space.
I’m interested in how the human mind flexibly processes spatial relations and complex environments, and how we can address challenging questions here using virtual reality and machine learning. We study navigation ability at several levels of cognitive processing: from behaviour, through the activity of brain areas, down to the spiking of single neurons in deeper brain structures. I’m fortunate to collaborate with researchers from BME, ELTE, Technion (Haifa, Israel) and UT Austin (Texas, USA).
Our perception of the world is essentially multimodal. For example, when someone drives a car, it is tempting to assume that they perceive the world through the eyes alone, but this is not true. The driver not only hears noises from inside and outside the car, but also feels the speed and direction of movement through the body. Therefore, to know how fast and how accurately we can react in a driving emergency, we have to consider all the information that reaches our senses.
I mostly use virtual reality to simulate realistic situations in which multisensory integration can be studied. This approach is interesting not only because it can provide new insights into human behaviour in natural scenarios, but also because it can inform the development of virtual reality interfaces. In VR environments we experience a paradoxical situation. On the one hand, no matter how realistic the scene is, we know it is virtual; on the other hand, even modest virtual scenes are capable of evoking striking experiences, e.g. vertigo induced by simply rocking the horizon. It is still an open question at which level of cognitive processing we perceive virtual reality as real, and at which level our brain knows that it is not. I’m currently collaborating with researchers from Royal Holloway University of London (UK), Aix-Marseille University (France), UCL (London, UK), and Carl von Ossietzky University (Oldenburg, Germany).
During my work at Synetiq, I build statistical machine learning models to understand what makes people like an advert and how we can predict the success of an ad. This field is especially interesting to me because it provides a great opportunity to bring scientific results into business decision-making. It is also a great challenge, since the use of emotions in marketing research is a relatively new direction, and so everything we do, be it deep learning or unsupervised methods, is breaking new ground.
In applied research, the emphasis is on repeatable, robust, and meaningful experiments, and on communicating often complex models in an understandable manner. This focus has taught me important lessons that I can also apply when answering exploratory research questions.
The prosodic structure
Our ancestors spoke. They did not write or read, but spoke, with an evolving grammar and prosody, to express their thoughts, emotions and beliefs. And since our thoughts are sometimes complicated, expressing them can be hard too: speech needs structure. At its most basic, structure means knowing which words belong together and which words carry the point of the conversation. In prosody these two structural aims are served by two abstract representations: the prosodic boundary and prominence. In my research I look for the characteristics of how the brain perceives utterances in the real world. It is important to note that under normal circumstances we seldom produce all the necessary markers of these elements “well”, yet we understand each other almost perfectly.