Associate Professor John A. Perrone
My research fits within the general area of visual perception and visual neuroscience. My PhD thesis focused on visual slant perception, and I developed a model of human slant misperception [1,2]. As a Postdoctoral Fellow at NASA Ames Research Center (U.S.A.), I applied my surface slant model to the problem of understanding the visual information used by pilots during the approach and landing phase of flight. I was able to successfully account for some of the landing approach errors made by pilots during night landings.
During my time at NASA Ames Research Center and Stanford University (U.S.A.), I shifted my research focus from 'static' perception to the perception of visual motion. At the time, the Vision group at NASA Ames was one of the strongest internationally in motion perception research, and it particularly excelled in the development of computer models of visual motion perception. I was able to learn these 'new' motion-modelling techniques from the NASA researchers and soon published the first physiologically plausible model of human self-motion estimation. In collaboration with Dr Lee Stone, the model was extended to include the role of eye movements. Knowing how humans and animals use vision to navigate through the world (self-motion) is an important topic within psychology and in the general field of neuroscience. It also has many important practical applications (e.g., robotics, aerospace and driving research). An advantage of our self-motion estimation model over competing models is that it is closely tied to the known properties of neurons in the primate visual cortex (V1, primary visual cortex; MT, Middle Temporal area; MST, Medial Superior Temporal area).
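A textbook illustration of the information such self-motion models work from: when an observer translates forward without rotating, the retinal flow field radiates outward from a "focus of expansion" (FOE) whose image position corresponds to the heading direction. The sketch below is illustrative only (it is not the published model, and the function name and values are hypothetical), assuming a pinhole projection with unit focal length.

```python
import numpy as np

def translation_flow(points, depths, T):
    """Image velocity of 3D points under observer translation T = (Tx, Ty, Tz).

    Pinhole projection with focal length 1; `points` holds (x, y) image
    coordinates and `depths` the corresponding scene depths.
    """
    x, y = points[:, 0], points[:, 1]
    Tx, Ty, Tz = T
    u = (-Tx + x * Tz) / depths
    v = (-Ty + y * Tz) / depths
    return np.stack([u, v], axis=1)

# Random scene points and depths (arbitrary illustrative values).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 2))
depths = rng.uniform(2, 10, size=100)

T = (0.2, 0.1, 1.0)  # forward translation with a slight rightward/upward drift
flow = translation_flow(pts, depths, T)

# The flow vanishes at the FOE, so heading can be read out from the image:
foe = (T[0] / T[2], T[1] / T[2])
print("heading (FOE):", foe)
```

Eye rotations add a second, depth-independent flow component on top of this radial pattern, which is what makes heading recovery from raw retinal motion a non-trivial problem for the visual system.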
After the publication of our self-motion estimation model, there was considerable debate as to whether neurons in a particular area of the brain (MST) could do what we had proposed. We carried out tests of our model and verified that the neuron properties were consistent with the model mechanisms.
There was still some debate, however, about the properties of neurons in another area of the brain (MT) that we had also incorporated into our self-motion estimation model. Prior to our work, it was an open question whether MT neurons are truly speed tuned (i.e., do they respond selectively to a particular rate of image motion independently of the spatial structure of the stimulus?). In collaboration with Dr. Alex Thiele (then at the Salk Institute for Biological Studies, U.S.A.), we showed for the first time that MT neurons are indeed speed tuned.
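To make the idea of speed tuning concrete: for a drifting sinusoidal grating, speed equals temporal frequency (TF) divided by spatial frequency (SF). A speed-tuned neuron responds whenever TF/SF matches its preferred speed, whatever the SF; a neuron tuned to TF alone does not. The sketch below is illustrative only (the Gaussian log-speed tuning function and its parameters are assumptions, not the recorded cell properties).

```python
import numpy as np

def speed_tuned_response(sf, tf, preferred_speed=8.0, bandwidth=0.5):
    """Idealised speed-tuned response: Gaussian tuning in log speed.

    The response depends only on the ratio tf/sf (the grating speed),
    not on the spatial frequency itself.
    """
    speed = tf / sf
    return np.exp(-(np.log2(speed / preferred_speed) ** 2) / (2 * bandwidth ** 2))

# At several spatial frequencies, the peak response always occurs where
# TF/SF equals the preferred speed (8 deg/s in this sketch).
tfs = np.linspace(0.5, 32.0, 200)
for sf in [0.5, 1.0, 2.0]:
    best_tf = tfs[np.argmax(speed_tuned_response(sf, tfs))]
    print(f"SF={sf} c/deg -> peak at TF={best_tf:.1f} Hz "
          f"(speed ~ {best_tf / sf:.1f} deg/s)")
```

A purely TF-tuned (V1-like) cell would instead peak at the same TF regardless of SF, so its preferred speed would shift with spatial frequency.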
I then went on to develop a model of how neurons within area MT could develop speed tuning [8, 9, 10]. This model of the V1-MT stage has an advantage over many competing models because it can be tested with moving pictures and with stimuli that exactly match those used in primate striate and extrastriate cell recordings. It also forms the final phase of my overall plan to develop a general model of visual motion processing in the brain. When completed, it will be a powerful tool for understanding many aspects of motion perception.
One of the first applications of the model will be to the age-old question of why the world appears still when we move our eyes. This is the topic of a Royal Society of New Zealand Marsden grant funded in 2006, 'Vector addition in the brain: Why the world stays still when we move our eyes', with Professor Rich Krauzlis (the Salk Institute for Biological Studies, U.S.A.) as co-PI.
The Marsden project will draw on our previous experience with motion models and eye-movement research to understand how humans maintain the perception of a stable visual world despite constantly moving their eyes. During these eye movements, the resulting retinal image motion is ambiguous because it could represent movement of the world, movement of the observer, or a combination of both. Despite this ambiguous input, our brains somehow solve this 'eye rotation problem' and correctly construct the perception of a stable world. We have discovered a mechanism (a type of vector addition) that our visual system could potentially use to cancel the effect of eye movements. Using a combination of computer modelling and psychophysical methods, we aim to find evidence for this cancellation mechanism; if verified, this would be a significant breakthrough on the long-standing question of why eye movements do not cause apparent movement of the world. A solution would also provide fundamental insights into how our brains work and how the brain combines visual and motor signals. This knowledge could also be used to design better artificial-vision systems for robots.
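The cancellation idea can be reduced to a toy calculation. The sketch below is a minimal illustration (the numbers are invented, and a perfect internal estimate of the eye movement is assumed): retinal motion is the sum of true scene motion and the flow induced by the eye rotation, so adding back an internal eye-movement signal (e.g., an efference copy) recovers the scene motion.

```python
import numpy as np

# Hypothetical 2D image velocities in deg/s (illustrative values only).
scene_motion = np.array([2.0, 0.0])   # an object actually moving rightward
eye_flow = np.array([-3.0, 1.0])      # image flow induced by an eye rotation

# What the retina sees is the sum of the two: ambiguous on its own,
# since many (scene, eye) combinations produce the same retinal motion.
retinal_motion = scene_motion + eye_flow

# Cancellation by vector addition: combine the retinal signal with an
# internal estimate of the eye-movement flow (here assumed perfect).
estimated_eye_flow = eye_flow
recovered_scene_motion = retinal_motion - estimated_eye_flow

print("retinal:", retinal_motion)
print("recovered scene motion:", recovered_scene_motion)
```

In reality the internal eye-movement estimate is noisy and the combination happens across populations of neurons, which is precisely what makes the psychophysical and modelling tests of the mechanism non-trivial.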