Associate Professor John A. Perrone

Research: Human Visual Navigation

Modelling aspects of human self-motion estimation.
Some of this research was conducted in collaboration with Dr. Leland S. Stone of the Flight Management and Human Factors Division, NASA Ames Research Center, Moffett Field, CA, U.S.A.


The successful manoeuvring of a person or vehicle through a cluttered environment requires information about possible obstacles (the layout of the scene) as well as the instantaneous motion of the observer or craft (heading direction and rotation). The latter 'self-motion' or egomotion information is needed to decide whether corrective motor inputs are required for collision avoidance or for a change in the desired direction of travel. This navigational ability underlies many aspects of human behaviour (walking, running, driving, flying) as well as many machine-based applications (autonomous vehicles, robotics).

The inputs providing self-motion feedback for these navigational skills are many and varied. They can be visual, vestibular, proprioceptive, motor-corollary or cognitive, although vision appears to dominate (see Henn, Cohen & Young, 1980, for a review). The visual component of self-motion perception was first investigated by Gibson (1950, 1966) and has since received much attention (see Heeger & Jepson, 1992; Warren, Morris & Kalish, 1988 for reviews). Efforts have mainly concentrated on the two-dimensional visual motion cues (the retinal flow field) that can be used to extract self-motion and environmental layout information. Psychophysical experiments have demonstrated that heading information, at least, can be extracted purely from visual motion inputs (see Warren & Hannon, 1990; Stone & Perrone, 1997), although the role of eye movements remains an open issue (Royden, Banks & Crowell, 1992).
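For concreteness, the retinal flow field discussed above can be written down explicitly. Under one standard pinhole-camera convention (Longuet-Higgins & Prazdny, 1980; the same geometry underlies treatments such as Heeger & Jepson, 1992), an image point $(x, y)$ viewing a scene point at depth $Z$ moves with velocity

\[
\dot{x} = \frac{-f T_x + x T_z}{Z} + \Omega_x \frac{xy}{f} - \Omega_y \left( f + \frac{x^2}{f} \right) + \Omega_z y, \qquad
\dot{y} = \frac{-f T_y + y T_z}{Z} + \Omega_x \left( f + \frac{y^2}{f} \right) - \Omega_y \frac{xy}{f} - \Omega_z x,
\]

where $f$ is the focal length, $T = (T_x, T_y, T_z)$ is the observer's translation and $\Omega = (\Omega_x, \Omega_y, \Omega_z)$ the rotation (e.g., from an eye or head movement). Note that the translational terms are scaled by the unknown inverse depth $1/Z$ while the rotational terms are not; this is precisely why eye movements complicate the recovery of heading from the flow field.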

We developed a template model that accounts for many of the physiological and psychophysical aspects of visual self-motion estimation (Perrone, 1992; Perrone & Stone, 1994, 1998). It uses networks of direction- and speed-tuned input sensors, similar to neurons in area MT of primate visual cortex, to form detectors tuned to particular heading and rotation combinations. The approach relies on the speed and direction tuning of the MT-like sensors rather than on direct readouts of image velocity vectors (Perrone, 2001). The resulting detectors have response properties similar to those of neurons found in area MST, the putative processing area for self-motion estimation (e.g., Kawano et al., 1984; Saito et al., 1986; Duffy & Wurtz, 1991, 1996). We have successfully used the template model to emulate many of the receptive field properties of MST neurons (Perrone & Stone, 1998).
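To make the template idea concrete, here is a minimal numerical sketch (written for this page, not the actual Perrone & Stone implementation: the function names, the Gaussian direction tuning, the direction-only sensors and all parameter values are illustrative assumptions). Each candidate heading detector sums the responses of sensors that prefer the flow direction that heading would produce at their image location; for pure observer translation that direction is independent of scene depth, so unit depths suffice when building the templates.

import numpy as np

def flow_field(pts, Z, T, Omega, f=1.0):
    # Retinal flow at image points pts (N x 2) with scene depths Z (N,),
    # observer translation T and rotation Omega; pinhole camera with
    # focal length f (Longuet-Higgins & Prazdny convention).
    x, y = pts[:, 0], pts[:, 1]
    vx = (-f * T[0] + x * T[2]) / Z + Omega[0] * x * y / f \
         - Omega[1] * (f + x**2 / f) + Omega[2] * y
    vy = (-f * T[1] + y * T[2]) / Z + Omega[0] * (f + y**2 / f) \
         - Omega[1] * x * y / f - Omega[2] * x
    return np.stack([vx, vy], axis=1)

def template_response(flow, pts, T_cand, Omega_cand, f=1.0, sigma=0.4):
    # One heading/rotation detector: direction-tuned sensors at each image
    # point prefer the flow direction the candidate self-motion predicts
    # there (unit depths; direction is depth-independent for translation).
    pred = flow_field(pts, np.ones(len(pts)), T_cand, Omega_cand, f)
    err = np.arctan2(flow[:, 1], flow[:, 0]) - np.arctan2(pred[:, 1], pred[:, 0])
    err = np.angle(np.exp(1j * err))                  # wrap to [-pi, pi]
    return np.exp(-err**2 / (2 * sigma**2)).sum()     # Gaussian direction tuning

# Demo: recover a pure-translation heading from a noisy flow field.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))               # image locations
Z = rng.uniform(2, 10, size=200)                      # random scene depths
T_true = np.array([0.2, 0.0, 1.0])                    # true heading, 0.2 rightward
flow = flow_field(pts, Z, T_true, np.zeros(3))
flow += 0.01 * rng.standard_normal(flow.shape)        # sensor noise

headings = np.linspace(-0.5, 0.5, 41)                 # candidate detectors
resp = [template_response(flow, pts, np.array([h, 0.0, 1.0]), np.zeros(3))
        for h in headings]
print("estimated heading (Tx/Tz):", headings[int(np.argmax(resp))])

The winner-take-all readout at the end is the simplest possible decision stage; the published model instead works with a population of such detectors covering many heading and rotation combinations (Perrone & Stone, 1994).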

The model has now been extended to deal with actual two-dimensional image sequences rather than theoretical vector flow fields. A model of two-dimensional motion sensors with properties similar to those found in area MT has been developed and used as a front-end to the template model (see the 2-D motion page). We are now at the stage where heading and scene layout can be extracted from two-dimensional image sequences involving combined translation and rotation of the observer.
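A full MT-like motion-energy front-end is beyond the scope of this page, but the following sketch shows the kind of two-stage pipeline involved, using a simple gradient-based (Lucas-Kanade style) velocity estimator as a stand-in for the MT sensor model (this stand-in, and every name and parameter in it, is an assumption for illustration; the actual front-end is the spatiotemporal-filter model described on the 2-D motion page).

import numpy as np

def local_velocities(frame0, frame1, win=7):
    # Least-squares local image velocity from spatiotemporal gradients
    # (Lucas-Kanade style); a crude stand-in for MT-like motion sensors.
    Iy, Ix = np.gradient(frame0.astype(float))        # d/drow, d/dcol
    It = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    h = win // 2
    H, W = frame0.shape
    pts, vels = [], []
    for r in range(h, H - h, win):
        for c in range(h, W - h, win):
            A = np.stack([Ix[r-h:r+h+1, c-h:c+h+1].ravel(),
                          Iy[r-h:r+h+1, c-h:c+h+1].ravel()], axis=1)
            b = -It[r-h:r+h+1, c-h:c+h+1].ravel()
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e4:             # skip low-texture patches
                pts.append((c, r))
                vels.append(np.linalg.solve(ATA, A.T @ b))
    return np.array(pts, dtype=float), np.array(vels)

After converting the pixel coordinates in pts to camera coordinates, the recovered velocities could be fed to the template_response detectors sketched above, giving a toy version of the image-sequence-to-heading pipeline described in this paragraph.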

References

Perrone, J.A. (1987). Extracting 3-D egomotion information from a 2-D flow field: A biological solution? Optical Society of America Technical Digest Series, 22, 47.

Perrone, J.A. (1989). The perception of surface layout during low-level flight (NASA CP 3118). Washington, DC: National Aeronautics and Space Administration. 63-74.

Perrone, J.A. (1989). In search of the elusive flow field. Workshop on Visual Motion. IEEE Computer Society Press. 181-188.

Perrone, J.A. (1990). Simple technique for optical flow estimation. Journal of the Optical Society of America A, 7, 264-278.

Perrone, J.A. (1992). Model for the computation of self-motion in biological systems. Journal of the Optical Society of America A, 9, 177-194.

Perrone, J.A. & Stone, L.S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. Vision Research, 34, 2917-2938.

Perrone, J.A. (1994). Simulating the speed and direction tuning of MT neurons using spatiotemporal tuned V1-neuron inputs. Investigative Ophthalmology and Visual Science, 38, S481.

Perrone, J.A. (1997). Extracting observer heading and scene layout from image sequences. Investigative Ophthalmology and Visual Science, 35, 2158.

Perrone, J.A. & Stone, L.S. (1998). Emulating the visual receptive field properties of MST neurons with a template model of heading estimation. Journal of Neuroscience, 18, 5958-5975.