Dr. José Ignacio Núñez Varela
Assistant Professor






Gaze Control for Visually Guided Manipulation


Most of our daily tasks require us to act under uncertain and incomplete information. Such tasks can only be accomplished by using sensing to fill in missing information and to reduce task-relevant uncertainty. One such sense is vision, which can actively explore the scene to gather information that helps us interact with our environment [Findlay and Gilchrist 2003. Active Vision. Oxford Univ. Press]. This work addresses two questions:

    1) What mechanisms could a rational decision maker employ to select a gaze location given limited information and limited computation time?

    2) How do humans select the next fixation location?



Previous work has suggested that human eye movement behaviour is consistent with fixation-selection mechanisms that are Bayes-rational [Najemnik and Geisler 2005. Optimal eye movement strategies in visual search. Nature 434, 7031, 387-391], or that attempt to maximise reward [Navalpakkam et al. 2010. Optimal reward harvesting in complex perceptual environments. Proc. National Academy of Sciences 107, 11, 5232-5237].

The aim of our work is to investigate these claims further by examining in detail the formulation and behaviour of three one-step look-ahead models of gaze control that address the problem of fixation selection during manipulation tasks. Our first model chooses the fixation location that maximises the reduction of location uncertainty (Unc). Our second model incorporates task rewards and selects the fixation that maximises the value of performing an action given the reduced location uncertainty (RU). Our third model is similar to RU, but maximises the gain that results when a motor system is given access to perception (RUG).
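To make the distinction between the three schemes concrete, the following Python sketch implements all three selection criteria in a deliberately simplified setting: one-dimensional Gaussian location beliefs, observation noise that grows with retinal eccentricity, and a toy model of reach/grasp success. Every name, model and number here is an illustrative assumption, not the formulation used in our papers.

    # Minimal one-step look-ahead sketch of the Unc, RU and RUG schemes.
    # All models and numbers are illustrative assumptions, not the papers'.
    import numpy as np

    def posterior_var(prior_var, obs_var):
        # Variance of a 1-D Gaussian belief after fusing one noisy observation.
        return 1.0 / (1.0 / prior_var + 1.0 / obs_var)

    def obs_noise(fixation, target, base_var):
        # Observation noise grows with retinal eccentricity.
        return base_var + 0.5 * (fixation - target) ** 2

    def success_prob(var, sensitivity):
        # Toy probability that a reach/grasp succeeds given location uncertainty.
        return sensitivity / (sensitivity + var)

    def choose_fixation(candidates, targets, prior_vars, rewards, scheme,
                        base_var=0.01, sensitivity=0.05, fov=np.inf):
        # One-step look-ahead: score every candidate fixation, take the argmax.
        best, best_score = None, -np.inf
        for f in candidates:
            score = 0.0
            for t, v, r in zip(targets, prior_vars, rewards):
                # A target outside the field of view yields no observation.
                v_post = (v if abs(f - t) > fov
                          else posterior_var(v, obs_noise(f, t, base_var)))
                if scheme == "Unc":    # expected reduction in uncertainty
                    score += v - v_post
                elif scheme == "RU":   # value of acting after the fixation
                    score += r * success_prob(v_post, sensitivity)
                elif scheme == "RUG":  # gain: value with perception minus without
                    score += r * (success_prob(v_post, sensitivity)
                                  - success_prob(v, sensitivity))
            if score > best_score:
                best, best_score = f, score
        return best

    targets = [0.0, 1.0]       # believed object positions
    prior_vars = [0.2, 0.05]   # object 0 is the more uncertain one
    rewards = [1.0, 5.0]       # acting on object 1 is worth more
    candidates = np.linspace(-0.5, 1.5, 41)
    for scheme in ("Unc", "RU", "RUG"):
        print(scheme, "->", choose_fixation(candidates, targets,
                                            prior_vars, rewards, scheme))

In this toy setting, Unc fixates the most uncertain object regardless of its value, while RU and RUG trade uncertainty reduction against the reward attached to each possible action; RUG scores a fixation by how much perception changes the value of acting, rather than by the post-fixation value itself.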

A pick & place task is used to characterise our models in terms of task performance, varying three environmental variables: reach/grasp sensitivity, observation noise, and field of view. Across all three variables, the RUG gaze scheme is in general the best option, both in task performance and in robustness to change.

Pick & place task using gaze control based on rewards, uncertainty and gain (RUG)



Download: [mpeg] [avi] [flv] [mp4]
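For a rough sense of how such a characterisation can be run, the toy sweep below pushes invented analogues of the three variables through the choose_fixation() sketch above (whose definitions it assumes are in scope); the specific values are illustrative, not our experimental settings.

    # Illustrative sweep over toy analogues of the three environmental
    # variables, reusing choose_fixation() and the data from the sketch above.
    import itertools

    for sens, noise, fov in itertools.product(
            [0.01, 0.05, 0.2],     # reach/grasp sensitivity
            [0.001, 0.01, 0.1],    # observation noise at the fovea
            [0.25, 0.5, np.inf]):  # field-of-view radius
        picks = {s: round(choose_fixation(candidates, targets, prior_vars,
                                          rewards, s, base_var=noise,
                                          sensitivity=sens, fov=fov), 2)
                 for s in ("Unc", "RU", "RUG")}
        print(f"sens={sens} noise={noise} fov={fov}: {picks}")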


A second task was implemented to allow the arms of the robot to interact with each other through bimanual actions: with one of the containers from the previous task removed, the robot must transfer objects from one hand to the other.

Bimanual pick & place task using gaze control based on rewards, uncertainty and gain (RUG)



Download: [mpeg] [avi] [flv] [mp4]


A third task, based on a psychophysical experiment devised by [Johansson et al. 2001. Eye-hand coordination in object manipulation. J. of Neuroscience 21, 17, 6917-6932], was used to assess how well our gaze control models fit existing human data. Only the RU and RUG schemes reproduced the same relative ordering of gaze and actions as the human subjects. Because the behaviour of the RU and RUG schemes is necessarily identical for this task, we cannot decide which of the two best fits the human data; further experiments are required to differentiate between them.

Johansson's task using gaze control based on rewards, uncertainty and gain (RUG)



Download: [mpeg] [avi] [flv] [mp4]


Our results demonstrate that reasoning about task rewards and task uncertainty is critical for the control of gaze. Still, further experiments (with both humans and machines) should be devised to analyse the precise roles of reward and information uncertainty during task performance.

For more information please see our publications on this work.


Relevant Publications:
Nunez-Varela, J. and Wyatt, J. L. Models of Gaze Control for Manipulation Tasks. ACM Transactions on Applied Perception (TAP). To appear. [pdf] [bib]

Nunez-Varela, J., Ravindran, B., and Wyatt, J. L. Gaze Allocation Analysis for a Visually Guided Manipulation Task. Proc. 12th International Conference on Simulation of Adaptive Behavior (SAB 2012). Odense, Denmark, August 2012. [pdf] [bib]

Nunez-Varela, J., Ravindran, B., and Wyatt, J. L. Where Do I Look Now? Gaze Allocation During Visually Guided Manipulation. Proc. IEEE International Conference on Robotics and Automation (ICRA). IEEE Press, Minnesota, USA, May 2012. [pdf] [bib]







Last updated: August 14, 2013