Publications

Motion adaptation reveals that the motion vector is represented in multiple coordinate frames

Accurately perceiving the velocity of an object during smooth pursuit is a complex challenge: although the object is moving in the world, it is almost still on the retina. Yet we can perceive the veridical motion of a visual stimulus in such conditions, suggesting a nonretinal representation of the motion vector. To explore this issue, we studied the frames of representation of the motion vector by evoking the well-known motion aftereffect during smooth-pursuit eye movements (SPEM). In the retinotopic configuration, due to an accompanying smooth pursuit, a stationary adapting random-dot stimulus was actually moving on the retina. Motion adaptation could therefore only result from motion in retinal coordinates. In contrast, in the spatiotopic configuration, the adapting stimulus moved on the screen but was practically stationary on the retina due to a matched SPEM. Hence, adaptation here would suggest a representation of the motion vector in spatiotopic coordinates. We found that exposure to spatiotopic motion led to significant adaptation. Moreover, the degree of adaptation in that condition was greater than the adaptation induced by viewing a random-dot stimulus that moved only on the retina. Finally, pursuit of the same target, without a random-dot array background, yielded no adaptation. Thus, in our experimental conditions, adaptation is not induced by the SPEM per se. Our results suggest that motion computation is likely to occur in parallel in two distinct representations: a low-level, retinal-motion-dependent mechanism and a high-level representation, in which the veridical motion is computed through integration of information from other sources.
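
For orientation (a summary sketch of the geometry assumed by the design, not wording from the paper): the logic of the two configurations follows from the standard decomposition of retinal image velocity during pursuit,

\[ \mathbf{v}_{\mathrm{retina}} = \mathbf{v}_{\mathrm{world}} - \mathbf{v}_{\mathrm{eye}} . \]

In the retinotopic configuration \( \mathbf{v}_{\mathrm{world}} = 0 \) while \( \mathbf{v}_{\mathrm{eye}} \neq 0 \), so the stationary stimulus sweeps across the retina; in the spatiotopic configuration \( \mathbf{v}_{\mathrm{world}} \approx \mathbf{v}_{\mathrm{eye}} \), so retinal slip is approximately nulled.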

Authors: Seidel Malkinson T, McKyton A, Zohary E.
Year of publication: 2012
Journal: J Vis. 2012;12(6):30.

Labs: “Working memory”