RT Journal Article
SR Electronic
T1 Position Information Encoded by Population Activity in Hierarchical Visual Areas
JF eneuro
JO eNeuro
FD Society for Neuroscience
SP ENEURO.0268-16.2017
DO 10.1523/ENEURO.0268-16.2017
A1 Kei Majima
A1 Paul Sukhanov
A1 Tomoyasu Horikawa
A1 Yukiyasu Kamitani
YR 2017
UL http://www.eneuro.org/content/early/2017/03/23/ENEURO.0268-16.2017.abstract
AB Neurons in high-level visual areas respond to more complex visual features and have broader receptive fields (RFs) than neurons in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information about the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes (V1–V4, LOC, and FFA). We collected fMRI responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate the population RF size of each fMRI voxel, RF models were fitted to individual voxels in each brain area. Voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball's position in a separate session was predicted from the multivoxel activity patterns, both by maximum likelihood estimation using the fitted RF models and by support vector regression (SVR). We found that, regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears attributable to the narrower spatial distribution of their RF centers.
The results suggest that much of the position information is preserved in population activity through the hierarchical visual pathway, regardless of RF size, and is potentially available to later processing for recognition and behavior.

Significance Statement
High-level ventral visual areas are thought to achieve position invariance with larger receptive fields at the cost of precise position information. However, larger receptive fields may not imply loss of position information at the population level. Here, multivoxel fMRI decoding reveals that high-level visual areas predict an object's position with accuracies similar to those of low-level visual areas, especially on the horizontal dimension, preserving information that is potentially available for later processing.
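The maximum likelihood decoding idea described in the abstract (predicting stimulus position from a population of voxels whose responses follow fitted Gaussian RF models) can be sketched on synthetic data. This is a minimal illustration, not the authors' pipeline: the voxel count, RF centers and sizes, noise level, and grid resolution below are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 200 voxels with 2-D Gaussian RFs.
# Centers and sizes are synthetic, not estimates from real fMRI data.
n_vox = 200
centers = rng.uniform(-10, 10, size=(n_vox, 2))  # RF centers (deg)
sizes = rng.uniform(1.0, 4.0, size=n_vox)        # RF sizes (deg)

def rf_response(pos):
    """Mean response of each voxel to a stimulus at `pos` (Gaussian RF model)."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sizes ** 2))

# Simulate a noisy population response to a stimulus at a known position.
true_pos = np.array([3.0, -2.0])
observed = rf_response(true_pos) + rng.normal(0, 0.05, n_vox)

# Maximum likelihood decoding: under i.i.d. Gaussian noise, the ML position
# is the one minimizing squared error between observed and model responses.
grid = np.linspace(-10, 10, 81)
best, best_err = None, np.inf
for x in grid:
    for y in grid:
        err = np.sum((observed - rf_response(np.array([x, y]))) ** 2)
        if err < best_err:
            best_err, best = err, (x, y)

print(best)  # decoded position; should land near (3.0, -2.0)
```

The same synthetic responses could instead be fed to a regression decoder such as SVR trained on many (response pattern, position) pairs; the ML decoder above is shown because it uses the RF models directly, mirroring the abstract's logic that position information survives in the population even when individual RFs are broad.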