Temporal attention selectively enhances target features

June 14th, 2021

Journal of Vision (2021)
Luis Ramirez, Joshua Foster & Sam Ling

Temporal attention, the allocation of attention to a moment in time, improves perception. Here, we examined the computational mechanism by which temporal attention improves perception, under a divisive normalization framework. Under this framework, attention can improve perception of a target signal in three ways: stimulus enhancement (increasing gain across all sensory channels), signal enhancement (selectively increasing gain in channels that encode the target stimulus), or external noise exclusion (reducing the gain in channels that encode irrelevant features). These mechanisms make diverging predictions when a target is embedded in varying levels of noise: stimulus enhancement improves performance only when noise is low, signal enhancement improves performance at all noise intensities, and external noise exclusion improves performance only when noise is high. To date, temporal attention studies have used noise-free displays. Therefore, it is unclear whether temporal attention acts via stimulus enhancement (amplifying both target features and noise) or signal enhancement (selectively amplifying target features) because both mechanisms predict improved performance in the absence of noise. To tease these mechanisms apart, we manipulated temporal attention using an auditory cue while parametrically varying external noise in a fine-orientation discrimination task. Temporal attention improved perceptual thresholds across all noise levels. Formal model comparisons revealed that this cuing effect was best accounted for by a combination of signal enhancement and stimulus enhancement, suggesting that temporal attention improves perception both by selectively amplifying target features and by boosting overall gain.
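
The diverging predictions of the three mechanisms can be sketched numerically. The snippet below is a toy signal-to-noise model, not the paper's full normalization model: `target_gain` is a hypothetical gain applied to channels carrying the target, and `noise_gain` a gain applied to channels carrying the external noise.

```python
import numpy as np

def snr(target_gain, noise_gain, ext_noise, internal_noise=1.0):
    """Toy SNR for a target embedded in external noise: the target response
    is scaled by the gain on target channels, while response variance combines
    fixed internal noise with external noise scaled by the gain on
    noise-carrying channels."""
    signal = target_gain * 1.0
    variance = internal_noise**2 + (noise_gain * ext_noise)**2
    return signal / np.sqrt(variance)

noise_levels = np.array([0.0, 0.5, 1.0, 2.0, 4.0])

baseline    = snr(1.0, 1.0, noise_levels)
stimulus_en = snr(2.0, 2.0, noise_levels)   # gain on everything, noise included
signal_en   = snr(2.0, 1.0, noise_levels)   # gain on target channels only
noise_excl  = snr(1.0, 0.5, noise_levels)   # attenuate noise channels only
```

Under these assumptions, stimulus enhancement beats baseline only at low noise (at high noise the amplified external noise dominates and the benefit vanishes), signal enhancement helps at every noise level, and noise exclusion helps only once external noise is present.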

 Download it here 

Population spatial frequency tuning in human early visual cortex

February 25th, 2020

Journal of Neurophysiology (2020)
Sara Aghajari, Louis Vinke & Sam Ling

Neurons within early visual cortex are selective for basic image statistics, including spatial frequency. However, these neurons are thought to act as band-pass filters, with the window of spatial frequency sensitivity varying across the visual field and across visual areas. Although a handful of previous functional MRI studies have examined human spatial frequency sensitivity using conventional designs and analysis methods, these measurements are time consuming and fail to capture the precision of spatial frequency tuning (bandwidth). In this study, we introduce a model-driven approach to fMRI analyses that allows for fast and efficient estimation of population spatial frequency tuning (pSFT) for individual voxels. Blood oxygen level-dependent (BOLD) responses within early visual cortex were acquired while subjects viewed a series of full-field stimuli that swept through a large range of spatial frequency content. Each stimulus was generated by band-pass filtering white noise with a central frequency that changed periodically between a minimum of 0.5 cycles/degree (cpd) and a maximum of 12 cpd. To estimate the underlying frequency tuning of each voxel, we assumed a log-Gaussian pSFT and optimized the parameters of this function by comparing our model output against the measured BOLD time series. Consistent with previous studies, our results show that an increase in eccentricity within each visual area is accompanied by a drop in the peak spatial frequency of the pSFT. Moreover, we found that pSFT bandwidth depends on eccentricity and is correlated with the pSFT peak; populations with lower peaks possess broader bandwidths in logarithmic scale, whereas in linear scale this relationship is reversed.
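
The core estimation step can be sketched in a few lines. This is a simplified illustration, not the paper's pipeline (which fits against the full BOLD time series rather than a static response profile): a log-Gaussian is a Gaussian over log spatial frequency, and its peak and bandwidth are recovered from a simulated voxel response by least squares.

```python
import numpy as np

def log_gaussian(sf, peak, bandwidth):
    """Log-Gaussian spatial-frequency tuning: a Gaussian over log(sf)."""
    return np.exp(-0.5 * ((np.log(sf) - np.log(peak)) / bandwidth) ** 2)

# Simulated voxel response over the swept range (0.5-12 cpd, as in the study)
sf = np.geomspace(0.5, 12, 40)
rng = np.random.default_rng(0)
bold = 1.5 * log_gaussian(sf, peak=2.0, bandwidth=0.8) + rng.normal(0, 0.05, sf.size)

# Grid search over tuning parameters; amplitude solved in closed form per candidate
peaks = np.geomspace(0.5, 12, 60)
bws = np.linspace(0.2, 2.0, 40)
best, best_err = None, np.inf
for p in peaks:
    for b in bws:
        pred = log_gaussian(sf, p, b)
        amp = (pred @ bold) / (pred @ pred)        # least-squares amplitude
        err = np.sum((bold - amp * pred) ** 2)
        if err < best_err:
            best, best_err = (p, b, amp), err
peak_hat, bw_hat, amp_hat = best
```

Because the model is Gaussian in log frequency, a fixed log-scale bandwidth translates into a wider linear-scale bandwidth at higher peaks, which is why the log- and linear-scale peak-bandwidth relationships described above can run in opposite directions.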

 Download it here 

Luminance potentiates human visuocortical responses

February 11th, 2020

Journal of Neurophysiology (2020)
Louis Vinke & Sam Ling

Our visual system is tasked with transforming variations in light within our environment into a coherent percept, typically described using properties such as luminance and contrast. Models of vision often downplay the importance of luminance in shaping cortical responses, instead prioritizing representations that do not covary with overall luminance (i.e., contrast), and yet visuocortical response properties that may reflect luminance encoding remain poorly understood. In this study, we examined whether well-established visuocortical response properties may also reflect luminance encoding, challenging the idea that luminance information itself plays no significant role in supporting visual perception. To do so, we measured functional activity in human visual cortex when presenting stimuli varying in contrast and mean luminance, and found that luminance response functions are strongly contrast dependent between 50 and 250 cd/m², confirmed with a subsequent experiment. High-contrast stimuli produced linearly increasing responses as luminance increased logarithmically for all early visual areas, whereas low-contrast stimuli produced either flat (V1) or assorted positive linear (V2 and V3) response profiles. These results reveal that the mean luminance information of a visual signal persists within visuocortical representations, potentially reflecting an inherent imbalance of excitatory and inhibitory components that can be either contrast dependent (V1 and V2) or contrast invariant (V3). The role of luminance should be considered when the aim is to drive potent visually evoked responses and when activity is compared across studies. More broadly, overall luminance should be weighed heavily as a core feature of the visual system and should play a significant role in cortical models of vision.

 Download it here 


Normalization governs attentional modulation within human visual cortex

December 11th, 2019

Nature Communications (2019)
Ilona Bloem & Sam Ling

Although attention is known to increase the gain of visuocortical responses, its underlying neural computations remain unclear. Here, we use fMRI to test the hypothesis that a neural population’s ability to be modulated by attention is dependent on divisive normalization. To do so, we leverage the feature-tuned properties of normalization and find that visuocortical responses to stimuli sharing features normalize each other more strongly. Comparing these normalization measures to measures of attentional modulation, we demonstrate that subpopulations which exhibit stronger normalization also exhibit larger attentional benefits. In a converging experiment, we reveal that attentional benefits are greatest when a subpopulation is forced into a state of stronger normalization. Taken together, these results suggest that the degree to which a subpopulation exhibits normalization plays a role in dictating its potential for attentional benefits.
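
The feature-tuned normalization idea can be illustrated with a toy model (made-up similarity weights, not the analysis used in the paper): each unit's drive is divided by a similarity-weighted pool of all drives, so stimuli that share features suppress one another more strongly.

```python
import numpy as np

def normalized_response(drive, tuning_similarity, sigma=1.0):
    """Divisive normalization with a feature-tuned suppressive pool:
    each unit's drive is divided by a similarity-weighted sum of all drives."""
    pool = tuning_similarity @ drive          # tuned normalization pool
    return drive / (sigma + pool)

drive = np.array([10.0, 10.0])                       # two equally driven units
similar    = np.array([[1.0, 0.9], [0.9, 1.0]])      # stimuli share features
dissimilar = np.array([[1.0, 0.1], [0.1, 1.0]])      # stimuli differ in features

r_sim = normalized_response(drive, similar)
r_dis = normalized_response(drive, dissimilar)
```

With identical drives, the shared-feature pair ends up more suppressed than the dissimilar pair, which is the normalization signature the study compares against attentional modulation.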

 Download it here 


Dichoptic vision in the absence of attention: neither fusion nor rivalry

September 9th, 2019

Scientific Reports (2019)
Cheng Stella Qian, Sam Ling & Jan W. Brascamp

When the two eyes’ processing streams meet in visual cortex, two things can happen: sufficiently similar monocular inputs are combined into a fused representation, whereas markedly different inputs engage in rivalry. Interestingly, the emergence of rivalry appears to require attention. Withdrawing attention causes the alternating monocular dominance that characterizes rivalry to cease, apparently allowing both monocular signals to be processed simultaneously. What happens to these signals in this case, however, remains something of a mystery; are they fused into an integrated representation? In a set of experiments, we show this not to be the case: visual aftereffects are consistent with the simultaneous yet separate presence of two segregated monocular representations, rather than a joint representation. These results provide evidence that dichoptic vision without attention prompts a third and previously unknown mode, where both eyes’ inputs receive equal processing, but escape interocular fusion.

 Download it here 


Postdoc in Functional Neuroimaging @ Boston University

July 11th, 2019

The Ling Lab (http://sites.bu.edu/vision) at Boston University is seeking a postdoctoral researcher, with a flexible start date.

This position, funded by the National Eye Institute, is part of a collaboration between Dr. Sam Ling (http://sites.bu.edu/vision) and Dr. Jan Brascamp (https://psychology.psy.msu.edu/brascamplab/). The position would involve using state-of-the-art fMRI techniques to investigate the neural computations subserving early visual processing, and how they interact with processes such as attention and interocular suppression. Research methods that are currently employed in the lab include fMRI, EEG, psychophysics, and computational modeling. The Ling Lab is part of Boston University’s Department of Psychological and Brain Sciences (www.bu.edu/psych), and is affiliated with the Center for Integrated Life Sciences and Engineering, and the Center for Systems Neuroscience. The Brascamp Lab is part of Michigan State University’s Department of Psychology (https://psychology.msu.edu), and is affiliated with MSU’s Cognitive Science Program and Neuroscience Program.

Applicants must have a Ph.D. in neuroscience, psychology, or related fields, and should possess a strong programming background. Prior experience with neuroimaging or advanced psychophysical techniques is highly preferred. The position is available immediately, and applications will be reviewed until the position is filled.

Applications should include: a CV, a brief statement of research interests, the expected date of availability, and the names and contact information for three referees. For more information about this position, contact Dr. Sam Ling at samling@bu.edu.


Visuocortical changes during a freezing-like state in humans

June 12th, 2018

Neuroimage (2018)
Maria Lojowska, Sam Ling, Karin Roelofs, Erno Hermans

An adaptive response to threat requires optimized detection of critical sensory cues. This optimization is thought to be aided by freezing - an evolutionarily preserved defensive state of immobility characterized by parasympathetically mediated fear bradycardia and regulated by the amygdala-periaqueductal grey (PAG) circuit. Behavioral observations in humans and animals have suggested that freezing is also a state of enhanced visual sensitivity, particularly for coarse visual information, but the underlying neural mechanisms remain unclear. We induced a freezing-like state in healthy volunteers using threat of electrical shock and measured threat-related changes in both stimulus-independent (baseline) and stimulus-evoked visuocortical activity to low- vs. high-spatial frequency gratings, using functional MRI. As measuring immobility is not feasible in MRI environments, we used fear bradycardia and amygdala-PAG coupling to infer a freezing-like state. An independent functional localizer and retinotopic mapping were used to assess the retinotopic specificity of visuocortical modulations. We found a threat-induced increase in baseline (stimulus-independent) visuocortical activity that was retinotopically nonspecific, which was accompanied by increased connectivity with the amygdala. A positive correlation between visuocortical activity and fear bradycardia (while controlling for sympathetic activation), and a concomitant increase in amygdala-PAG connectivity, suggest the specificity of these findings for the parasympathetically dominated freezing-like state. Visuocortical responses to gratings were retinotopically specific but did not differ between threat and safe conditions across participants. However, individuals who exhibited better discrimination of low-spatial frequency stimuli showed reduced stimulus-evoked V1 responses under threat.
Our findings suggest that a defensive state of freezing involves an integration of preparatory defensive and perceptual changes that is regulated by a common mechanism involving the amygdala.

 Download it here 

Visual memories bypass normalization

December 10th, 2017

Psychological Science (2018)
Ilona Bloem, Yurika Watanabe, Melissa Kibbe & Sam Ling

How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Participants were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory, nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

 Download it here 

Attentional modulation interacts with orientation anisotropies in contrast perception

September 1st, 2017

Journal of Vision (2017)
Ilona Bloem & Sam Ling

Orientation perception is not comparable across all orientations, a phenomenon commonly referred to as the oblique effect. Here, we first assess the interaction between stimulus contrast and the oblique effect. Specifically, we examined whether the impairment in behavioral performance for oblique versus cardinal orientations is best explained by a contrast- or response gain modulation of the contrast psychometric function. Results revealed a robust oblique effect, whereby asymptotic performance for oblique orientations was substantially lower than for cardinal orientations, which we interpret as the result of multiplicative attenuation of contrast responses for oblique orientations. Next, we assessed how orientation anisotropies interact with attention by measuring psychometric functions for orientations under low or high attentional load. Interestingly, attentional load affects the performance for cardinal and oblique orientations differently: while attentional load multiplicatively attenuates contrast psychometric functions for both cardinal and oblique orientation conditions, the magnitude of this effect is greater for the obliques. Thus, having fewer attentional resources available seems to impair the response for oblique orientations to a larger degree than for cardinal orientations.
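
The contrast-gain versus response-gain distinction the study tests can be sketched with a standard contrast response function. This is an illustration with made-up parameter values, not the fitted psychometric model from the paper: a Naka-Rushton function where response gain scales the asymptote multiplicatively, while contrast gain shifts the semi-saturation contrast.

```python
import numpy as np

def naka_rushton(c, r_max, c50, n=2.0):
    """Contrast response function: R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

c = np.linspace(0.01, 1.0, 100)
baseline      = naka_rushton(c, r_max=1.0, c50=0.3)
response_gain = naka_rushton(c, r_max=0.7, c50=0.3)  # multiplicative attenuation: asymptote drops
contrast_gain = naka_rushton(c, r_max=1.0, c50=0.5)  # horizontal shift: more contrast needed
```

A lower asymptote at full contrast, with an unchanged semi-saturation point, is the multiplicative (response-gain) signature that the abstract attributes to oblique orientations and to high attentional load.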

Download it here 

Characterizing the effects of feature salience and top-down attention in the early visual system

May 14th, 2017

Journal of Neurophysiology (2017)
Sonia Poltoratski, Sam Ling, Devin McCormack, Frank Tong

The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. Here, we used high-resolution fMRI at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or non-salient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, while the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas.

 Download it here 

BU’s new MRI scanner is here!

March 2nd, 2017


Timelapse of the install (courtesy of Louis Vinke)

Elevated arousal levels enhance contrast perception

March 2nd, 2017

Journal of Vision (2017)
Dongho Kim, Savannah Lokey & Sam Ling

Our state of arousal fluctuates from moment to moment—fluctuations that can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of decreased arousal state is an impaired ability to visually process our environment.

 Download it here 

Best animated gif ever

October 19th, 2016

Nice, Louis.

Perceptual learning increases orientation sampling efficiency

March 15th, 2016

Journal of Vision (2016)
Denise Moerel, Sam Ling & Janneke Jehee


Visual orientation discrimination is known to improve with extensive training, but the mechanisms underlying this behavioral benefit remain poorly understood. Here, we examine the possibility that more reliable task performance could arise in part because observers learn to sample information from a larger portion of the stimulus. We used a variant of the classification image method in combination with a global orientation discrimination task to test whether a change in information sampling underlies training-based benefits in behavioral performance. The results revealed that decreases in orientation thresholds with perceptual learning were accompanied by increases in stimulus sampling. In particular, while stimulus sampling was restricted to the parafoveal, inner portion of the stimulus before training, we observed an outward spread of sampling after training. These results demonstrate that the benefits of perceptual learning may arise, in part, from a strategic increase in the efficiency with which the observer samples information from a visual stimulus.
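
The logic of a classification image can be sketched in a few lines. This is a generic noise-field illustration with a hypothetical observer template, not the paper's variant of the method: noise shown on each trial is sorted by the observer's response, and the difference of the trial-averaged noise fields recovers which parts of the stimulus drove the decisions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pixels = 5000, 64

# Hypothetical observer template: only the central pixels are sampled,
# analogous to pre-training sampling restricted to the inner stimulus region
template = np.zeros(n_pixels)
template[20:44] = 1.0

# Random noise fields shown on each trial; decisions combine the template's
# view of the noise with internal decision noise
noise = rng.normal(0, 1, (n_trials, n_pixels))
decision = (noise @ template + rng.normal(0, 4, n_trials)) > 0

# Classification image: mean noise on "yes" trials minus mean on "no" trials
cimage = noise[decision].mean(axis=0) - noise[~decision].mean(axis=0)
```

The classification image comes out near zero wherever the observer ignores the stimulus and positive over the sampled region, so an outward spread of sampling after training would appear as the positive region of the image widening.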

 Download it here 

The Occipital Face Area is Causally Involved in Facial Viewpoint Perception

November 15th, 2015

Journal of Neuroscience (2015)
Tim Kietzmann, Sonia Poltoratski, Peter König, Randolph Blake, Frank Tong & Sam Ling


Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: judging the viewpoint symmetry or judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. While viewpoint symmetry judgments were significantly disrupted, we observed no impact on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of inter-hemispheric crosstalk in the formation of viewpoint-invariant face perception.

Download it here