Congrats to Luis, recipient of the 2021 NIH D-SPAN Scholar Award!!

February 15th, 2022

Luis Ramirez: 2021 D-SPAN Scholar
F99 Phase: Boston University | Sponsor: Sam Ling

Very exciting news! The NIH Blueprint Diversity Specialized Predoctoral to Postdoctoral Advancement in Neuroscience (D-SPAN) Award supports the pre- to post-doctoral transition of diverse graduate students. This two-phase award will facilitate completion of the doctoral dissertation and transition of talented graduate students (F99 phase) to strong neuroscience research postdoctoral positions (K00 phase), and will provide career development opportunities relevant to their long-term career goal of becoming independent neuroscience researchers. Currently, Luis is a PhD Candidate in the Graduate Program for Neuroscience at Boston University (BU), working with Dr. Sam Ling. His dissertation combines non-invasive human brain imaging, computational modeling, and psychophysics to understand the neurocomputational mechanisms that underlie how attention regulates perception. In the long term, Luis aims to investigate how perception and memory interact in visual cortex, and how attention facilitates this interaction. Moreover, as a first-gen, Afro-Latino student, Luis is committed to improving academia for historically excluded students, having led critical DEI committees and graduate student organizations throughout BU.

More info here

 

Arousal-based pupil modulation is dictated by luminance

January 26th, 2022

Scientific Reports (2022)
Jasmine Pan, Michaela Klimova, Joseph McGuire & Sam Ling

Pupillometry has become a standard measure for assessing arousal state. However, environmental factors such as luminance, a primary dictator of pupillary responses, often vary across studies. To what degree does luminance interact with arousal-driven pupillary changes? Here, we parametrically assessed luminance-driven pupillary responses across a wide range of luminances, while concurrently manipulating cognitive arousal using auditory math problems of varying difficulty. At the group level, our results revealed that the modulatory effect of cognitive arousal on pupil size interacts multiplicatively with luminance, with the largest effects occurring at low and mid-luminances. However, at the level of individuals, there were qualitatively distinct individual differences in the modulatory effect of cognitive arousal on luminance-driven pupillary responses. Our findings suggest that pupillometry as a measure for assessing arousal requires more careful consideration: some luminance ranges are better suited than others for observing pupillary differences between arousal conditions.

Download it here.

 

Saturating nonlinearities of contrast response in human visual cortex

January 4th, 2022

Journal of Neuroscience (2022)
Louis Vinke, Ilona Bloem & Sam Ling

Response nonlinearities are ubiquitous throughout the brain, especially within sensory cortices, where changes in stimulus intensity typically produce compressed responses. Although this relationship is well established in electrophysiological measurements, it remains controversial whether the same nonlinearities hold for population-based measurements obtained with human fMRI. We propose that these purported disparities are not contingent on measurement type and are instead largely dependent on the visual system's state at the time of interrogation. We show that deploying a contrast adaptation paradigm permits reliable measurements of saturating sigmoidal contrast response functions (10 participants, 7 female). When not controlling the adaptation state, our results coincide with previous fMRI studies, yielding nonsaturating, largely linear contrast responses. These findings highlight the important role of adaptation in manifesting measurable nonlinear responses within human visual cortex, reconciling discrepancies reported in vision neuroscience, re-establishing the qualitative relationship between stimulus intensity and response across different neural measures, and supporting the concerted study of cortical gain control.
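The saturating, sigmoidal contrast response functions the abstract refers to are conventionally described by a hyperbolic-ratio (Naka-Rushton) function. A minimal sketch, with purely illustrative parameter values (not the fitted values from the paper):

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0, baseline=0.0):
    """Hyperbolic-ratio (Naka-Rushton) contrast response function.

    c: stimulus contrast (0-1)
    r_max: response ceiling
    c50: semi-saturation contrast (contrast yielding half of r_max)
    n: exponent controlling the steepness of the sigmoid
    """
    c = np.asarray(c, dtype=float)
    return baseline + r_max * c**n / (c**n + c50**n)

# Responses compress toward r_max as contrast increases
contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
responses = naka_rushton(contrasts)
```

The key property contrasted in the paper is saturation: with a finite c50, responses flatten at high contrast, whereas a near-linear function (as found without adaptation control) behaves as if c50 lies beyond the measurable contrast range.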

Download it here.

 

The specificity of orientation-tuned normalization within human early visual cortex

November 9th, 2021

Journal of Neurophysiology (2021)
Michaela Klimova, Ilona Bloem & Sam Ling

Normalization within visual cortex is modulated by contextual influences; stimuli sharing similar features suppress each other more than dissimilar stimuli. This feature-tuned component of suppression depends on multiple factors, including the orientation content of stimuli. Indeed, pairs of stimuli arranged in a center-surround configuration attenuate each other's responses to a greater degree when oriented collinearly than when oriented orthogonally. Although numerous studies have examined the nature of surround suppression at these two extremes, far less is known about how the strength of tuned normalization varies as a function of continuous changes in orientation similarity, particularly in humans. In this study, we used functional magnetic resonance imaging (fMRI) to examine the bandwidth of orientation-tuned suppression within human visual cortex. Blood-oxygen-level-dependent (BOLD) responses were acquired as participants viewed a full-field circular stimulus composed of wedges of orientation-bandpass filtered noise. This stimulus configuration allowed us to parametrically vary orientation differences between neighboring wedges in gradual steps between collinear and orthogonal. We found the greatest suppression for collinearly arranged stimuli, with a gradual increase in BOLD response as the orientation content became more dissimilar. We quantified the tuning width of orientation-tuned suppression, finding that the voxel-wise bandwidth of orientation-tuned normalization was between 20° and 30°, and did not differ substantially between early visual areas. Voxel-wise analyses revealed that suppression width covaried with retinotopic preference, with the tightest bandwidths at outer eccentricities. Having an estimate of orientation-tuned suppression bandwidth can serve to constrain models of tuned normalization, establishing the precise degree to which suppression strength depends on similarity between visual stimulus components.

 Download it here 

Temporal attention selectively enhances target features

June 14th, 2021

Journal of Vision (2021)
Luis Ramirez, Joshua Foster & Sam Ling

Temporal attention, the allocation of attention to a moment in time, improves perception. Here, we examined the computational mechanism by which temporal attention improves perception, under a divisive normalization framework. Under this framework, attention can improve perception of a target signal in three ways: stimulus enhancement (increasing gain across all sensory channels), signal enhancement (selectively increasing gain in channels that encode the target stimulus), or external noise exclusion (reducing the gain in channels that encode irrelevant features). These mechanisms make diverging predictions when a target is embedded in varying levels of noise: stimulus enhancement improves performance only when noise is low, signal enhancement improves performance at all noise intensities, and external noise exclusion improves performance only when noise is high. To date, temporal attention studies have used noise-free displays. Therefore, it is unclear whether temporal attention acts via stimulus enhancement (amplifying both target features and noise) or signal enhancement (selectively amplifying target features), because both mechanisms predict improved performance in the absence of noise. To tease these mechanisms apart, we manipulated temporal attention using an auditory cue while parametrically varying external noise in a fine-orientation discrimination task. Temporal attention improved perceptual thresholds across all noise levels. Formal model comparisons revealed that this cuing effect was best accounted for by a combination of signal enhancement and stimulus enhancement, suggesting that temporal attention improves perception both by selectively enhancing target features and by increasing gain across all sensory channels.

 Download it here 

Population spatial frequency tuning in human early visual cortex

February 25th, 2020

Journal of Neurophysiology (2020)
Sara Aghajari, Louis Vinke & Sam Ling

Neurons within early visual cortex are selective for basic image statistics, including spatial frequency. However, these neurons are thought to act as band-pass filters, with the window of spatial frequency sensitivity varying across the visual field and across visual areas. Although a handful of previous functional MRI studies have examined human spatial frequency sensitivity using conventional designs and analysis methods, these measurements are time consuming and fail to capture the precision of spatial frequency tuning (bandwidth). In this study, we introduce a model-driven approach to fMRI analyses that allows for fast and efficient estimation of population spatial frequency tuning (pSFT) for individual voxels. Blood oxygen level-dependent (BOLD) responses within early visual cortex were acquired while subjects viewed a series of full-field stimuli that swept through a large range of spatial frequency content. Each stimulus was generated by band-pass filtering white noise with a central frequency that changed periodically between a minimum of 0.5 cycles/degree (cpd) and a maximum of 12 cpd. To estimate the underlying frequency tuning of each voxel, we assumed a log-Gaussian pSFT and optimized the parameters of this function by comparing our model output against the measured BOLD time series. Consistent with previous studies, our results show that an increase in eccentricity within each visual area is accompanied by a drop in the peak spatial frequency of the pSFT. Moreover, we found that pSFT bandwidth depends on eccentricity and is correlated with the pSFT peak; populations with lower peaks possess broader bandwidths in logarithmic scale, whereas in linear scale this relationship is reversed.
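The core of the estimation step is fitting a log-Gaussian tuning curve to each voxel's responses across the spatial frequency sweep. A minimal sketch with simulated data, assuming a simple least-squares fit (the paper's actual pipeline also models the hemodynamic response; all parameter values here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian_psft(sf, peak, bw, amp):
    """Log-Gaussian spatial frequency tuning curve.

    sf: spatial frequency (cpd); peak: preferred SF (cpd);
    bw: tuning width in log-SF units; amp: response amplitude.
    """
    return amp * np.exp(-(np.log(sf) - np.log(peak)) ** 2 / (2 * bw**2))

# Simulated voxel responses over the 0.5-12 cpd range described above
sfs = np.geomspace(0.5, 12.0, 30)
rng = np.random.default_rng(0)
observed = log_gaussian_psft(sfs, peak=2.0, bw=0.8, amp=1.0) \
    + rng.normal(0.0, 0.02, sfs.size)

# Recover the tuning parameters from the noisy responses
params, _ = curve_fit(log_gaussian_psft, sfs, observed, p0=[1.0, 1.0, 1.0])
peak_hat, bw_hat, amp_hat = params
```

Working in log-SF is what makes the bandwidth comparison in the abstract meaningful: a fixed bandwidth in log units corresponds to a linear-scale width that grows with the peak, which is exactly the reversal the authors report.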

 Download it here 

Luminance potentiates human visuocortical responses

February 11th, 2020

Journal of Neurophysiology (2020)
Louis Vinke & Sam Ling

Our visual system is tasked with transforming variations in light within our environment into a coherent percept, typically described using properties such as luminance and contrast. Models of vision often downplay the importance of luminance in shaping cortical responses, instead prioritizing representations that do not covary with overall luminance (i.e., contrast), and yet visuocortical response properties that may reflect luminance encoding remain poorly understood. In this study, we examined whether well-established visuocortical response properties may also reflect luminance encoding, challenging the idea that luminance information itself plays no significant role in supporting visual perception. To do so, we measured functional activity in human visual cortex when presenting stimuli varying in contrast and mean luminance, and found that luminance response functions are strongly contrast dependent between 50 and 250 cd/m2, confirmed with a subsequent experiment. High-contrast stimuli produced linearly increasing responses as luminance increased logarithmically for all early visual areas, whereas low-contrast stimuli produced either flat (V1) or assorted positive linear (V2 and V3) response profiles. These results reveal that the mean luminance information of a visual signal persists within visuocortical representations, potentially reflecting an inherent imbalance of excitatory and inhibitory components that can be either contrast dependent (V1 and V2) or contrast invariant (V3). The role of luminance should be considered when the aim is to drive potent visually evoked responses and when activity is compared across studies. More broadly, overall luminance should be weighed heavily as a core feature of the visual system and should play a significant role in cortical models of vision.

 Download it here 

 

Normalization governs attentional modulation within human visual cortex

December 11th, 2019

Nature Communications (2019)
Ilona Bloem & Sam Ling

Although attention is known to increase the gain of visuocortical responses, its underlying neural computations remain unclear. Here, we use fMRI to test the hypothesis that a neural population’s ability to be modulated by attention is dependent on divisive normalization. To do so, we leverage the feature-tuned properties of normalization and find that visuocortical responses to stimuli sharing features normalize each other more strongly. Comparing these normalization measures to measures of attentional modulation, we demonstrate that subpopulations which exhibit stronger normalization also exhibit larger attentional benefits. In a converging experiment, we reveal that attentional benefits are greatest when a subpopulation is forced into a state of stronger normalization. Taken together, these results suggest that the degree to which a subpopulation exhibits normalization plays a role in dictating its potential for attentional benefits.
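The link between normalization strength and attentional benefit can be sketched with a Reynolds-and-Heeger-style normalization model of attention, in which attentional gain scales a unit's drive before the divisive stage. This is a toy illustration of the qualitative prediction, not the analysis from the paper; all values are illustrative:

```python
def response(c_center, c_surround, attn=1.0):
    """Center response under surround normalization.

    Attentional gain multiplies the center drive both in the numerator
    and in the normalization pool, so its net benefit depends on how
    strongly the unit is being normalized by its surround.
    """
    sigma = 0.1  # illustrative semi-saturation constant
    drive = attn * c_center
    return drive / (drive + c_surround + sigma)

# Attentional benefit = attended minus unattended response.
# A stronger surround (stronger normalization) yields a larger benefit.
weak_norm   = response(0.5, 0.1, attn=2.0) - response(0.5, 0.1)
strong_norm = response(0.5, 0.8, attn=2.0) - response(0.5, 0.8)
```

In the weak-normalization case the unattended response is already near saturation, leaving little headroom, whereas under strong normalization the same attentional gain can release more response, matching the pattern the abstract reports.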

 Download it here 

 

Dichoptic vision in the absence of attention: neither fusion nor rivalry

September 9th, 2019

Scientific Reports (2019)
Cheng Stella Qian, Sam Ling & Jan W. Brascamp

When the two eyes’ processing streams meet in visual cortex, two things can happen: sufficiently similar monocular inputs are combined into a fused representation, whereas markedly different inputs engage in rivalry. Interestingly, the emergence of rivalry appears to require attention. Withdrawing attention causes the alternating monocular dominance that characterizes rivalry to cease, apparently allowing both monocular signals to be processed simultaneously. What happens to these signals in this case, however, remains something of a mystery; are they fused into an integrated representation? In a set of experiments, we show this not to be the case: visual aftereffects are consistent with the simultaneous yet separate presence of two segregated monocular representations, rather than a joint representation. These results provide evidence that dichoptic vision without attention prompts a third and previously unknown mode, where both eyes’ inputs receive equal processing, but escape interocular fusion.

 Download it here 

 

Visuocortical changes during a freezing-like state in humans

June 12th, 2018

Neuroimage (2018)
Maria Lojowska, Sam Ling, Karin Roelofs, Erno Hermans

An adaptive response to threat requires optimized detection of critical sensory cues. This optimization is thought to be aided by freezing, an evolutionarily preserved defensive state of immobility characterized by parasympathetically mediated fear bradycardia and regulated by the amygdala-periaqueductal grey (PAG) circuit. Behavioral observations in humans and animals have suggested that freezing is also a state of enhanced visual sensitivity, particularly for coarse visual information, but the underlying neural mechanisms remain unclear. We induced a freezing-like state in healthy volunteers using threat of electrical shock and measured threat-related changes in both stimulus-independent (baseline) and stimulus-evoked visuocortical activity to low- vs. high-spatial frequency gratings, using functional MRI. As measuring immobility is not feasible in MRI environments, we used fear bradycardia and amygdala-PAG coupling to infer a freezing-like state. An independent functional localizer and retinotopic mapping were used to assess the retinotopic specificity of visuocortical modulations. We found a threat-induced increase in baseline (stimulus-independent) visuocortical activity that was retinotopically nonspecific, which was accompanied by increased connectivity with the amygdala. A positive correlation between visuocortical activity and fear bradycardia (while controlling for sympathetic activation), and a concomitant increase in amygdala-PAG connectivity, suggest the specificity of these findings for the parasympathetically dominated freezing-like state. Visuocortical responses to gratings were retinotopically specific but did not differ between threat and safe conditions across participants. However, individuals who exhibited better discrimination of low-spatial frequency stimuli showed reduced stimulus-evoked V1 responses under threat. Our findings suggest that a defensive state of freezing involves an integration of preparatory defensive and perceptual changes that is regulated by a common mechanism involving the amygdala.

 Download it here 

Visual memories bypass normalization

December 10th, 2017

Psychological Science (2018)
Ilona Bloem, Yurika Watanabe, Melissa Kibbe & Sam Ling

How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Participants were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores – neither between representations in memory, nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

 Download it here 

Attentional modulation interacts with orientation anisotropies in contrast perception

September 1st, 2017

Journal of Vision (2017)
Ilona Bloem & Sam Ling

Orientation perception is not comparable across all orientations, a phenomenon commonly referred to as the oblique effect. Here, we first assess the interaction between stimulus contrast and the oblique effect. Specifically, we examined whether the impairment in behavioral performance for oblique versus cardinal orientations is best explained by a contrast- or response-gain modulation of the contrast psychometric function. Results revealed a robust oblique effect, whereby asymptotic performance for oblique orientations was substantially lower than for cardinal orientations, which we interpret as the result of multiplicative attenuation of contrast responses for oblique orientations. Next, we assessed how orientation anisotropies interact with attention by measuring psychometric functions for orientations under low or high attentional load. Interestingly, attentional load affects performance for cardinal and oblique orientations differently: while attentional load multiplicatively attenuates contrast psychometric functions for both cardinal and oblique orientation conditions, the magnitude of this effect is greater for the obliques. Thus, having fewer attentional resources available seems to impair the response for oblique orientations to a larger degree than for cardinal orientations.

Download it here 

Characterizing the effects of feature salience and top-down attention in the early visual system

May 14th, 2017

Journal of Neurophysiology (2017)
Sonia Poltoratski, Sam Ling, Devin McCormack, Frank Tong

The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. Here, we used high-resolution fMRI at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or non-salient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, while the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas.

Download it here

BU’s new MRI scanner is here!

March 2nd, 2017


Timelapse of the install (courtesy of Louis Vinke)

Elevated arousal levels enhance contrast perception

March 2nd, 2017

Journal of Vision (2017)
Dongho Kim, Savannah Lokey & Sam Ling

Our state of arousal fluctuates from moment to moment—fluctuations that can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far-ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of decreased arousal state is an impaired ability to visually process our environment.

Download it here