
Evolving concepts of sensory adaptation

  • Michael A. Webster

    mwebster@unr.edu

    Affiliation

    • Department of Psychology, University of Nevada, Reno, Reno NV 89557, USA
F1000 Biol Reports  2012, 4:21 (https://doi.org/10.3410/B4-21)
Published: 01 Nov 2012

Abstract

Sensory systems constantly adapt their responses to match the current environment. These adjustments occur at many levels of the system and increasingly appear to calibrate even for highly abstract perceptual representations of the stimulus. The similar effects of adaptation across very different stimulus domains point to common design principles but also continue to raise questions about the purpose of adaptation.

Introduction

The sensory systems we use to monitor the world around us are not static and instead are continuously recalibrating to adjust for changes in the environment (e.g. in the lighting or temperature) or to compensate for changes in the observer (e.g. with aging or disease). For example, the aromas that lure you into a room (or warn you away!) fade quickly from awareness once you enter, while your perception of color can change dramatically depending on the colors seen previously (Figure 1). These rapid sensitivity adjustments are known as adaptation (changes in the response properties of neurons induced by the recent stimulus context) and suggest that much of our perceptual experience is relative to the stimuli we have experienced recently. Studies of adaptation have a very rich history in perceptual science, because the perceptual “aftereffects” of the adaptation can provide clues as to how our senses encode and represent the stimulus [1,2]. A wide variety of aftereffects have been documented, including classic perceptual illusions, such as a bias in what looks vertical after viewing tilted lines [3], or a change in what looks stationary after adapting to movement [4]. However, these studies have traditionally focused on simple perceptual properties and, thus, potentially early stages of neural coding. Recently, the study of adaptation has been extended to much more complex and naturalistic attributes, and this has revealed both new insights and new questions about sensory coding and the role of adaptation. While this review focuses on vision, the findings and principles that have emerged are likely to be general to all of the senses as well as their interactions.

Figure 1.

Color afterimages

An example of a perceptual aftereffect, similar to the “Big Spanish Castle” illusion popularized on the internet [88]. The upper image is a pseudo-negative of the original photograph (with color differences exaggerated and brightness differences reduced). Stare at the black dot in the upper image for 30 seconds or more and then quickly switch your gaze to the dot in the lower picture. The grayscale image should briefly appear colored. The illusory colors are opposite to the adapting colors, a “negative aftereffect” typical of adaptation. This occurs because adaptation reduces sensitivity to the local adapting color and, thus, biases perception toward the complementary color. Note also that the aftereffect is visible only when the afterimage is in register with the grayscale image (by fixating the dots). This occurs because the colors in the afterimage tend to fill in between the luminance boundaries and, thus, blend together in different ways depending on the position of the borders [87]. This illustrates that adaptation effects that arise very early in the visual system can be modulated at more central levels [22].
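The subtractive logic behind the negative aftereffect can be sketched in a few lines of code. This is purely a toy model (the function name and weighting are hypothetical, and real chromatic adaptation involves gain changes at several retinal and cortical sites), but it captures why a gray test appears tinted with the complement of the adapting color:

```python
# Toy model of a negative color afterimage. Colors are RGB triples in
# [0, 1], with (0.5, 0.5, 0.5) as neutral gray. Adaptation is modeled
# as subtracting a fraction `w` of the adaptor's deviation from gray
# from each channel's response -- a crude stand-in for reduced
# sensitivity to the adapting color.

def afterimage_bias(test, adapt, w=0.5):
    """Return the perceived test color after adapting to `adapt`."""
    return tuple(t - w * (a - 0.5) for t, a in zip(test, adapt))

gray = (0.5, 0.5, 0.5)
red_adaptor = (0.9, 0.1, 0.1)

perceived = afterimage_bias(gray, red_adaptor)
print(perceived)  # gray is shifted toward cyan: less red, more green/blue
```

The perceived shift is opposite in sign to the adaptor's deviation from gray, which is exactly the “negative aftereffect” described above.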

Stages of adaptation

In the visual system, multiple stages of adaptation are well documented. Adjusting to changes in average brightness or color begins as early in visual coding as the photoreceptors [5], while adjusting to different patterns of light (e.g. to the orientation of edges or the direction of motion) reflects changes at more central levels [6]. In humans, these pattern-selective aftereffects are thought to include response changes in the cortex, in part because cells with receptive fields that can distinguish orientation or movement direction first appear in the cortex [7], and also because adaptation in one eye often affects the appearance of patterns seen through the other [8] (and inputs from the two eyes first converge in primary visual cortex). But vision is mediated by myriad cortical levels and areas [9], raising the question of whether the processes of adaptation are manifested throughout the visual stream. One indication of this is the growing number of demonstrations of “high-level” pattern aftereffects, which show that adaptation can influence the perception of complex and much more abstract properties of the stimulus [1,10]. For example, the identity [11] or characteristics of a person, such as their gender [12], can be biased by prior exposure to a male or female face or to feminine or masculine traits (e.g. walking style [13,14]) (Figure 2). Adaptation also affects the perception of more abstract attributes of objects (e.g. their three-dimensional viewpoint [15]), scenes (e.g. how panoramic they appear [16]), and materials (e.g. whether a surface looks glossy or matte [17]). Further, even seemingly simpler pattern aftereffects, such as the tilt or movement of an edge, can depend on higher-level attributes [18], such as whether the test and adapt edges appear to belong to the same object [19]. The similar consequences of adaptation on both simple and complex perceptual judgments may, thus, indicate that adaptation operates at all levels of visual coding.
In turn, this suggests that the visual system may draw on fundamentally similar strategies to represent very different forms of information, and that adaptation serves a common function within these representations [20,21]. However, these findings have also challenged basic notions about adaptation.

Figure 2.

Face aftereffects

Adaptation can bias the perceived characteristics of faces in ways very similar to color aftereffects. For example, after looking at a female (or male) face, a neutral, androgynous face appears more masculine (or feminine). Studies exploiting these aftereffects have attempted to establish whether facial dimensions, like gender, are encoded by the relative differences in two broadly tuned mechanisms (upper) or by the distribution of responses across multiple narrowly tuned channels (lower). Adapted from [20].

An important issue is whether high-level aftereffects necessarily reflect response changes at the higher processing levels at which the adapted attribute is explicitly represented. For example, do face aftereffects depend on adaptation in neurons that directly encode the stimulus as a face? Identifying the sites of adaptation is difficult because sensitivity changes that arise early in the visual system will propagate to later stages, and, thus, complex aftereffects could reflect changes inherited from earlier levels [22,23]. This has been illustrated in face adaptation by showing that aftereffects of perceived expression can be induced by exposure to curved [24] or oriented [25] lines, which are not themselves face-like. Moreover, there is increasing evidence that neurons very early in the visual system, in the retinal layers of the eye, can also exhibit a surprising range of plasticity [26]. This can include selective adjustments to patterns like motion and orientation that the cells are not directly tuned for, so that even at this early stage it becomes difficult to disentangle the adaptive behavior of individual neurons from the processing network [27,28]. Consequently, there is no question that complex images engage adaptation at early levels of visual coding. However, a number of approaches have been used to try to isolate additional, more central sites of sensitivity change [20]. These include varying the size [29], position [30], or orientation [31] of the adapt and test stimuli so that they are less likely to be “seen” by the same lower-level mechanisms. They also include testing whether the adaptation transfers between adapt and test stimuli that are more similar conceptually than perceptually. For example, gender aftereffects can be induced in images of faces after adapting to male or female headless bodies [32], and motion aftereffects can occur in stimuli after adapting to static images [33] or even passages of text [34] that convey movement.
Gender aftereffects also depend on the perceptual category of the stimuli rather than simply their physical similarity [35,36]. Finally, distinct high-level aftereffects have also been implicated by showing that the adaptation has different properties from low-level aftereffects, such as in its time course [18] or susceptibility to awareness [37], or that it alters percepts in qualitatively different ways [38]. Overall, such findings suggest that adaptation is probably intrinsic to all levels of sensory coding and ranks among a basic set of “canonical” computations used throughout the brain to govern activity [39]. Because of this, caution is needed in identifying an aftereffect with specific levels of processing.
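One commonly cited candidate for such a canonical computation is divisive normalization, in which each unit's response is divided by the pooled activity of its neighbors. The sketch below is a minimal, generic form of that computation (the function and parameter values are illustrative, not a model from the papers cited above):

```python
# Minimal sketch of divisive normalization: each unit's drive is
# raised to an exponent and divided by a constant plus the summed
# drive of the normalization pool.

def normalized_responses(drives, sigma=1.0, n=2.0):
    powered = [d ** n for d in drives]
    pool = sigma ** n + sum(powered)
    return [p / pool for p in powered]

# A unit with a fixed drive responds less when its neighbors are also
# active, so responses are automatically rescaled to the prevailing
# level of activity.
alone = normalized_responses([4.0, 0.0, 0.0])[0]
crowded = normalized_responses([4.0, 4.0, 4.0])[0]
print(alone > crowded)  # True
```

The key property for adaptation is that the same computation rescales responses at any level of the system, which is consistent with aftereffects appearing throughout the visual stream.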

Selectivity of visual aftereffects

If adaptation is happening throughout the visual stream, can we use aftereffects to decipher the coding of high-level attributes in the same way that adaptation has revealed how information is sampled by early visual mechanisms? A frequent goal of adaptation studies has been to characterize the number and selectivity of adaptable mechanisms representing a stimulus dimension [6]. The assumption is that if adaptation to one stimulus alters sensitivity to another, then both are detected by a common filter or channel, and, thus, the spread of the aftereffects can be used to measure how many different channels might span the dimension and their bandwidths. This approach has been actively extended to high-level perceptual attributes in the study of how the visual system is organized to represent faces [20]. Two standard models of face coding are an exemplar-based code – where each specific identity is represented by a specific neural template or something like the central tendency of the responses across a range of detectors, and a norm-based code – where identity is instead represented in a relative fashion by coding how the individual deviates from an average or prototype face [40] (Figure 2). The exemplar model is reminiscent of orientation or size (spatial frequency) coding, where we might represent the tilt of an edge by the pattern of activity across many channels tuned to different orientations and spatial scales [41]. Norm-based codes are instead similar to color vision, where the dimensions of hue and saturation correspond to how, and how much, the stimulus differs from a neutral gray [42]. In principle, these schemes can be distinguished by the pattern of aftereffects [43]. In a norm-based code, adaptation should readjust the norm (similar to the way that odors or colors fade with exposure), and thus distinctive faces should appear more average the longer we look at them. 
While there are a number of signs of this normalization [11,44-46], recent studies have questioned whether some facial attributes like gender adapt in ways predicted by simple normalization [47,48]. Moreover, interpreting the pattern of aftereffects is, in general, complicated because the selectivity of the adaptation does not necessarily predict the selectivity of the underlying detectors [49,50], and the channels may not adapt independently [51]. A further problem is whether the stimulus itself should be treated as narrow (e.g. a single wavelength) or broad (e.g. natural light sources), for adaptation will tend to normalize the appearance of stimuli with broad spectra, even if the underlying channels are highly selective [52].
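The channel logic behind these selectivity arguments can be made concrete with a toy population code for orientation (all names and parameter values below are hypothetical choices for illustration, not a model from the cited studies). Gaussian-tuned channels detect the stimulus; adaptation reduces the gain of channels near the adapting orientation; and the decoded orientation of a nearby test is repelled away from the adaptor, the classic negative aftereffect:

```python
import math

# Toy population code: orientation channels with Gaussian tuning every
# 10 deg. Adapting to one orientation attenuates nearby channels, which
# biases the decoded orientation of a test stimulus away from the
# adaptor ("repulsion").

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2)

channels = list(range(0, 180, 10))  # preferred orientations in degrees

def decode(test, adaptor=None, tuning_sd=15.0, adapt_strength=0.5):
    # Gains are reduced in proportion to how strongly the adaptor
    # drove each channel.
    gains = {mu: 1.0 for mu in channels}
    if adaptor is not None:
        for mu in channels:
            gains[mu] -= adapt_strength * gauss(mu, adaptor, tuning_sd)
    # Population-vector decoding on the doubled-angle circle, since
    # orientation is periodic over 180 deg.
    sx = sum(gains[mu] * gauss(test, mu, tuning_sd)
             * math.cos(math.radians(2 * mu)) for mu in channels)
    sy = sum(gains[mu] * gauss(test, mu, tuning_sd)
             * math.sin(math.radians(2 * mu)) for mu in channels)
    return (math.degrees(math.atan2(sy, sx)) / 2) % 180

baseline = decode(95)              # no adaptation: decoded near 95 deg
adapted = decode(95, adaptor=90)   # after adapting to 90 deg
print(adapted > baseline)          # test is repelled away from the adaptor
```

Note that, as the text cautions, the spread of the simulated aftereffect over test orientations depends jointly on the tuning width and the adaptation rule, so the aftereffect alone does not uniquely determine the channels' bandwidths.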

Functions of adaptation

Perhaps a deeper question about high-level adaptation is whether it serves the same purpose for visual coding as the sensitivity changes arising at more peripheral stages of the visual system. This is again hard to answer because the functions of adaptation remain surprisingly mysterious [1,2,53]. New ideas about its potential utility continue to emerge [54], and it may, in fact, serve a variety of distinct roles, from enhancing discrimination [55,56] to highlighting novelty [57,58] to maintaining perceptual constancy (i.e. invariant percepts despite varying viewing contexts) [59]. At peripheral stages, most theoretical accounts of adaptation have emphasized efficient coding, or how to get the most information out of neurons with very limited dynamic ranges [51,60-63]. The visual system must operate over a daunting range of light levels, and adaptation is clearly critical to ensure that neurons are not under- or over-exposed [64]. Information and metabolic efficiency can also be maximized by using a predictive code [65], so that neural resources are not wasted on the expected properties of the stimulus and can instead be devoted to signaling only the unexpected. Normalizing the code through adaptation effectively achieves this by neutralizing and, thus, factoring out responses to the current average stimulus. Such principles have proven to be remarkably powerful in predicting the operating characteristics of early sensory neurons [66]. Yet it remains unclear whether the same constraints continue to dominate at later stages or for higher perceptual attributes, where the range of environmental variation may be much less, and where the goal of sensory coding itself might change. For example, a persistent assumption is that adaptation should help us to distinguish small differences in the patterns we are adapted to, in the same way that adapting to the average light level helps us to distinguish brightness differences within the scene.
Yet demonstrations that we can better distinguish among simple patterns, like gratings [67-69] or complex images like faces [70-72], after we adapt to them are meager in comparison to the striking changes that adaptation induces in their appearance. The linear response functions characterized for some dimensions of facial variation, such as eye-height, also seem poorly designed to efficiently represent the presumed unimodal distribution of levels in the stimulus [44,73,74]. Moreover, adaptation may set the operating state of cortical neurons too low to maximize information transmission, perhaps because these neurons are instead designed to reliably detect novel features [75].
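The predictive-coding idea above can be sketched in a toy encoder (the function name, update rate, and stimulus values are hypothetical, chosen only to illustrate the principle): transmit only the deviations from a running estimate of the mean stimulus, so a limited response range is spent on the informative fluctuations rather than on a large but constant background.

```python
# Sketch of adaptation as a predictive code: the encoder maintains a
# leaky running estimate of the mean stimulus and transmits only the
# residual (the "unexpected" part of the signal).

def predictive_encode(signal, rate=0.2):
    mean_est = signal[0]
    residuals = []
    for s in signal:
        residuals.append(s - mean_est)       # signal only the unexpected
        mean_est += rate * (s - mean_est)    # slowly update the prediction
    return residuals

# A large constant background (e.g. the mean light level) is adapted
# away; only the small fluctuations around it are transmitted.
stimulus = [100.0, 100.0, 100.0, 103.0, 100.0, 97.0]
print(predictive_encode(stimulus))  # residuals stay near zero
```

The residuals occupy a far smaller range than the raw signal, which is the sense in which normalizing out the current average stimulus conserves dynamic range and metabolic cost.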

Dynamics of adaptation

The feedback signals that might allow a neuron to know when it should adapt are also poorly understood. Adaptation is clearly a process of adjusting sensitivity relative to some underlying reference level, but what sets this level? Often pattern aftereffects are assumed to be one-sided, so that adaptation reduces, but never enhances, sensitivity. However, this assumes a baseline sensitivity set by no stimulation at all. In reality, we are adapted to the world around us, so that the stimulus could change by becoming stronger or weaker. When observers are adapted to a world with artificially lower contrast, they in fact become more sensitive to contrast [76,77]. Recent work suggests that these baseline states are themselves determined by long-term adaptation to the environment, and that there may be distinct adaptation adjustments operating over multiple timescales [78-81]. Moreover, these may be designed to calibrate for different properties of the stimulus distribution, such as the mean versus maximum level [77]. Varying rates of adaptation may be important for optimally tracking different rates of change in the environment or the organism. For example, it makes sense to adapt quickly but recover rapidly when the change is transient (e.g. muscle fatigue), while adjusting more slowly when the change is more persistent (e.g. development or damage) [82-84]. The dynamics of adaptation may also need to vary in order to ensure that adjustments are driven by actual changes in the signal rather than noise [85]. Finally, when observers are repeatedly exposed to a different context, there is evidence that the dynamics also adjust so that they can “learn” to adapt more quickly to the change [86]. These results raise the intriguing possibility that the visual system might store different operating states for different contexts (e.g. whether our glasses are on or off), allowing rapid switches between them. 
Such findings are not only extending the concept of high-level adaptation to more complex stages of visual analysis but also to more complex forms of sensory calibration.
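The idea of adjustments operating over multiple timescales can be illustrated with a pair of leaky integrators (again a toy sketch; the rates and step stimulus are arbitrary choices, not parameters from the cited work): a fast mechanism corrects transient changes within a few samples, while a slow one absorbs persistent changes gradually and recovers gradually.

```python
# Sketch of multi-timescale adaptation: two leaky integrators track the
# same stimulus with different time constants. The fast tracker handles
# transient changes; the slow tracker follows only persistent ones.

def track(stimulus, rate):
    est, out = 0.0, []
    for s in stimulus:
        est += rate * (s - est)   # leaky integration of the input
        out.append(est)
    return out

# Step change in the environment at t = 10:
stimulus = [0.0] * 10 + [1.0] * 30
fast = track(stimulus, rate=0.5)   # adapted within a few samples
slow = track(stimulus, rate=0.05)  # still adjusting much later
print(fast[14], slow[14])          # fast is near 1.0, slow lags behind
```

A slow component of this kind also makes the system robust to noise, since brief fluctuations barely move the long-term estimate, consistent with the suggestion that adaptation dynamics are tuned to distinguish real changes in the signal from noise [85].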

Summary and future directions

Studies of adaptation continue to reveal surprising and complex forms of plasticity in sensory systems, from peripheral receptors to central mechanisms coding highly abstract properties of the stimulus. The finding that vision adapts in such similar ways to such a diverse array of perceptual attributes suggests that adaptation is an intrinsic feature of visual coding that is manifest throughout the visual stream. However, we still understand little about the dynamics and mechanisms of these adjustments, how they operate over different timescales, and whether they serve common or distinct roles in calibrating our perceptions.