Abstract

Previous work has implicated prefrontal cortices in selecting among and retrieving conceptual information stored elsewhere. However, recent neurophysiological work in monkeys suggests that prefrontal cortex may play a more direct role in representing conceptual information in a flexible context-specific manner. Here, we investigate the nature of visual object representations from perceptual to conceptual levels in an unbiased data-driven manner using a functional magnetic resonance imaging adaptation paradigm with pictures of animals. Throughout much of occipital cortex, activity was highly sensitive to changes in 2D stimulus form, consistent with tuning to form and position within retinotopic coordinates and matching an automated measure of shape similarity. Broad superordinate conceptual information was represented as early as extrastriate and posterior ventral temporal cortex. These regions were not completely invariant to form, suggesting that form similarity remains an important organizational constraint into the temporal cortex. Separate sites within prefrontal cortex exhibited broad and narrow conceptual tuning, with more anterior sites tuned narrowly to close conceptual associates in a manner that was invariant to stimulus form/position and that matched independent similarity ratings of the stimuli. The combination of broad and narrow conceptual tuning within prefrontal cortex may support flexible selection, retrieval, and classification of objects at different levels of categorical abstraction.

Introduction

Human beings routinely encounter and identify a wide range of different meaningful objects. When encountering an object through vision, our brains transform low-level visual information about form, color, texture, and position into higher order perceptual and conceptual information that allows us to act appropriately on the object and relate it to other objects that we know. Studies of visual object identification in humans and monkeys have revealed a progression of brain areas within the ventral and dorsal visual processing pathways (Ungerleider and Mishkin 1982; Ungerleider and Haxby 1994), as well as in the prefrontal cortex (e.g., Miller et al. 2002), that correspond to multiple, qualitatively distinct, and hierarchically organized levels of representation. Much is known about the earliest stages of visual object representation (see Van Essen and Gallant 1994; Riesenhuber and Poggio 2002; Grill-Spector and Malach 2004 for reviews). Early visual cortex is organized retinotopically, with cells analyzing only small portions of the visual field and responding to local edges and contours within an image (e.g., Hubel and Wiesel 1962; DeYoe et al. 1996; Tootell et al. 1998). Later stages of visual processing in extrastriate and occipitotemporal areas involve progressively larger portions of the visual field and exhibit more complicated and object-selective responses (e.g., Boussaoud et al. 1991; Malach et al. 1995; Brewer et al. 2005). At the level of object concepts, we know that temporal and prefrontal cortex play critical roles in processing information about object category, meaning, and contextual relevance (see Martin and Chao 2001; Binder et al. 2009 for review). However, much is still unknown about the nature of visual object representations, particularly at the level of conceptual categories. How early in the hierarchy of visual, temporal, and prefrontal areas are the representations abstracted away from variations in lower level stimulus form and position (DiCarlo and Maunsell 2003; Freedman et al. 2003; Hemond et al. 2007; Schwarzlose et al. 2008; Andresen et al. 2009)? How broadly and categorically tuned are object representations in temporal and prefrontal cortex, and how might such representations support more executive functions such as comparing or selecting between different concepts (e.g., Thompson-Schill et al. 1997; Moss et al. 2005) or retrieving conceptual information from memory (e.g., Wagner et al. 2001)?

In monkeys, Freedman and colleagues have characterized the nature of visual object representations in prefrontal and inferotemporal cortices in a visual category–learning task (see Freedman and Miller 2008 for review). Morphed pictures of cats and dogs were assigned to 2 or more categories, and monkeys were trained to match sequentially presented stimuli that belonged to the same category. Single neurons in lateral prefrontal cortex were found to be selective to the trained categories and were surprisingly insensitive to the stimulus form similarity of sequentially matched pictures (Freedman et al. 2001, 2002). When the monkeys were retrained using new orthogonal category boundaries over the same stimulus set, the prefrontal neurons became selective to the new categories and lost selectivity to the old categories. These results indicate that prefrontal neurons are capable of representing category information through learning in a relatively flexible and abstract manner, reflecting the behavioral relevance of stimulus distinctions separately from stimulus form characteristics. In contrast, responses of inferotemporal neurons recorded in the same monkeys were more strongly determined by stimulus form similarity and were tuned relatively weakly to the trained category boundaries (Freedman et al. 2003). Similar results have been reported recently in humans performing a visual category–learning task with morphed pictures of cars (Jiang et al. 2007; see also Gillebert et al. 2009).

For more natural categories of objects such as faces, animals, tools, and places, preference for object category can be observed in humans as early as occipitotemporal cortex when different categories of stimuli are directly contrasted with one another (see Martin 2007; Binder et al. 2009 for reviews). Neuropsychological studies of patients with selective conceptual deficits also strongly implicate the involvement of the temporal lobes in conceptual representation (e.g., Hart and Gordon 1990; Tranel et al. 1997; Vandenbulcke et al. 2006; Capitani et al. 2009). In contrast, damage to prefrontal areas does not routinely lead to selective conceptual deficits but instead to deficits in executive functions such as planning, problem solving, and task switching, as well as deficits in attentional selection, working memory, and aspects of speech production (e.g., Duncan 1986; Shallice and Burgess 1991).

Neuroimaging studies in humans have nevertheless demonstrated that prefrontal cortex, particularly the left inferior frontal gyrus, is intricately involved in conceptual processing. Inferior frontal brain regions exhibit greater activity when tasks require selection among conceptually related alternatives (e.g., Thompson-Schill et al. 1997) or the strategic retrieval of conceptual knowledge from memory (e.g., Wagner et al. 2001; Gold and Buckner 2002). There is some further indication that separate divisions within inferior frontal cortex may participate in distinct conceptual functions (e.g., Thompson-Schill et al. 1997; Badre et al. 2005; Moss et al. 2005). These proposed functions would appear to require at least some local representation of conceptual information in prefrontal cortex. For example, a brain region that serves to “select” between activity states corresponding to 2 highly related object concepts would necessarily have to represent information that distinguishes these concepts. A few studies have shown that prefrontal cortex is sensitive to the conceptual relationships between pairs of items (e.g., Wheatley et al. 2005; Gold et al. 2006), but much is still unknown about the range and precise nature of these relationships. Does prefrontal cortex represent broad category information, distinctive information about single-object concepts, or some combination of both?

In the current study, we examine the fine-grained nature of visual object representations throughout the human brain, ranging from visual stimulus form up through the level of object concepts within the domain of animals. To do this, we employ functional magnetic resonance imaging (fMRI) adaptation (Grill-Spector and Malach 2001; Naccache and Dehaene 2001), a method inspired by single-neuron recording experiments in monkeys (e.g., Baylis and Rolls 1987; Miller et al. 1991; see Desimone 1996 for review) and used previously to characterize neural tuning curves within single fMRI voxels (e.g., Piazza et al. 2004; Andresen et al. 2009). Based on a paradigm described by Piazza et al. (2004), we repeat single-animal pictures (referred to as “anchor” pictures) several times in a row over a few seconds (see Fig. 1). This is expected to result in a large temporary decrease in neural activity (i.e., adaptation) throughout the visual brain in cells that are responsive to the stimulus. Recovery from adaptation can then be measured within each fMRI voxel to a single “deviant” picture that occurs immediately after the anchor picture and shares a particular conceptual relationship with it. If the neural representations of the anchor and deviant stimuli share many cells within a voxel, as one might expect for identical or highly related objects that have many component parts or features in common, the recovered response should be relatively weak due to persistent adaptation. In contrast, if they share few cells, as one would expect for very different objects, the response should be recovered to nonadapted levels.

Figure 1. fMRI adaptation paradigm and task. One adaptation or “anchor” animal picture was presented several times in a row (3–7 times) at a rate of once per second (picture duration = 200 ms, crosshair duration = 800 ms). Immediately following the repeated anchor picture, a single “deviant” animal picture was presented, drawn from 1 of 5 levels of conceptual distance from the anchor. Adaptation sequences were intermixed randomly with phase-scrambled baseline pictures and pictures of man-made objects. Subjects were instructed to press a response button to pictures of man-made objects but were asked to attend to all pictures.

We manipulated the conceptual relationship between anchor and deviant pictures in a graded manner at 5 levels, ranging from identical in stimulus form and concept (Level 1: identical picture to anchor) to same concept (Level 2: different exemplar picture of the same type of animal with a reversed left/right orientation) to different concepts with varying degrees of similarity (Levels 3–5: high-, medium-, and low-related concepts, see Fig. 2A for examples). This allowed us to measure recovery from adaptation and conceptual tuning along 5 data points in each fMRI voxel, spanning Rosch’s taxonomy of basic- and superordinate-level conceptual categories (Rosch et al. 1976; Rosch 1978). We anticipated that recovery curves could cover the full range of tuning from 2D stimulus form up to conceptual categories. At the perceptual extreme of tuning to stimulus form, recovery might show an “image-selective” pattern, with continued adaptation only to an identical picture and full recovery to any picture with different stimulus form in 2D retinotopic coordinates, including different exemplars of the same type of animal that have been reversed in their left/right orientation. At the conceptual extreme, recovery might show a “category-selective” pattern with continued adaptation to any object within the same superordinate-level category (either “land animals” or “sea creatures”) and full recovery to any object from a different superordinate category. Within the scope of these perceptual (image selective) and conceptual (category selective) extremes, we can make the following predictions about tuning to visual objects within visually responsive cortical regions:

  1. Occipital cortex: Recovery curves in occipital and occipitotemporal areas that are predominantly tuned to stimulus form should be similar to an image-selective pattern, particularly at stages of visual processing that represent only the contralateral hemifield, since Deviant Levels 2–5 will share relatively little stimulus form with the anchor within each hemifield.

  2. Temporal cortex: Areas within the temporal lobes known to be selectively responsive to animals over other conceptual categories and known to be sensitive to the conceptual relationships between items, such as the lateral portion of the fusiform gyrus within the ventral temporal cortex (e.g., Chao et al. 1999; Noppeney et al. 2006; Wiggett et al. 2009), should show continued adaptation to Deviant Levels 1–3 (identical, same concept, and high related) relative to Level 5 (low related). Whether Deviant Level 4 (medium related) is also adapted relative to Level 5 within the lateral fusiform will depend on its breadth of conceptual tuning within the class of animals, and this breadth is currently unknown. If it is more broadly tuned to superordinate categories such as land animals, Level 4 should remain adapted relative to Level 5. If it is more narrowly tuned to close conceptual associates, Levels 4 and 5 may show more comparable levels of recovery. We can also predict at least a partial recovery to Deviant Levels 2–5 relative to Level 1 in the fusiform gyrus, given that repetition effects for different exemplars have previously been found to be weaker than those for identical pictures (e.g., Koutstaal et al. 2001; Vuilleumier et al. 2002; Simons et al. 2003), and a variety of studies have shown residual tuning to stimulus form and position in visually responsive portions of the temporal lobe (e.g., Op de Beeck and Vogels 2000; DiCarlo and Maunsell 2003; Hemond et al. 2007; Schwarzlose et al. 2008; see Kravitz et al. 2008 for review).

  3. Prefrontal cortex: In lateral prefrontal cortex, visual category–learning experiments in monkeys and humans have indicated that neural activity can flexibly represent category-level information with little contribution of stimulus form (e.g., Freedman et al. 2001; Jiang et al. 2007). Conceptual repetition effects over brief durations have also commonly been observed over large portions of lateral prefrontal cortex (e.g., Wheatley et al. 2005; Gold et al. 2006). We expect to observe adaptation to Deviant Levels 1–3 relative to Level 5, perhaps in the inferior frontal gyrus, with a reduced dependence on stimulus form (i.e., similar responses to Levels 1 and 2). As with the fusiform gyrus, the breadth of conceptual tuning to natural categories within prefrontal cortex is unknown. However, if prefrontal cortex is to play a central role in selecting between or retrieving close conceptual associates (Thompson-Schill et al. 1997; Wagner et al. 2001), tuning should be narrow enough to represent the corresponding stimulus distinctions (e.g., Levels 2–4: same concept, and high and medium related). One might also anticipate a graded recovery pattern across adjacent deviant levels, with discriminable responses between each possible combination of levels (e.g., same concept vs. high-related concept). This would permit flexible selection, retrieval, and categorization over a wide range of conceptual levels.

Figure 2. Conceptual distance manipulation, similarity ratings, and stimulus form similarity. (A) Repeated anchor pictures in the fMRI experiment were followed by a single deviant picture sharing 1 of 5 levels of conceptual distance from the anchor. Pictures could be 1) identical (e.g., identical “cow” picture), 2) same concept (e.g., different exemplar picture of a cow), 3) highly related conceptually to the anchor (e.g., another farm animal such as a “donkey”), 4) medium related (e.g., another land animal such as an “elephant”), or 5) low related (e.g., both anchor and deviant are living things, such as cow and “lobster”). Four unique anchor pictures were used throughout the experiment (cow, lion, bass fish, and shark, as shown in the left-most column), and each of the deviant levels, with the exception of Level 1 (identical), employed multiple distinct examples (4 or more) of the relation for each anchor (see Supplementary Material for complete list). (B) A ratings study (n = 7 subjects) confirmed that the 5 levels of conceptual distance used in the fMRI experiment were significantly different from one another. Subjects were shown anchor–deviant stimuli in pairs presented simultaneously on the screen (one left, one right) and asked to rate how similar the objects were on a scale from 1 to 5 (5 = very similar). Rated similarity of the anchor–deviant pairs decreased as a function of increasing conceptual distance (Levels 1–5). (C) Differences in visual stimulus form between anchor–deviant pairs in the 5 deviant conditions. Pairwise distance values (D) were calculated from an automated shape similarity algorithm (Belongie et al. 2002). Distances were small to Deviant Level 1 (identical to anchor) and large to the other deviant conditions, with little variation among them. Distance values have an inverted scale relative to the similarity values shown in (B).

To evaluate these predictions systematically and in an unbiased and data-driven manner, we developed a novel whole-brain analysis method that would allow us to detect the full range of variation in tuning along the transition between image-selective and category-selective tuning.

Materials and Methods

Magnetic Resonance Data Collection Parameters

Eighteen volunteer subjects (8 female) were recruited and paid for their participation in the study. All subjects completed health questionnaires and none reported a history of head injury or other neurological problems. In accordance with the National Institutes of Health (NIH) Institutional Review Board protocols, all subjects read and signed informed consent documents. fMRI data were collected using a GE Signa 3 Tesla whole-body MRI scanner and 8-channel head coil at the NIH Clinical Center NMR Research Facility using standard imaging procedures. Prior to the experimental task, a high-resolution magnetization-prepared rapid gradient-echo anatomical sequence (124 axial slices, 1.2-mm thickness, Field of View (FOV) = 24 cm, acquisition matrix = 256 × 256) was performed. fMRI data were collected using a gradient-echo echo-planar series (Repetition Time = 2000 ms, Echo Time = 30 ms, FOV = 24 cm, acquisition matrix = 64 × 64, in-plane resolution = 3.75 mm). A total of 35 contiguous interleaved axial slices were collected for the functional volume (single-voxel volume = 3.75 × 3.75 × 3.5 mm^3). Each subject had 6 functional series with 210 volumes per run. The first 3 volumes of each run were removed to allow the scanner to reach equilibrium magnetization.

fMRI Experimental Design

During the fMRI experiment, subjects were exposed to adaptation sequences of grayscale animal pictures (each presented foveally and subtending the central 7.8° × 6.2° of visual angle, horizontal × vertical), as well as pictures of man-made objects and phase-scrambled baseline pictures created from the animal images. Man-made objects and scrambled baselines occurred randomly between adaptation sequences, with the objects occurring at an average rate of approximately 1 every 15 s (∼30 total per run) and baselines making up 30% of all images. Subjects were instructed to respond to pictures of man-made objects with a button press but were asked to attend to all images. Four unique adaptation or anchor pictures were used in the adaptation sequences, 2 selected from the superordinate category of land animals (“cow” and “lion”) and 2 from sea creatures (“bass” and “shark”). In each sequence, a single anchor picture was repeated anywhere from 3 to 7 times (uniform distribution) at a rate of 1 picture per second (stimulus duration = 200 ms, fixation screen = 800 ms). After the final presentation of the anchor picture, a single deviant animal picture was presented and shared a conceptual relationship with the anchor at 1 of 5 levels: 1) identical to the anchor (e.g., same picture of a cow), 2) same concept as the anchor (e.g., different exemplar picture of a cow; reversed in left/right orientation and often in part/whole view from anchor, such as the face of a cow vs. face + body), 3) high-related conceptual associate (e.g., another farm animal, such as a “donkey”), 4) medium-related conceptual associate (e.g., another land animal, such as an “elephant”), and 5) low-related conceptual associate (e.g., a sea creature, such as “lobster,” when the anchor is a land animal). This conceptual distance manipulation defined 5 deviant conditions. Each condition was randomly sampled on average approximately 45 times per subject over the course of the experiment (11–12 samples per anchor), with multiple stimuli (at least 4) satisfying each condition for each of the 4 anchor stimuli (see Supplementary Material for a full list of anchor–deviant pairs, as well as a description of the taxonomic design of the conceptual distance manipulation). The order of the deviant conditions and the placement of the baseline trials were determined through the use of the program “optseq2” (http://surfer.nmr.mgh.harvard.edu/optseq/) and then modified to allow variable-length adaptation sequences and the insertion of man-made object stimuli.
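
To make the trial structure concrete, here is a minimal Python sketch of how one adaptation sequence could be assembled under the design just described. The function name and the `deviant_pool` mapping are hypothetical illustrations, not the actual presentation code, which additionally handled stimulus timing, the optseq2-derived condition ordering, and the interleaved baseline and man-made object trials.

```python
import random

ANCHORS = ["cow", "lion", "bass", "shark"]

def make_adaptation_trial(anchor, deviant_level, deviant_pool, rng=random):
    """Return the ordered stimulus list for one adaptation sequence.

    `deviant_pool` is an assumed mapping from (anchor, level) to the
    candidate deviant pictures for that condition; Level 1 (identical)
    simply reuses the anchor picture itself.
    """
    n_repeats = rng.randint(3, 7)   # 3-7 anchor repetitions, uniform distribution
    trial = [anchor] * n_repeats    # each shown for 200 ms + 800 ms fixation
    if deviant_level == 1:
        trial.append(anchor)        # identical deviant
    else:
        trial.append(rng.choice(deviant_pool[(anchor, deviant_level)]))
    return trial
```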

fMRI Data Analysis

MRI data were analyzed using a random-effects approach within the general linear model, as implemented in the AFNI software package (Cox 1996). Preprocessing steps for each subject consisted of slice-time correction, registration to the volume acquired closest to the high-resolution anatomy, spatial smoothing with a 4.5-mm full-width half-maximum Gaussian filter, and mean-based intensity normalization of all volumes by the same factor. Echo-planar and anatomical volumes were transformed into the standardized Talairach and Tournoux (1988) volume and resampled to 1.0 × 1.0 × 1.0 mm^3 isotropic voxels for the purposes of group analyses.
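
The analyses themselves were run in AFNI; purely for illustration, the following numpy/scipy sketch shows two of these steps in isolation, the FWHM-to-sigma conversion behind the 4.5-mm Gaussian smoothing and the normalization of all volumes by a single mean-based factor. The function and argument names are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_normalize(bold, fwhm_mm=4.5, voxel_mm=(3.75, 3.75, 3.5)):
    """Gaussian smoothing plus mean-based intensity normalization of a
    4D BOLD array (x, y, z, time)."""
    # sigma per spatial axis, in voxels: FWHM / (2 * sqrt(2 * ln 2))
    sigma_vox = [fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / v
                 for v in voxel_mm]
    smoothed = gaussian_filter(bold, sigma=sigma_vox + [0.0])  # no temporal blur
    return smoothed / smoothed.mean()  # one scaling factor for all volumes
```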

Time series were modeled using 9 event-related regressors of interest: the first, middle, and last anchor images in the adaptation sequence, along with the 5 levels of deviant stimuli (identical, same concept, and high, medium, and low related) and the man-made objects response condition. Temporal jitter between the onset of the first and last anchors in the adaptation sequence was achieved through varying the number of anchor presentations (3–7 repetitions) rather than varying the duration between individual stimulus events, with anchors between the first and last marked as “middle.” The regressors of interest were then convolved with the standard hemodynamic response function, combined with a set of regressors of no interest (e.g., head motion parameters from the output of the volume registration, regressors representing AFNI’s model of baseline activity), and then compared through multiple regression to a baseline of phase-scrambled versions of the animal pictures used in the experiment. The regression model provided the β weights for the response to each stimulus type in each voxel for each subject. A 2-way mixed-effects analysis of variance (ANOVA) was performed on each voxel in standardized space, with a fixed-effects contrast performed on the 9 stimulus conditions and subjects acting as the random-effect repeated measure. The effect of adaptation was evaluated as a weighted contrast between the regressors for the first and last stimuli in the adaptation sequence (first > last), thresholded at P < 0.025 (1 tailed) and corrected for multiple comparisons (P < 0.05) using a voxel-wise-threshold by cluster-size algorithm (AlphaSim in AFNI: http://afni.nimh.nih.gov/afni/doc/manual/AlphaSim). Similarly, the effect of recovery from adaptation was evaluated as a weighted contrast between the regressor for Deviant Level 1 and those for Deviant Levels 2–5 (Deviant Level 1 < Deviant Levels 2–5), thresholded at P < 0.025 (1 tailed) and corrected for multiple comparisons (P < 0.05) using cluster size. Relatively permissive alpha levels were intentionally chosen for the initial thresholds for these effects to afford a more comprehensive set of possible recovery patterns in task-relevant brain regions. The intersection of the corrected adaptation and recovery masks then served as the conjunction of the 2 effects (see Friston et al. 2005; Nichols et al. 2005 for discussion). These contrasts were statistically independent, as they were carried out on nonoverlapping sets of stimulus events (see Baker et al. 2007; Kriegeskorte et al. 2009 for discussion).
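
As a rough illustration of this modeling step, the sketch below builds one event-related regressor by convolving a stick function with a canonical double-gamma HRF (an assumed stand-in for AFNI's gamma-variate model) and indicates how the first > last adaptation contrast is formed. All names are ours.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # s, matching the acquisition parameters above

def hrf(tr=TR, duration=32.0):
    """Canonical double-gamma HRF sampled at the TR (an assumption;
    AFNI's default hemodynamic model differs in detail)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus undershoot
    return h / h.max()

def event_regressor(onsets_s, n_vols, tr=TR):
    """Stick function at the event onsets, convolved with the HRF."""
    sticks = np.zeros(n_vols)
    sticks[(np.asarray(onsets_s) // tr).astype(int)] = 1.0
    return np.convolve(sticks, hrf(tr))[:n_vols]

# With 9 regressors of interest ordered as [first, middle, last anchor,
# Deviant Levels 1-5, man-made objects], the adaptation effect is the
# weighted contrast first > last:
# c = np.array([1, 0, -1, 0, 0, 0, 0, 0, 0])
# adaptation_effect = c @ betas_of_interest
```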

Analyses of Recovery Patterns

Following the identification of the conjunction between adaptation and recovery effects, voxels from the conjunction mask were submitted to a series of further analyses to characterize the full range of recovery patterns present. To visualize patterns in the 5D space defined by the beta weights for Deviant Levels 1–5, we adopted a mixture of empirical and theory-driven approaches. We defined 2 models at the extremes of the continuum between visual perceptual and conceptual tuning: 1) an image-selective model for which adaptation remained saturated to Deviant Level 1 (identical image to anchor) and was completely recovered to any image with different 2D visual form (Deviant Levels 2–5), as one might expect for brain regions that are organized retinotopically, and 2) a category-selective model for which adaptation remained saturated to any picture from the same superordinate category (either land animals or sea creatures) but was fully recovered for a picture from a different superordinate category. After normalizing the group mean beta weights to the 5 Deviant Levels between 0 and 1 (minimum to maximum) for each voxel, we calculated the proximity to each of these 2 models and tabulated a 3D frequency histogram over voxels in the conjunction mask, where the x-axis corresponded to the sum-squared distance (squared Euclidean distance) from the category-selective model, the y-axis corresponded to the distance from the image-selective model, and the z-axis displayed the frequency count of voxels at each combination of distances (with distances broken into discrete bins of width 0.05). For visualization of the basic distinction between selectivity to visual stimulus form and conceptual information, these 2 distances were combined into a single “relative distance” measure from the category-selective model (D_Category-Selective / [D_Category-Selective + D_Image-Selective]) that could be placed on a color scale and viewed in the brain volume. A relative distance of 0 indicated an exact match to the category-selective model, and a relative distance of 1 indicated an exact match to the image-selective model (see Supplementary Material for full details).
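
A minimal numpy sketch of this computation, with the two models written as normalized recovery vectors over Deviant Levels 1–5 (variable and function names are ours):

```python
import numpy as np

# 0 = still adapted, 1 = full recovery, over Deviant Levels 1-5
IMAGE_SELECTIVE    = np.array([0., 1., 1., 1., 1.])  # recovers to any new image
CATEGORY_SELECTIVE = np.array([0., 0., 0., 0., 1.])  # recovers only across superordinate category

def model_distances(betas):
    """Normalize a voxel's 5 deviant-level betas to [0, 1] and return
    (d_cat, d_img, relative distance from the category-selective model)."""
    b = np.asarray(betas, dtype=float)
    curve = (b - b.min()) / (b.max() - b.min())
    d_cat = np.sum((curve - CATEGORY_SELECTIVE) ** 2)  # sum-squared distance
    d_img = np.sum((curve - IMAGE_SELECTIVE) ** 2)
    return d_cat, d_img, d_cat / (d_cat + d_img)

# The models differ in 3 components, so the distance between them is 3.0:
assert np.sum((IMAGE_SELECTIVE - CATEGORY_SELECTIVE) ** 2) == 3.0

# 3D frequency histogram over conjunction-mask voxels (bin width 0.05):
# d = np.array([model_distances(v)[:2] for v in voxel_betas])
# counts, _, _ = np.histogram2d(d[:, 0], d[:, 1], bins=np.arange(0, 5.05, 0.05))
```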

We then characterized the discriminability of different tuning curve shapes across subjects using a 2-step approach: 1) we identified peaks in the 3D frequency histogram, corresponding to patterns that occurred with a higher likelihood than expected based on a null hypothesis of one average recovery pattern being present and 2) with each of these delimited peaks in model/pattern space defining a large anatomical region of interest (ROI) over the corresponding voxels in the conjunction mask, we ran more standard ROI analyses across subjects to verify that the patterns present at the different peaks were indeed reliably different from one another. This approach was relatively data driven in that we were able to identify the most common and reliable recovery patterns without presupposing what types of tuning should be in the data. We accomplished the first step of identifying peaks in the frequency histogram by applying a novel random data-shuffling method (see Supplementary Material for details). This resulted in 3 large contiguous sets or zones of bins in the 3D histogram for which the actual frequency counts exceeded those expected by the null hypothesis of a single average recovery pattern. We then labeled these as 3 different tuning curve types, based on their observed shapes (image selective, perceptual/conceptual-broad, and conceptual-narrow). Having identified these different types of tuning present in the group mean beta weights, we then evaluated their discriminability across subjects. We constructed ROI brain masks for each tuning type that included all the voxels that contributed to the 3 demarcated zones of bins in the 3D histogram, regardless of where in the brain they occurred. Prior to ROI analysis, we also required that each individual anatomical cluster consist of enough contiguous 1.0-mm^3 resampled voxels (in Talairach coordinates) to make up one original scanning voxel (i.e., >49.22 mm^3). We then performed ROI analyses using these 3 masks. Beta weights from the individual subjects, averaged over the voxels in each of the 3 group-level masks, were submitted to a series of 3-way mixed-effects ANOVAs, with tuning type as the first fixed-effects factor, deviant level as the second fixed-effects factor, and subject as the random-effects repeated measure. The Tuning Type × Deviant Level interactions were assessed between each pair of tuning types with separate ANOVAs (image selective vs. perceptual/conceptual-broad, image selective vs. conceptual-narrow, and perceptual/conceptual-broad vs. conceptual-narrow). Post hoc paired comparisons (paired t-tests) were then conducted for each type of tuning curve to determine the statistical significance of differences between individual deviant level responses.
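
The exact shuffling scheme is given in the Supplementary Material; the sketch below shows one plausible form of such a null, under the assumption that variation around the mean curve is exchangeable across deviant levels. It keeps the grand-mean recovery curve, permutes each voxel's residuals, and re-tabulates the distance histogram on every iteration.

```python
import numpy as np

def distance_pair(curve):
    """Squared Euclidean distances of a normalized recovery curve from
    the category-selective and image-selective models (see above)."""
    c = (curve - curve.min()) / (curve.max() - curve.min())
    d_cat = np.sum((c - np.array([0., 0., 0., 0., 1.])) ** 2)
    d_img = np.sum((c - np.array([0., 1., 1., 1., 1.])) ** 2)
    return d_cat, d_img

def distance_histogram(curves, bin_width=0.05):
    d = np.array([distance_pair(c) for c in curves])
    edges = np.arange(0.0, 5.0 + bin_width, bin_width)
    counts, _, _ = np.histogram2d(d[:, 0], d[:, 1], bins=edges)
    return counts

def null_histograms(voxel_curves, n_iter=1000, seed=0):
    """Histograms expected if only one average recovery pattern were
    present, with random variation around it."""
    rng = np.random.default_rng(seed)
    mean_curve = voxel_curves.mean(axis=0)
    residuals = voxel_curves - mean_curve
    return np.stack([
        distance_histogram(mean_curve + rng.permuted(residuals, axis=1))
        for _ in range(n_iter)
    ])

# Bins whose observed counts exceed, say, the 95th percentile of the null
# counts mark recovery patterns occurring more often than chance.
```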

Similarity Ratings Study

Subjects who did not participate in the fMRI experiment (n = 7) were asked to rate the similarity (1 = low, 5 = high) of pairs of objects presented on a computer screen simultaneously. The object pictures used were identical to those used in the fMRI study. Subjects were not guided explicitly to rate conceptual or visual similarity but rather were given 2 extreme end point examples. They were told to rate 2 very similar objects (such as 2 different “dog” pictures) as highly similar (e.g., with a rating of 5). They were asked to rate 2 very different objects (e.g., a dog vs. an octopus) as very different (e.g., with a rating of 1). They were informed that there were many possible shades of similarity in between these extremes, that there was no “correct” answer, and that they should just use their instinct and best judgment as to how similar the 2 objects were. Each subject was presented in a random order with all of the anchor–deviant stimulus pairs encountered in the fMRI experiment (with the same frequency of presentation). The 2 stimuli in each pair were always presented on the left and right halves of the screen, with the location of the anchor versus deviant assigned randomly from trial to trial.

Visual Form Similarity Measures

We applied an automated shape similarity algorithm (Belongie et al. 2002) to the anchor–deviant stimulus pairs in our experiment to analyze the possible role of shape similarity in determining neural tuning curves. The algorithm combines 3 separate distance measures into a weighted average. The resulting composite distance measure (D) is robust to size differences and relatively robust to in-plane rotation and small-to-moderate discrepancies in perspective (see Supplementary Material for discussion). In brief, the 2 images to be compared are submitted to standard edge detection (Sobel method in the Matlab Image Processing Toolbox, http://www.mathworks.com/), and the contours are sampled with a discrete set of points (N = 200 for our analyses). The distance measure D reflects the difficulty of spatially transforming or “warping” one point-based image into the other, as well as the extent of agreement after warping of the interpoint relationships (referred to as “shape context”).
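
The sketch below illustrates only the front end of this pipeline, edge detection and contour-point sampling, in Python rather than Matlab; the quantile threshold is a guess, and the subsequent matching, warping, and distance computation follow Belongie et al. (2002).

```python
import numpy as np
from scipy import ndimage

def sample_contour_points(image, n_points=200, edge_quantile=0.90, seed=0):
    """Sobel edge detection followed by sampling of N contour points."""
    img = image.astype(float)
    magnitude = np.hypot(ndimage.sobel(img, axis=0),
                         ndimage.sobel(img, axis=1))
    coords = np.argwhere(magnitude > np.quantile(magnitude, edge_quantile))
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=n_points, replace=False)
    return coords[idx]  # (row, col) positions of the sampled points
```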

Results

We first confirmed in a behavioral rating study (n = 7 subjects) that our manipulation of conceptual relatedness between anchor and deviant pictures was valid (Fig. 2B). Subjects did indeed judge anchor–deviant pairs with a larger conceptual distance (Deviant Levels 1–5) to be less similar (F(4,24) = 252.69; P < 0.0005). Each comparison between adjacent deviant levels was also found to be highly significant (paired t-tests, P < 0.002 for all comparisons). This was expected, given the taxonomic structure used to construct anchor–deviant pairs (see Supplementary Material for a full description of this taxonomy and stimuli).

We next quantified the similarity of the anchor and deviant pictures used in our experiment with respect to their visual stimulus form (Belongie et al. 2002; see Mahon et al. 2007 for a similar application). The calculated distances (D) between the anchor and deviant images used in our experiment are shown in Figure 2C for the different deviant conditions (Levels 1–5). Unlike the behavioral similarity ratings shown in Figure 2B, the stimulus form distances show an abrupt change between Deviant Levels 1 and 2 (2-sample t-test, P < 0.0003), with little difference between the subsequent deviant levels. While the shape similarity algorithm that we employ may fail to capture all the relevant aspects of 2D stimulus form, the current pattern nevertheless indicates that stimulus form is unlikely to explain large sources of variation observed among Deviant Levels 2–5 in the fMRI study.

During fMRI, a different group of subjects than that involved in the behavioral ratings (n = 18) viewed adaptation sequences of animal pictures, with intervening baseline images and pictures of man-made objects. Subjects were instructed to attend to each picture while performing a simple categorization task (i.e., man-made or not?), giving a button press to pictures of man-made objects and no response to other pictures. This task ensured that subjects would be attending to the adaptation sequences of animal pictures, but neural activity to the different deviant conditions would not be confounded by differences in response latency or accuracy since responses were only given to man-made objects (see also Henson et al. 2000). After linearly transforming each subject’s beta weights for the adaptation and deviant stimuli into Talairach coordinates, we first performed whole-brain analyses on the group data to find brain voxels that were relevant to the current task. Previous studies have assumed that significant variation observed among deviant conditions (i.e., recovery) necessarily indicates that adaptation effects have occurred in the same voxels. A virtue of our experimental design is that it allows us to evaluate this assumption explicitly. We performed a conjunction analysis to find voxels that showed both adaptation (first > last in the anchor sequence, P < 0.025, 1 tailed, corrected for cluster size at P < 0.05) and recovery from adaptation (Deviant Level 1 < average of Levels 2–5, P < 0.025, 1 tailed, corrected for cluster size at P < 0.05) (Fig. 3; see Nichols et al. 2005). Importantly, these 2 effects were calculated on separate stimulus events, and therefore, the tests were statistically independent (e.g., Baker et al. 2007; Kriegeskorte et al. 2009). This resulted in a large area of overlap between brain regions showing adaptation and some form of recovery, including large bilateral extents in the dorsal and ventral visual streams, thalamus, and frontal areas (green mask in Fig. 3). Some voxels appeared to show only 1 of the 2 effects. However, these voxels qualitatively showed the expected patterns of adaptation and recovery (for further analyses, see Supplementary Material). Subsequent analyses therefore excluded these voxels, for which adaptation or recovery was less reliable across subjects.
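
In schematic form (our own naming), the conjunction reduces to intersecting two independently thresholded statistical maps; the cluster-size correction (AlphaSim) applied in the actual analysis is omitted here.

```python
import numpy as np
from scipy.stats import t as t_dist

def conjunction_mask(t_adapt, t_recover, df, alpha=0.025):
    """Threshold each statistically independent effect at a one-tailed
    alpha and intersect the resulting boolean maps."""
    t_crit = t_dist.ppf(1.0 - alpha, df)
    return (t_adapt > t_crit) & (t_recover > t_crit)
```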

Figure 3. Conjunction of adaptation and recovery effects. Voxels in the group analysis showing only significant effects of adaptation (first anchor > last anchor) are shown in yellow superimposed on the brain of an individual subject who participated in the study. Voxels showing only significant effects of recovery from adaptation (Deviant Level 1 < average of Deviant Levels 2–5) are shown in blue. Voxels showing both effects of adaptation and recovery (conjunction) are shown in green and are used in all subsequent analyses. Coronal slices are shown in equal steps of 8 mm from a y-coordinate of −80 in occipital cortex through +24 in frontal cortex (Talairach).

Measuring Tuning Preferences to Conceptual Category versus Stimulus Form

Having identified voxels that show both adaptation and recovery from adaptation, we next turned to the task of characterizing the different recovery patterns. It is difficult to visualize patterns that occur in 5 dimensions (defined by the 5 beta weights in each voxel to the different deviant level conditions). We developed an economical approach to visualizing the pattern shapes in terms of their proximity to 2 different models or templates of interest, one perceptual, having to do with visual stimulus form, and one conceptual (Fig. 4A). At the perceptual extreme, the image-selective model was defined by continued adaptation (value of 0) to the identical picture (Deviant Level 1) and full recovery (value of 1) to any picture with different 2D stimulus form (Deviant Levels 2–5). At the conceptual extreme, the category-selective model was defined by continued adaptation to any picture within the same superordinate category (e.g., land animals; Deviant Levels 1–4) with full recovery to a picture from a different superordinate category (e.g., sea creatures; Deviant Level 5). For each voxel in the conjunction mask, we first normalized the group-averaged beta weights to the 5 deviant conditions between 0 and 1, while maintaining the dynamic range of the betas, to place the recovery curves on the same numerical scale as the 2 models. We then calculated the sum-squared distance (squared Euclidean distance) between the voxel pattern and each of the 2 models. Voxel patterns that are identical to either the image-selective or the category-selective model will have a distance of 0 from that model and a distance of 3.0 from the opposite model (equal to the distance between the 2 models; for full details, see Supplementary Material). We then took these distances and tabulated them in a 3D frequency histogram over voxels (Fig. 4A), with the distance from the category-selective model on the x-axis, the distance from the image-selective model on the y-axis, and the frequency count of voxels in the conjunction mask on the z-axis. Each unique recovery curve pattern (i.e., a particular combination of the 5 beta weight values) has a corresponding location within this histogram at a particular x- and y-coordinate. This approach effectively projects the recovery curves along the dimension of most interest theoretically—the transition from visual perceptual to conceptual (from right to left in the xy plane of the histogram), and it allows one to examine easily which pattern shapes are most common. The frequency histogram makes clear that the majority of voxels in the conjunction mask have recovery curves that are close to the image-selective model, with 2 separate branches of pattern shapes that spread out toward but do not reach the category-selective model (moving to the left; see Fig. 5A for a top-down view). It is also clear from this histogram that recovery curves are not simply of one sort or another; they exhibit a gradual transition between visual perceptual and conceptual, with many intermediate shapes in between. The anatomical locations of these different recovery curves can be viewed succinctly by simplifying these 2 distance measures into a single value representing the relative distance from the category-selective model and placing that relative distance on a color scale (Fig. 4B). A relative distance of 0.0 from the category-selective model corresponds to an exact match to this model, and a relative distance of 1.0 corresponds to an exact match to the image-selective model. 
In Figure 4C, blue colors are assigned to the most perceptual curves and red colors to the most conceptual curves within the observed range (a relative distance of ∼0.5 from the category-selective model). Recovery throughout much of occipital cortex and extending into the temporal and parietal lobes is similar to the image-selective model. More conceptual patterns (red) are observed in the fusiform gyrus (Fig. 4C: f, g), parietal cortex along the intraparietal sulcus (IPS) (Fig. 4C: b), and frontal areas, extending from the precentral gyrus (Fig. 4C: d) down into the inferior frontal gyrus and anterior insula (Fig. 4C: e, h). Patterns intermediate between these extremes were observed prominently in right occipital cortex, including the middle occipital gyrus (shown in green, Fig. 4C: a, h).

Figure 4. Model-based approach to analyzing the shapes of recovery curves. (A) For voxels showing both significant adaptation and recovery, the recovery patterns or “curves” defined over the 5 deviant levels were examined by first normalizing the corresponding mean beta weights for each voxel in the group analysis between 0 and 1. The shape of each normalized curve was then compared through a simple distance metric (sum-squared distance) to 2 different models: 1) an “image-selective” model (shown in blue) and 2) a “category-selective” model (shown in red). The full range and prevalence of different types of tuning or selectivity between perceptual and conceptual extremes could then be examined by constructing a 3D frequency histogram of the voxel patterns, with the x- and y-axes representing the distances from each model and the z-axis representing the number of voxels in the conjunction mask that possessed the same curve shape. Voxel number in this histogram is also conveyed by color (see color bar). The green line in the xy plane of the histogram represents the distance between the image-selective and category-selective models (a distance of 3.0), and voxel patterns lying inside this line have shapes intermediate to the models. (B) To more succinctly represent the pattern shapes observed in (A), the full range of tuning from image-selective model to category-selective model was compressed into a single “relative distance” measure from the category-selective model, which in turn could be viewed in the brain with a color scale (red to blue). This relative distance ranges from a minimum of 0.0 (equal to the category-selective model) to a maximum of 1.0 (equal to the image-selective model). The histogram above the color scale is the frequency histogram of the relative distance measure across voxels in the conjunction mask. The most conceptual of the curve shapes have relative distances of approximately 0.5, and the saturation of the color scale (red) below 0.5 is chosen to reflect this. (C) The extent of visual perceptual versus conceptual tuning of voxels in the group analysis can then be viewed in the brain volume using the color scale in (B). As in (B), blue colors indicate similarity of recovery curve shape to the image-selective model, red colors indicate more conceptual tuning, and green colors indicate tuning curves with intermediate shapes between the extremes. Coronal and axial slices a–h, shown with red lines in the anatomical reference, correspond to the following Talairach coordinates—a: y = −79; b: y = −60; c: y = −49; d: y = +1; e: y = +19; f: z = −16; g: z = −10; h: z = +13; LH = left hemisphere; RH = right hemisphere.

Figure 5. Distinguishing different types of recovery curves. (A) The frequency histogram of recovery curve shapes as in Fig. 4A, along with an enlarged top-down view of the same data in which the number of voxels with a particular curve shape is conveyed entirely through the use of color (see color bar). Different tuning curve “types” were identified by means of a random data-shuffling technique in which the observed voxel counts were compared with the values expected if only a single average recovery curve were truly present (around which variation was random; see Supplementary Material for details). Three large contiguous zones (shown in red, orange, and blue) of histogram bins were identified through this shuffling technique, defining 3 different types of tuning. Points in the pattern space corresponding to the behavioral similarity ratings (Fig. 2B) (green X), as well as to the calculated stimulus form distances (Fig. 2C) (magenta square), are shown for reference. (B) Tuning curves for the 3 types of tuning shown in (A) are presented here in terms of mean normalized beta weights for each of the corresponding anatomical ROIs (averaged over voxels and subjects). Curve shapes constructed for the behavioral similarity ratings and the stimulus form distances are shown for reference in dashed green and magenta lines, respectively. The tuning type closest to the image-selective model, labeled “Image-Selective,” is shown in blue. The other 2 tuning types are both conceptual in nature yet are qualitatively distinct in curve shape. The type that we label “Perceptual/Conceptual-Broad” (in orange) shows sensitivity to both visual stimulus form and broad superordinate category of the anchor, whereas the type that we label “Conceptual-Narrow” (in red) shows no sensitivity to stimulus form (Deviant Levels 1 vs. 2) and sharper recovery between Deviant Levels 2 and 4.

Distinguishing Different Types of Conceptual Tuning

We next devised a method to assess the reliability and discriminability of different tuning curve shapes across subjects. While the 3D and 2D histograms in Figure 4A,B serve as useful and comprehensive descriptive statistics for the range of different recovery curves that are present in the data, they are based only on the mean beta weights when averaged across the group of subjects. They do not convey the extent of variability that exists across subjects for particular curve shapes in particular voxels. Our approach consisted of 2 steps: 1) find the most common types of tuning curve in the mean beta weights and 2) use those curve types in pattern space to define separate anatomical ROIs, on which more standard ROI analyses can be performed across subjects. For Step 1, we first identified the most commonly occurring tuning curve shapes in the group mean beta weights. This corresponded roughly to finding the main peaks in the 3D frequency histogram in Figure 4A. Using the 3D histogram was preferable to the 2D histogram (relative distance) because it preserved as much information about the curve shapes as possible. Rather than picking these peaks arbitrarily, we identified them empirically through the use of a random data-shuffling technique (see Supplementary Material for full details). This technique derived an estimate of the frequency count in each bin of the 3D histogram that should be observed if the only pattern truly present in the conjunction mask was the mean pattern, averaged across all the voxels in the conjunction mask. Bins in the actual data histogram that significantly exceeded this shuffled estimate were taken to be interesting departures from the mean pattern. Three large contiguous zones of bins in the frequency histogram were identified through this method (see Fig. 5A), one corresponding to the main peak near the image-selective model (outlined in blue) and 2 others corresponding to the end points of the 2 more conceptual branches of the histogram (outlined in orange and red). For Step 2 of the method, the 3 zones of bins in pattern space (blue, orange, and red) were used to define 3 large ROIs in the brain volume by finding the voxels that contributed to the frequency counts in those zones (shown in Fig. 6). These ROIs were selected solely on the basis of variation in pattern shapes for the group-averaged beta weights within the conjunction mask. They were not defined using more standard voxel-wise statistical comparisons (aside from the initial conjunction analysis that involved cluster-size corrections), and there was no requirement that all the relevant voxels be contiguous in the brain volume.
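
For the second step, a repeated-measures ANOVA of the per-subject ROI averages can be written compactly with statsmodels; this is an analog of the mixed-effects ANOVAs described in Materials and Methods, not the AFNI implementation actually used, and `rows` stands in for the assumed per-subject data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# `rows` (assumed): one tuple per subject x tuning type x deviant level,
# holding that subject's mean beta over the voxels of the group-level mask.
df = pd.DataFrame(rows, columns=["subject", "tuning_type", "deviant_level", "beta"])

# The tuning_type x deviant_level interaction indexes whether the curve
# shapes differ reliably across subjects.
res = AnovaRM(df, depvar="beta", subject="subject",
              within=["tuning_type", "deviant_level"]).fit()
print(res)
```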

Figure 6. Locations in the brain of ROIs defined by curve shape. The 3 types of tuning shown in Figure 5 (image selective, perceptual/conceptual-broad, conceptual-narrow) are shown here in the brain volume using the same color scheme (blue, orange, and red, respectively). In agreement with Figure 4, the image-selective ROI resides largely in occipitotemporal brain regions, as well as in the parietal cortex, precentral gyrus, and left dorsomedial thalamus. The perceptual/conceptual-broad ROI was found in bilateral fusiform, parietal, and prefrontal cortices near the inferior frontal junction, as well as in the inferior and middle occipital gyri and dorsomedial thalamus on the left. The conceptual-narrow ROI was found only in the prefrontal cortex (inferior frontal gyrus and insula bilaterally, anterior cingulate) and was isolated anatomically from the other ROI types. Coronal and axial slices a–h, shown with red lines in the anatomical reference, are similar to those in Figure 4C—a: y = −74; b: y = −60; c: y = −50; d: y = +1; e: y = +19; f: z = −16; g: z = −6; h: z = +13; LH = left hemisphere; RH = right hemisphere.

The average tuning curves for the 3 large ROIs are shown in Figure 5B, averaged across subjects and across voxels. Repeated measures ANOVAs confirmed that the different curve shapes detected in the group-averaged data were indeed reliable across subjects, in the sense that each curve could be reliably discriminated from the other 2: each ROI × Deviant Level interaction was highly significant (all F(4,68) > 5.9, P < 4.0 × 10⁻⁴). We also performed a series of post hoc comparisons on the deviant levels (paired t-tests) for each curve to characterize its precise shape. The blue tuning curve corresponds to the blue zone outlined in Figure 5A, which lies close to the image-selective model, and accordingly its shape is quite similar to that model: Deviant Level 1 (identical) remained significantly adapted relative to Deviant Levels 2–5 (t(17) > 4.84, P < 7.6 × 10⁻⁵ [1 tailed], for all), with no differences among the other levels (P > 0.2 for all). We have therefore labeled this curve type "image selective." In contrast, the orange and red tuning curves represent 2 qualitatively different types of conceptual tuning. The orange curve shows preserved adaptation at Levels 1–4 relative to Level 5 (L1 < L5: P = 2.5 × 10⁻⁶ [1 tailed]; L2 < L5: P = 0.006; L3 < L5: P = 0.004; L4 < L5: P = 0.033), yet partial recovery to any image different from the anchor (L1 < L2–5: P < 0.004 for all) and no significant differences among the responses to Levels 2–4. This pattern represents a mixture of tuning to visual stimulus form and to superordinate conceptual category (land animal vs. sea creature), and accordingly we have labeled it "perceptual/conceptual-broad." Of the 3 tuning curve types, the perceptual/conceptual-broad curves extend across the largest number of bins of the 3D frequency histogram, implying a broader range of curve shapes. We therefore evaluated the variability of these curve shapes across voxels and disparate anatomical locations; these analyses confirmed that the curve shapes were relatively homogeneous across locations (see Supplementary Material). The red curve shows a qualitatively different pattern of conceptual tuning from the orange curve. Adaptation was significantly preserved at Levels 1–3 relative to Level 5 (L1 < L5: P = 0.0014; L2 < L5: P = 0.0067; L3 < L5: P = 0.023), with similar responses to Levels 1 and 2 and to Levels 4 and 5. This pattern shows both invariance to changes in visual stimulus form and narrower tuning to conceptual associates than that exhibited by the orange curve, with partial adaptation to highly related conceptual associates and full recovery to more distant relationships. Accordingly, we have labeled the red curve "conceptual-narrow." For comparison, we have also plotted the tuning curves expected from the anchor–deviant similarity ratings (after inverting and rescaling; dashed green curve) and from the stimulus form distances calculated by the automated shape similarity algorithm (dashed magenta curve; see Fig. 2B,C). The shape similarity algorithm produces a tuning curve quite comparable with the image-selective curve (see also the magenta marker in Fig. 5A), suggesting that stimulus form is indeed what drives responses in the image-selective voxels and is less responsible for driving responses in the more concept-selective voxels. In contrast, the anchor–deviant similarity ratings produce a curve that is most similar to the conceptual-narrow curve (see also the green marker in Fig. 5A), establishing a basic alignment between the similarity ratings and the conceptual tuning curves.
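
The inversion and rescaling of the behavioral ratings can be made concrete with a short Python sketch. This is our illustration rather than the analysis code, and the numbers are hypothetical: mean anchor–deviant similarity ratings are flipped (high similarity should predict strong residual adaptation, that is, a low response) and stretched onto the range of an observed ROI curve, after which a simple correlation indexes the agreement in shape.

    import numpy as np

    def predicted_curve(ratings, roi_curve):
        """Invert mean similarity ratings and rescale them to the range of an
        ROI's observed recovery curve across the 5 deviant levels."""
        ratings = np.asarray(ratings, dtype=float)
        roi_curve = np.asarray(roi_curve, dtype=float)
        inverted = ratings.max() - ratings                     # similar -> low response
        unit = (inverted - inverted.min()) / np.ptp(inverted)  # scale to [0, 1]
        return roi_curve.min() + unit * np.ptp(roi_curve)      # match the ROI range

    # Hypothetical values for illustration: ratings fall off with deviant level,
    # and the conceptual-narrow curve recovers with conceptual distance.
    ratings = [6.8, 5.9, 4.1, 2.7, 1.3]          # mean similarity, Levels 1-5
    red_curve = [0.21, 0.26, 0.38, 0.52, 0.55]   # observed beta weights

    pred = predicted_curve(ratings, red_curve)
    print(np.round(pred, 3), np.corrcoef(pred, red_curve)[0, 1])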

The ROI analyses above demonstrate that at least 3 distinct tuning patterns exist in the data. However, the experimental predictions articulated earlier involve not only the varieties of tuning curves but also where those curves reside anatomically. Figure 6 shows the locations of the 3 corresponding ROIs in the brain volume using the same color scheme (blue, orange, and red; see Table 1 for a full description). As in Figure 4C, image-selective curves were predominantly localized to occipitotemporal brain regions (Fig. 6: a–c, f–h), although a few clusters of voxels were also found in the parietal cortex (Fig. 6: c), precentral gyrus (Fig. 6: d), and dorsomedial thalamus (Fig. 6: h). Perceptual/conceptual-broad curves were observed in the left inferior and middle occipital gyri (e.g., Fig. 6: a, g), lateral portions of the fusiform gyrus (Fig. 6: f), parietal cortex along the IPS (Fig. 6: b), dorsomedial thalamus (Fig. 6: h), and prefrontal cortex, extending anteriorly from the precentral gyrus along the medial wall of the inferior frontal sulcus (Fig. 6: d), with a separate cluster located in the right supplementary motor area (see Table 1). Conceptual-narrow curves, in contrast, were located anterior to the other tuning curve types (Fig. 6, sagittal slices), with bilateral activations in the inferior frontal gyrus and anterior insula, as well as in the anterior cingulate (Fig. 6: e). The cluster corrections applied during the conjunction analysis guaranteed that the likelihood of observing an anatomical cluster of any size due to random noise was controlled at an alpha level of 0.05. However, more precise estimates of the chance probabilities by ROI type and cluster size were possible through Monte Carlo simulation, and these values are reported in Table 1 for each anatomical cluster (see Supplementary Material for details).
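
To give a rough sense of how such cluster-level chance probabilities can be estimated, the sketch below simulates independent Gaussian noise volumes, thresholds them, and records the largest surviving cluster on each iteration; an observed cluster size is then referred to the resulting null distribution. This is a minimal stand-in under simplifying assumptions (a single voxel-wise threshold, no spatial smoothing or masking, and arbitrary volume dimensions); the actual simulations are described in the Supplementary Material.

    import numpy as np
    from scipy import ndimage

    def cluster_size_null(shape=(40, 48, 40), z_thresh=2.3, n_iter=1000, seed=0):
        """Null distribution of the maximum cluster size (in voxels) from
        thresholded, spatially independent Gaussian noise volumes."""
        rng = np.random.default_rng(seed)
        max_sizes = np.empty(n_iter, dtype=int)
        for i in range(n_iter):
            noise = rng.standard_normal(shape)
            labels, n_clusters = ndimage.label(noise > z_thresh)  # 3D connected components
            counts = np.bincount(labels.ravel())                  # counts[0] is background
            max_sizes[i] = counts[1:].max() if n_clusters else 0
        return max_sizes

    null = cluster_size_null()
    observed = 27                         # hypothetical cluster size, in voxels
    print((null >= observed).mean())      # chance probability of a cluster this large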

Table 1

Individual anatomical clusters for each ROI type

Brain region                              Talairach (x, y, z)   Volume (mm³)   Cluster-level P (<)   Voxel-level max F
Image-selective ROI
    L occipitotemporal                    −36, −50, −12            13343          0.0001                19.16
    R occipitotemporal                     38, −60, −8              2969          0.0001                16.84
    R middle occipital gyrus               30, −73, 29               323          0.026                  4.87
                                           43, −81, 6                127          0.05                   5.87
                                           40, −83, 19                56          0.05                   5.53
    R fusiform gyrus                       44, −45, −15               80          0.05                   8.36
    L superior parietal                   −24, −51, 49                60          0.05                   2.21
    R superior parietal                    34, −57, 57                80          0.05                   3.34
    L dorsomedial thalamus                 −9, −21, 16               106          0.05                   2.45
    L precentral gyrus                    −46, 2, 43                  57          0.05                   3.25
    R precentral gyrus                     44, 2, 34                 243          0.04                   5.77
Perceptual/conceptual-broad ROI
    L inferior occipital gyrus            −50, −71, −4               161          0.05                   7.30
                                          −44, −76, −3               103          0.05                   7.73
    L middle occipital gyrus              −34, −87, 24               218          0.048                  5.29
                                          −36, −62, 6                 93          0.05                   5.22
    L fusiform gyrus                      −19, −57, −12             1095          0.0005                 6.23
                                          −41, −49, −12              162          0.05                   8.57
    R fusiform gyrus                       41, −46, −15              384          0.029                  9.45
    R inferior temporal gyrus              45, −41, −10               49          0.05                   7.49
    L intraparietal sulcus                −27, −66, 34              1346          0.0003                 8.65
    R intraparietal sulcus                 31, −64, 39               434          0.022                  6.73
    L dorsomedial thalamus                 −6, −15, 14                88          0.05                   2.23
    L precentral/inferior frontal gyri    −35, 2, 27                 592          0.009                  5.98
    R precentral/inferior frontal gyri     49, 2, 33                 987          0.0011                 7.86
    R SMA/cingulate gyrus                   5, 8, 50                 125          0.05                   5.40
                                            7, 13, 44                 61          0.05                   4.89
Conceptual-narrow ROI
    L inferior frontal/insular gyri       −28, 17, 15                441          0.0002                 7.12
                                          −29, 5, 32                 147          0.0032                 4.72
                                          −36, 19, 26                 51          0.03                   3.27
    R inferior frontal/insular gyri        34, 20, 7                 188          0.002                  7.98
                                           40, 1, 31                  88          0.012                  5.86
    L SMA/cingulate gyrus                 −11, 7, 42                 102          0.0085                 5.20
                                           −1, 11, 47                 89          0.012                  5.41

Note: All individual anatomical clusters within each of the 3 ROIs that were larger than 49.22 mm³ (the volume of an original scanning voxel) are included. Rows without a region label are additional peaks within the region named above them. Peak statistical values for each cluster were found by calculating a voxel-wise main effect of deviant level using a repeated measures ANOVA across subjects, and the maximum F value (with 4,68 degrees of freedom) is reported along with the corresponding Talairach coordinates. These voxel-wise statistics served only descriptive purposes and played no role in selecting the clusters. Cluster-level P values were determined through the Monte Carlo simulations described in the Supplementary Material. SMA = supplementary motor area; L = left; R = right.

Discussion

In an fMRI adaptation experiment using pictures of animals and a simple categorization task, we have examined the fine-grained nature of neural tuning to object concepts in the human brain. Short-term adaptation and recovery from adaptation were estimated separately, with a large network of visually responsive brain areas in occipitotemporal, parietal, and prefrontal cortices showing both effects. Tuning curves in these areas spanned a continuous range of different shapes from visual perceptual to conceptual. We identified 3 main types of tuning, 1 selective primarily to 2D visual stimulus form (image selective) and 2 conceptual types—one selective to a mixture of stimulus form and superordinate conceptual category (perceptual/conceptual-broad) and one selective to identical concepts and close conceptual associates (conceptual-narrow). These types were discovered empirically through a novel data-shuffling method, and they corresponded to the main end points of the overall range of tuning curve shapes. The different curve types cannot be attributed easily to alternative factors such as differential attention to the stimulus conditions or item-specific effects; all 3 types of tuning occur under the same anchor–deviant manipulation of conceptual distance, and therefore, attentional processing or item effects should affect each condition in the same manner. Similarly, the different tuning patterns cannot be explained by data smoothing or averaging, as none of the patterns is expressible as a weighted average of the other 2.

Evaluation of Experimental Predictions

Prediction 1: Occipital Cortex

As predicted, tuning curves to visual objects showed sensitivity to stimulus form throughout occipital and occipitotemporal cortical areas. Strong recovery from adaptation was expected even for the same concept condition (Deviant Level 2), as this condition consisted of different exemplar pictures (e.g., a different cow picture) that always varied from anchors in left/right orientation and often in part/whole view (e.g., face of a cow vs. face + body). Visual areas with small receptive fields and/or preferences for stimuli in the contralateral visual hemifield are effectively exposed to entirely different stimuli for anchors and deviants in these circumstances. Indeed, most of the voxels in these visual areas showed recovery curves that were similar to the image-selective model that was used to evaluate curve shapes, indicating that the neural representations of the anchor and deviant stimuli (Levels 2–5) shared few cells (Figs 4 and 6). An automated shape similarity algorithm (Belongie et al. 2002) further confirmed that this recovery pattern follows what would be expected based on the similarity of visual stimulus form between anchors and deviants (see Fig. 5). This is not to imply that tuning to stimulus form does not vary in complexity throughout different areas within the occipital lobe. Our manipulation of stimulus form was probably too coarse to detect such variation.
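
For intuition about the form-similarity measure itself, the following Python sketch gives a much-simplified rendition of the shape-context approach of Belongie et al. (2002): each contour point is described by a log-polar histogram of the positions of the other points, and 2 shapes are compared via the chi-square cost of the best one-to-one point correspondence. The full algorithm adds thin-plate-spline alignment and iterative rematching, which are omitted here, and we assume contour points have already been sampled from each image.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def shape_context(points, n_r=5, n_theta=12):
        """Simplified shape-context descriptors: one log-polar histogram of
        relative point positions per contour point."""
        points = np.asarray(points, dtype=float)
        n = len(points)
        diff = points[None, :, :] - points[:, None, :]          # pairwise offsets
        dist = np.hypot(diff[..., 0], diff[..., 1])
        ang = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
        mean_d = dist[dist > 0].mean()                          # normalize for scale
        r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
        descs = np.zeros((n, n_r, n_theta))
        for i in range(n):
            mask = dist[i] > 0                                  # skip the point itself
            r_bin = np.searchsorted(r_edges, dist[i, mask]) - 1
            t_bin = (ang[i, mask] / (2 * np.pi) * n_theta).astype(int) % n_theta
            ok = (r_bin >= 0) & (r_bin < n_r)                   # drop out-of-range radii
            np.add.at(descs[i], (r_bin[ok], t_bin[ok]), 1)
        descs /= descs.sum(axis=(1, 2), keepdims=True) + 1e-12  # histograms sum to 1
        return descs.reshape(n, -1)

    def shape_distance(pts_a, pts_b):
        """Mean chi-square matching cost under the optimal correspondence."""
        A, B = shape_context(pts_a), shape_context(pts_b)
        num = (A[:, None, :] - B[None, :, :]) ** 2
        den = A[:, None, :] + B[None, :, :] + 1e-12
        cost = 0.5 * (num / den).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)                # Hungarian matching
        return cost[rows, cols].mean()

Distances of this kind, computed for each anchor–deviant pair, are the kind of stimulus form distances summarized by the dashed magenta curve in Figure 5B.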

Smaller clusters of voxels in left inferior and middle occipital gyri showed perceptual/conceptual-broad recovery curves, exhibiting sensitivity to the broad conceptual category of the anchor (e.g., land animals) (see Fig. 6: a, g; Table 1). A number of previous studies have also reported category-selective responses in occipital cortex (Chao et al. 1999; Ishai et al. 2000; Levy et al. 2001; Spiridon et al. 2006). Indeed, it has been argued that factors such as eccentricity within the visual field (foveal vs. peripheral) may be a primary determinant of the localization of category-selective representations (Levy et al. 2001; Hasson et al. 2002). However, unlike the categorical tuning previously described in prefrontal cortex (Freedman et al. 2001; Jiang et al. 2007), these occipital voxels were not “abstractly” tuned to conceptual category. Their curves showed joint sensitivity to stimulus form and/or position, consistent with cells that represent category information within spatially restricted visual receptive fields (for related findings, see Hemond et al. 2007; Sayres and Grill-Spector 2008; Schwarzlose et al. 2008). A novel contribution of our method is that by separately estimating the stimulus form similarity of anchor and deviant pictures (Fig. 2C), we were able to identify voxels that show sensitivity to conceptual category that goes beyond what would be expected due to shared stimulus form. On this point, there is an important caveat to mention: It is difficult for our method at present to distinguish between true perceptual/conceptual-broad tuning and tuning to more complex shape properties that is nevertheless entirely visual (e.g., high-dimensional shape contours; Brincat and Connor 2004, 2006). The automated shape similarity algorithm that we employ, while robust to size, position, and moderate viewpoint differences, may not capture all abstract aspects of stimulus form. However, many of the same brain regions showing perceptual/conceptual-broad tuning in our study also show selectivity to conceptual category (animate vs. artifact) and conceptual repetition effects when words are used as stimuli (e.g., Wheatley et al. 2005; Gold et al. 2006). When combined with the corresponding neuropsychological evidence (discussed below), the results suggest that these tuning curves, even in occipital cortex, are likely to reflect true conceptual and not solely stimulus form distinctions.

Prediction 2: Temporal Cortex

We predicted that residual adaptation should be observed to Deviant Levels 1–3 relative to Level 5 at sites within the temporal lobes that are known to prefer animal over tool stimuli, such as the lateral aspects of the fusiform gyrus (Chao et al. 1999, 2002; Noppeney et al. 2006; Wiggett et al. 2009). We also predicted partial recovery from adaptation due to changes in stimulus form (Deviant Level 1 < Levels 2–5) on the basis of studies showing a substantial amount of residual tuning to position within the visual field at even the latest stages of the ventral visual processing pathway (see Kravitz et al. 2008 for review). Both of these predictions held remarkably well in the lateral aspects of the fusiform gyrus (see Fig. 6: c, f). To our knowledge, our results show for the first time that tuning to conceptual category within the fusiform follows broad superordinate category distinctions (i.e., perceptual/conceptual-broad tuning). As with the perceptual/conceptual-broad voxels in occipital cortex, tuning to category in the fusiform is not entirely free from sensitivity to stimulus form and position, nor is it entirely explained by it. The more robust clusters of perceptual/conceptual-broad voxels observed bilaterally in the lateral fusiform provide strong further support for the idea that tuning to natural conceptual categories in humans is firmly established by occipitotemporal cortex. Recent neuropsychological evidence from patients with damage to the fusiform gyrus also attests to the necessity of this cortex for intact conceptual processing (Williams et al. 2005; Capitani et al. 2009; see also patient MV in Vandenbulcke et al. 2006). Portions of visually responsive cortex elsewhere in the temporal lobes (e.g., medial aspects of the fusiform gyrus) instead showed image-selective recovery curves (Fig. 6: c, f), perhaps indicating the presence of cells that are activated by animal stimuli above baseline but that do not represent conceptual relationships between animals. Previous work has shown that medial aspects of the fusiform gyrus prefer man-made objects over animal stimuli (Chao et al. 1999, 2002; Noppeney et al. 2006; Wiggett et al. 2009) and show selective repetition effects to tools compared with other manipulable artifacts (Mahon et al. 2007). More anterior activations within the temporal lobes were notably absent in the current study. Indeed, Figure 3 shows that only medial aspects of the temporal lobe (parahippocampal gyrus) showed visual adaptation effects, with no voxels showing significant adaptation or recovery in more lateral aspects of anterior temporal cortex that generally have better magnetic resonance signal strength (Bellgowan et al. 2006). This may be due to the lack of “unique entity” concepts in the current experiment or the absence of overt social and/or verbal conceptual task requirements (see Simmons and Martin 2009 for a recent review).

Prediction 3: Prefrontal Cortex

We predicted that residual adaptation should be observed to Deviant Levels 1–3 relative to Level 5 in inferior frontal cortex, indicating conceptual repetition effects, and that such adaptation should show a reduced dependence on stimulus form when compared with recovery curves in occipital and temporal cortex. We further reasoned that the pattern of recovery might be graded across adjacent deviant levels, affording flexible selection, retrieval, and categorization of concepts at a variety of levels of abstraction. Rather than a continuous and graded pattern of recovery, however, we observed 2 distinct types of conceptual tuning, one tuned narrowly to highly related concepts and the other tuned more broadly to superordinate category. The first type, labeled conceptual-narrow, was indeed invariant to stimulus form, showing tuning to the same type of object as the anchor and to other highly related concepts. Voxels showing this pattern were relatively anterior, ventral, and medial within lateral frontal cortex, restricted to inferior frontal and insular cortex bilaterally, as well as the anterior cingulate (Fig. 6: e, h, and left/right sagittal views). The second type was the same perceptual/conceptual-broad tuning pattern observed in occipitotemporal cortex, localized more posteriorly in the precentral gyrus (Fig. 6: d).

The conceptual-narrow tuning pattern is the most reminiscent of the category-selective responses observed in category-learning experiments in monkeys (Freedman et al. 2001, 2002) and humans (Jiang et al. 2007) in the sense that it shows no dependence on stimulus form/position. The narrow tuning to highly related concepts would appear at first glance to be at odds with the abstract nature of the category responses observed in these prior studies. However, natural categories of stimuli such as animals differ markedly from artificial categories that are encountered solely in the confines of an experimental session in that the corresponding concepts occur in many different behavioral contexts and tasks. Accordingly, the natural category representations that develop through experience have to balance all these various behavioral pressures to be useful in all the relevant contexts. Subjective similarity ratings of different concepts might be thought to serve as an aggregate measure of these varied contingencies, and on this point, it is interesting to note that the conceptual-narrow curves are the closest match to similarity ratings of the anchor–deviant pairs taken from a separate group of subjects (Fig. 5). The sharp, narrow conceptual tuning in these prefrontal voxels may be acquired through experience-dependent plasticity that occurs during the experimental session, reflecting the basic-level distinctions (Rosch et al. 1976; Rosch 1978) that are most useful for distinguishing between (and relating) objects in the current context. These prefrontal representations may then support more executive cognitive functions such as selection among multiple objects that are all highly related conceptually within a given behavioral context (Thompson-Schill et al. 1997; Badre et al. 2005; Moss et al. 2005; Jefferies and Lambon Ralph 2006; see also Robinson et al. 1998). If the tuning of the corresponding cells were too broad, giving similar responses to all related objects, it would be impossible for them to help select the most relevant object. Similar issues are involved in retrieving information about highly related, as opposed to moderately or weakly related, objects from memory (Wagner et al. 2001; Badre et al. 2005). The critical requirement of prefrontal neurons in these circumstances is that they represent information narrowly enough to perform fine-grained conceptual selection, retrieval, or categorization, and our results show that this information is indeed represented in prefrontal cortex.

The corollary of this point, though, is that conceptual-narrow tuning is not particularly useful for selecting, retrieving, or categorizing conceptual information at the level of more general superordinate categories, nor is it useful for distinguishing among different examples of the same type of object that differ solely in stimulus form properties. Under these circumstances, the perceptual/conceptual-broad tuning observed in the precentral gyrus (and occipitotemporal regions) may play a more important role. When combined, the 2 types of conceptual tuning permit discrimination between all the adjacent deviant conditions. Some previous studies have argued for functional subdivisions within the ventrolateral frontal cortex (Badre et al. 2005; Moss et al. 2005), although none has argued for this particular division by perceptual/conceptual breadth of the representations. While the curve for the subjective similarity ratings shown in Figure 5B is closest in shape to the conceptual-narrow curve, Figure 5A makes clear that the curve shape is actually intermediate between conceptual-narrow and perceptual/conceptual-broad tuning (green X), perhaps indicating some cooperative determination of similarity judgments by both types of tuning.

The observation of perceptual/conceptual-broad tuning in parietal cortex bilaterally along the IPS was unexpected (Fig. 6: b). This finding, however, is in line with 2 recent observations suggesting that parietal cortex may be more involved in representing stimulus form (Konen and Kastner 2008) and learned category distinctions (Freedman and Assad 2006) than has previously been assumed. The parietal lobes are thought to be critical for representing visual space (e.g., Ungerleider and Mishkin 1982), transforming vision into action (e.g., Milner and Goodale 1996; Quiroga et al. 2006), and flexibly orienting visual attention (e.g., Posner and Petersen 1990; Colby and Goldberg 1999). The regions of parietal cortex that we observe, along with the frontal regions described earlier, have been activated in a variety of task contexts ranging from conceptual (e.g., Kraut et al. 2002a, 2002b; Slotnick et al. 2002) to nonconceptual in nature, such as go/no-go, visual delayed match-to-sample, N-back working memory, and decision-making tasks (e.g., Courtney et al. 1997; Derrfuss et al. 2005; Owen et al. 2005; Simmonds et al. 2008). These previous studies suggest that the prefrontal and parietal activations in our experiment may not be exclusively, nor even primarily, conceptual in nature. Rather, these regions probably come to represent behaviorally relevant conceptual and categorical distinctions through recent experience and plasticity, interacting with more posterior brain regions in occipitotemporal cortex that are more exclusively perceptual or conceptual to select between similar alternatives or retrieve related information.

Summary

Our method allowed us to separate out the contribution of 2D stimulus form/position from tuning to conceptual information about visual objects. Tuning to form/position was observed throughout occipital and temporal cortical regions, with selected sites in occipitotemporal cortex also showing tuning to broad superordinate conceptual categories. Stimulus form may therefore be an important organizational constraint not only in occipital cortex but also in ventral temporal cortical sites that represent object concepts. Separate sites in prefrontal cortex showed tuning to broad and narrow conceptual distinctions, with tuning in relatively anterior sites showing invariance to stimulus form and providing a good match to behavioral similarity ratings. Different subregions of prefrontal cortex may therefore represent objects at different levels of categorical abstraction, affording flexible selection, retrieval, and categorization in a wide range of behavioral contexts.

Funding

National Institute of Mental Health, Division of Intramural Research.

Notes

The authors thank Serge Belongie, Gang Chen, Avniel Ghuman, Kathleen Hansen, Kyle Simmons, and members of the Laboratory of Brain and Cognition, National Institute of Mental Health, for useful discussions. Conflict of Interest: None declared.

References

Andresen DR, Vinberg J, Grill-Spector K. 2009. The representation of object viewpoint in human visual cortex. Neuroimage. 45:522–536.
Badre D, Poldrack RA, Paré-Blagoev EJ, Insler RZ, Wagner AD. 2005. Dissociable controlled retrieval and generalized selection mechanisms in ventrolateral prefrontal cortex. Neuron. 47:907–918.
Baker CI, Hutchison TL, Kanwisher N. 2007. Does the fusiform face area contain subregions highly selective for nonfaces? Nat Neurosci. 10:3–4.
Baylis G, Rolls ET. 1987. Responses of neurons in the inferior temporal cortex in short term and serial recognition memory tasks. Exp Brain Res. 65:614–622.
Bellgowan PS, Bandettini PA, van Gelderen P, Martin A, Bodurka J. 2006. Improved BOLD detection in the medial temporal region using parallel imaging and voxel volume reduction. Neuroimage. 29:1244–1251.
Belongie S, Malik J, Puzicha J. 2002. Shape matching and object recognition using shape contexts. IEEE Trans Pattern Anal Mach Intell. 24:509–522.
Binder JR, Desai RH, Graves WW, Conant LL. 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb Cortex. 19:2767–2796.
Boussaoud D, Desimone R, Ungerleider LG. 1991. Visual topography of area TEO in the macaque. J Comp Neurol. 306:554–575.
Brewer AA, Liu J, Wade AR, Wandell BA. 2005. Visual field maps and stimulus selectivity in human ventral occipital cortex. Nat Neurosci. 8:1102–1109.
Brincat SL, Connor CE. 2004. Underlying principles of visual shape selectivity in posterior inferotemporal cortex. Nat Neurosci. 7:880–886.
Brincat SL, Connor CE. 2006. Dynamic shape synthesis in posterior inferotemporal cortex. Neuron. 49:17–24.
Capitani E, Laiacona M, Pagani R, Capasso R, Zampetti P, Miceli G. 2009. Posterior cerebral artery infarcts and semantic category dissociations: a study of 28 patients. Brain. 132:965–981.
Chao LL, Haxby JV, Martin A. 1999. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nat Neurosci. 2:913–919.
Chao LL, Weisberg J, Martin A. 2002. Experience-dependent modulation of category-related cortical activity. Cereb Cortex. 12:545–551.
Colby CL, Goldberg ME. 1999. Space and attention in parietal cortex. Annu Rev Neurosci. 22:319–349.
Courtney SM, Ungerleider LG, Keil K, Haxby JV. 1997. Transient and sustained activity in a distributed neural system for human working memory. Nature. 386:608–611.
Cox RW. 1996. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 29:162–173.
Derrfuss J, Brass M, Neumann J, von Cramon DY. 2005. Involvement of the inferior frontal junction in cognitive control: meta-analyses of switching and stroop studies. Hum Brain Mapp. 25:22–34.
Desimone R. 1996. Neural mechanisms for visual memory and their role in attention. Proc Natl Acad Sci U S A. 93:13494–13499.
DeYoe EA, Carman GJ, Bandettini P, Glickman S, Wieser J, Cox R, Miller D, Neitz J. 1996. Mapping striate and extrastriate visual areas in human cerebral cortex. Proc Natl Acad Sci U S A. 93:2382–2386.
DiCarlo JJ, Maunsell JH. 2003. Anterior inferotemporal neurons of monkeys engaged in object recognition can be highly sensitive to object retinal position. J Neurophysiol. 89:3264–3278.
Duncan J. 1986. Disorganization of behaviour after frontal lobe damage. Cogn Neuropsychol. 3:270–290.
Freedman DJ, Assad JA. 2006. Experience-dependent representation of visual categories in parietal cortex. Nature. 443:85–88.
Freedman DJ, Miller EK. 2008. Neural mechanisms of visual categorization: insights from neurophysiology. Neurosci Biobehav Rev. 32:311–329.
Freedman DJ, Riesenhuber M, Poggio T, Miller EK. 2001. Categorical representation of visual stimuli in the primate prefrontal cortex. Science. 291:312–316.
Freedman DJ, Riesenhuber M, Poggio T, Miller EK. 2002. Visual categorization and the primate prefrontal cortex: neurophysiology and behavior. J Neurophysiol. 88:929–941.
Freedman DJ, Riesenhuber M, Poggio T, Miller EK. 2003. A comparison of primate prefrontal and inferior temporal cortices during visual categorization. J Neurosci. 23:5235–5246.
Friston KJ, Penny WD, Glaser DE. 2005. Conjunction revisited. Neuroimage. 25:661–671.
Gillebert CR, Op de Beeck HP, Panis S, Wagemans J. 2009. Subordinate categorization enhances the neural selectivity in human object-selective cortex for fine shape differences. J Cogn Neurosci. 21:1054–1064.
Gold BT, Balota DA, Jones SJ, Powell DK, Smith CD, Andersen AH. 2006. Dissociation of automatic and strategic lexical-semantics: functional magnetic resonance imaging evidence for differing roles of multiple frontotemporal regions. J Neurosci. 26:6523–6532.
Gold BT, Buckner RL. 2002. Common prefrontal regions coactivate with dissociable posterior regions during controlled semantic and phonological tasks. Neuron. 35:803–812.
Grill-Spector K, Malach R. 2001. fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychol. 107:293–321.
Grill-Spector K, Malach R. 2004. The human visual cortex. Annu Rev Neurosci. 27:649–677.
Hart J, Gordon B. 1990. Delineation of single-word semantic comprehension deficits in aphasia, with anatomical correlation. Ann Neurol. 27:226–231.
Hasson U, Levy I, Behrmann M, Hendler T, Malach R. 2002. Eccentricity bias as an organizing principle for human high-order object areas. Neuron. 34:479–490.
Hemond CC, Kanwisher NG, Op de Beeck HP. 2007. A preference for contralateral stimuli in human object- and face-selective cortex. PLoS ONE. 2:e574, 1–5.
Henson R, Shallice T, Dolan R. 2000. Neuroimaging evidence for dissociable forms of repetition priming. Science. 287:1269–1272.
Hubel DH, Wiesel TN. 1962. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol. 160:106–154.
Ishai A, Ungerleider LG, Martin A, Haxby JV. 2000. The representation of objects in the human occipital and temporal cortex. J Cogn Neurosci. 12:35–51.
Jefferies E, Lambon Ralph MA. 2006. Semantic impairment in stroke aphasia versus semantic dementia: a case-series comparison. Brain. 129:2132–2147.
Jiang X, Bradley E, Rini RA, Zeffiro T, Vanmeter J, Riesenhuber M. 2007. Categorization training results in shape- and category-selective human neural plasticity. Neuron. 53:891–903.
Konen CS, Kastner S. 2008. The hierarchically organized neural systems for object information in human visual cortex. Nat Neurosci. 11:224–231.
Koutstaal W, Wagner AD, Rotte M, Maril A, Buckner RL, Schacter DL. 2001. Perceptual specificity in visual object priming: functional magnetic resonance imaging evidence for a laterality difference in fusiform cortex. Neuropsychologia. 39:184–199.
Kraut MA, Kremen S, Moo LR, Segal JB, Calhoun V, Hart J. 2002a. Object activation in semantic memory from visual multimodal feature input. J Cogn Neurosci. 14:37–47.
Kraut MA, Kremen S, Segal JB, Calhoun V, Moo LR, Hart J. 2002b. Object activation from features in the semantic system. J Cogn Neurosci. 14:24–36.
Kravitz DJ, Vinson LD, Baker CI. 2008. How position dependent is visual object recognition? Trends Cogn Sci. 12:114–122.
Kriegeskorte N, Simmons WK, Bellgowan PS, Baker CI. 2009. Circular analysis in systems neuroscience: the dangers of double dipping. Nat Neurosci. 12:535–540.
Levy I, Hasson U, Avidan G, Hendler T, Malach R. 2001. Center-periphery organization of human object areas. Nat Neurosci. 4:533–539.
Mahon BZ, Milleville SC, Negri GAL, Rumiati RI, Caramazza A, Martin A. 2007. Action-related properties shape object representations in the ventral stream. Neuron. 55:507–520.
Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, Ledden PJ, Brady TJ, Rosen BR, Tootell RBH. 1995. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci U S A. 92:8135–8139.
Martin A. 2007. The representation of object concepts in the brain. Annu Rev Psychol. 58:25–45.
Martin A, Chao LL. 2001. Semantic memory and the brain: structure and processes. Curr Opin Neurobiol. 11:194–201.
Miller EK, Freedman DJ, Wallis JD. 2002. The prefrontal cortex: categories, concepts and cognition. Phil Trans R Soc Lond B Biol Sci. 357:1123–1136.
Miller EK, Li L, Desimone R. 1991. A neural mechanism for working and recognition memory in inferior temporal cortex. Science. 254:1377–1379.
Milner AD, Goodale MA. 1996. The visual brain in action. Oxford: Oxford University Press.
Moss HE, Abdallah S, Fletcher P, Bright P, Pilgrim L, Acres K, Tyler LK. 2005. Selecting among competing alternatives: selection and retrieval in the left inferior frontal gyrus. Cereb Cortex. 15:1723–1735.
Naccache L, Dehaene S. 2001. The priming method: imaging unconscious repetition priming reveals an abstract representation of number in the parietal lobes. Cereb Cortex. 11:966–974.
Nichols T, Brett M, Andersson J, Wager T, Poline JB. 2005. Valid conjunction inference with the minimum statistic. Neuroimage. 25:653–660.
Noppeney U, Price CJ, Penny WD, Friston KJ. 2006. Two distinct neural mechanisms for category-selective responses. Cereb Cortex. 16:437–445.
Op de Beeck H, Vogels R. 2000. Spatial sensitivity of macaque inferior temporal neurons. J Comp Neurol. 426:505–518.
Owen AM, McMillan KM, Laird AR, Bullmore E. 2005. N-back working memory paradigm: a meta-analysis of normative functional neuroimaging studies. Hum Brain Mapp. 25:46–59.
Piazza M, Izard V, Pinel P, Le Bihan D, Dehaene S. 2004. Tuning curves for approximate numerosity in the human intraparietal sulcus. Neuron. 44:547–555.
Posner MI, Petersen SE. 1990. The attention system of the human brain. Annu Rev Neurosci. 13:25–42.
Quiroga Q, Snyder LH, Batista AP, Cui H, Andersen RA. 2006. Movement intention is better predicted than attention in the posterior parietal cortex. J Neurosci. 26:3615–3620.
Riesenhuber M, Poggio T. 2002. Neural mechanisms of object recognition. Curr Opin Neurobiol. 12:162–168.
Robinson G, Blair J, Cipolotti L. 1998. Dynamic aphasia: an inability to select between competing verbal responses? Brain. 121:77–89.
Rosch E. 1978. Principles of categorization. In: Rosch E, Lloyd B, editors. Cognition and categorization. Hillsdale (NJ): Erlbaum. p. 27–48.
Rosch E, Mervis CB, Gray W, Johnson D, Boyes-Braem P. 1976. Basic objects in natural categories. Cogn Psychol. 8:382–439.
Sayres R, Grill-Spector K. 2008. Relating retinotopic and object-selective responses in human lateral occipital cortex. J Neurophysiol. 100:249–267.
Schwarzlose RF, Swisher JD, Dang S, Kanwisher N. 2008. The distribution of category and location information across object-selective regions in human visual cortex. Proc Natl Acad Sci U S A. 105:4447–4452.
Shallice T, Burgess PW. 1991. Deficits in strategy application following frontal lobe damage in man. Brain. 114:727–741.
Simmonds DJ, Pekar JJ, Mostofsky SH. 2008. Meta-analysis of Go/No-go tasks demonstrating that fMRI activation associated with response inhibition is task-dependent. Neuropsychologia. 46:224–232.
Simmons WK, Martin A. 2009. The anterior temporal lobes and the functional architecture of semantic memory. J Int Neuropsychol Soc. 15:645–649.
Simons JS, Koutstaal W, Prince S, Wagner AD, Schacter DL. 2003. Neural mechanisms of visual object priming: evidence for perceptual and semantic distinctions in fusiform cortex. Neuroimage. 19:613–626.
Slotnick SD, Moo LR, Kraut MA, Lesser RP, Hart J. 2002. Interactions between thalamic and cortical rhythms during semantic memory recall in human. Proc Natl Acad Sci U S A. 99:6440–6443.
Spiridon M, Fischl B, Kanwisher N. 2006. Location and spatial profile of category-specific regions in human extrastriate cortex. Hum Brain Mapp. 27:77–89.
Talairach J, Tournoux P. 1988. Co-planar stereotaxic atlas of the human brain. New York: Thieme.
Thompson-Schill SL, D'Esposito M, Aguirre GK, Farah MJ. 1997. Role of left inferior prefrontal cortex in retrieval of semantic knowledge: a reevaluation. Proc Natl Acad Sci U S A. 94:14792–14797.
Tootell RBH, Hadjikhani NK, Vanduffel W, Liu AK, Mendola JD, Sereno MI, Dale AM. 1998. Functional analysis of primary visual cortex (V1) in humans. Proc Natl Acad Sci U S A. 95:811–817.
Tranel D, Damasio H, Damasio AR. 1997. A neural basis for the retrieval of conceptual knowledge. Neuropsychologia. 35:1319–1327.
Ungerleider LG, Haxby JV. 1994. 'What' and 'where' in the human brain. Curr Opin Neurobiol. 4:157–165.
Ungerleider LG, Mishkin M. 1982. Two cortical visual systems. In: Ingle DJ, Goodale MA, Mansfield RJW, editors. Analysis of visual behavior. Cambridge (MA): MIT Press. p. 549–586.
Van Essen DC, Gallant JL. 1994. Neural mechanisms of form and motion processing in the primate visual system. Neuron. 13:1–10.
Vandenbulcke M, Peeters R, Fannes K, Vandenberghe R. 2006. Knowledge of visual attributes in the right hemisphere. Nat Neurosci. 6:327–333.
Vuilleumier P, Henson RNA, Driver J, Dolan RJ. 2002. Multiple levels of visual object constancy revealed by event-related fMRI of repetition priming. Nat Neurosci. 5:491–499.
Wagner AD, Paré-Blagoev EJ, Clark J, Poldrack RA. 2001. Recovering meaning: left prefrontal cortex guides controlled semantic retrieval. Neuron. 31:329–338.
Wheatley T, Weisberg J, Beauchamp MS, Martin A. 2005. Automatic priming of semantically related words reduces activity in the fusiform gyrus. J Cogn Neurosci. 17:1871–1885.
Wiggett AJ, Pritchard IC, Downing PE. 2009. Animate and inanimate objects in human visual cortex: evidence for task-independent category effects. Neuropsychologia. 47:3111–3117.
Williams GB, Nestor PJ, Hodges JR. 2005. Neural correlates of semantic and behavioural deficits in frontotemporal dementia. Neuroimage. 24:1042–1051.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/2.5), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary data