%0 Journal Article %J Nature Methods %D 2021 %T Chronically implantable LED arrays for behavioral optogenetics in primates %A Rajalingham, Rishi %A Sorenson, Michael %A Azadi, Reza %A Bohn, Simon %A DiCarlo, James J. %A Afraz, Arash %X

Optogenetic methods have been widely used in rodent brains, but remain relatively under-developed for nonhuman primates such as rhesus macaques, an animal model with a large brain expressing sophisticated sensory, motor and cognitive behaviors. To address challenges in behavioral optogenetics in large brains, we developed Opto-Array, a chronically implantable array of light-emitting diodes for high-throughput optogenetic perturbation. We demonstrated that optogenetic silencing in the macaque primary visual cortex with the help of the Opto-Array results in reliable retinotopic visual deficits in a luminance discrimination task. We separately confirmed that Opto-Array illumination results in local neural silencing, and that behavioral effects are not due to tissue heating. These results demonstrate the effectiveness of the Opto-Array for behavioral optogenetic applications in large brains.

%B Nature Methods %V 18 %P 1112 - 1116 %8 Jan-09-2021 %G eng %U https://www.nature.com/articles/s41592-021-01238-9 %N 9 %! Nat Methods %R 10.1038/s41592-021-01238-9 %0 Journal Article %J Nature Communications %D 2020 %T The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys %A Rajalingham, Rishi %A Kar, Kohitij %A Sanghavi, Sachi %A Dehaene, Stanislas %A DiCarlo, James J. %X

The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.

%B Nature Communications %V 11 %8 Jan-12-2020 %G eng %U http://www.nature.com/articles/s41467-020-17714-3 %N 1 %! Nat Commun %R 10.1038/s41467-020-17714-3 %0 Journal Article %J Neuron %D 2020 %T An Open Resource for Non-human Primate Optogenetics %A Tremblay, Sebastien %A Acker, Leah %A Afraz, Arash %A Albaugh, Daniel L. %A Amita, Hidetoshi %A Andrei, Ariana R. %A Angelucci, Alessandra %A Aschner, Amir %A Balan, Puiu F. %A Basso, Michele A. %A Benvenuti, Giacomo %A Bohlen, Martin O. %A Caiola, Michael J. %A Calcedo, Roberto %A Cavanaugh, James %A Chen, Yuzhi %A Chen, Spencer %A Chernov, Mykyta M. %A Clark, Andrew M. %A Dai, Ji %A Debes, Samantha R. %A Deisseroth, Karl %A Desimone, Robert %A Dragoi, Valentin %A Egger, Seth W. %A Eldridge, Mark A.G. %A El-Nahal, Hala G. %A Fabbrini, Francesco %A Federer, Frederick %A Fetsch, Christopher R. %A Fortuna, Michal G. %A Friedman, Robert M. %A Fujii, Naotaka %A Gail, Alexander %A Galvan, Adriana %A Ghosh, Supriya %A Gieselmann, Marc Alwin %A Gulli, Roberto A. %A Hikosaka, Okihide %A Hosseini, Eghbal A. %A Hu, Xing %A Hüer, Janina %A Inoue, Ken-ichi %A Janz, Roger %A Jazayeri, Mehrdad %A Jiang, Rundong %A Ju, Niansheng %A Kar, Kohitij %A Klein, Carsten %A Kohn, Adam %A Komatsu, Misako %A Maeda, Kazutaka %A Martinez-Trujillo, Julio C. %A Matsumoto, Masayuki %A Maunsell, John H.R. %A Mendoza-Halliday, Diego %A Monosov, Ilya E. %A Muers, Ross S. %A Nurminen, Lauri %A Ortiz-Rios, Michael %A O’Shea, Daniel J. %A Palfi, Stéphane %A Petkov, Christopher I. %A Pojoga, Sorin %A Rajalingham, Rishi %A Ramakrishnan, Charu %A Remington, Evan D. %A Revsine, Cambria %A Roe, Anna W. %A Sabes, Philip N. %A Saunders, Richard C. %A Scherberger, Hansjörg %A Schmid, Michael C. %A Schultz, Wolfram %A Seidemann, Eyal %A Senova, Yann-Suhan %A Shadlen, Michael N. %A Sheinberg, David L. %A Siu, Caitlin %A Smith, Yoland %A Solomon, Selina S. %A Sommer, Marc A. %A Spudich, John L. %A Stauffer, William R. 
%A Takada, Masahiko %A Tang, Shiming %A Thiele, Alexander %A Treue, Stefan %A Vanduffel, Wim %A Vogels, Rufin %A Whitmire, Matthew P. %A Wichmann, Thomas %A Wurtz, Robert H. %A Xu, Haoran %A Yazdan-Shahmorad, Azadeh %A Shenoy, Krishna V. %A DiCarlo, James J. %A Platt, Michael L. %X

Optogenetics has revolutionized neuroscience in small laboratory animals, but its effect on animal models more closely related to humans, such as non-human primates (NHPs), has been mixed. To make evidence-based decisions in primate optogenetics, the scientific community would benefit from a centralized database listing all attempts, successful and unsuccessful, of using optogenetics in the primate brain. We contacted members of the community to ask for their contributions to an open science initiative. As of this writing, 45 laboratories around the world contributed more than 1,000 injection experiments, including precise details regarding their methods and outcomes. Of those entries, more than half had not been published. The resource is free for everyone to consult and contribute to on the Open Science Framework website. Here we review some of the insights from this initial release of the database and discuss methodological considerations to improve the success of optogenetic experiments in NHPs.

%B Neuron %8 January 10, 2020 %G eng %U https://linkinghub.elsevier.com/retrieve/pii/S0896627320307510 %9 NeuroResource %! Neuron %R 10.1016/j.neuron.2020.09.027 %0 Conference Paper %B Neural Information Processing Systems %D 2019 %T Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs %A Jonas Kubilius %A Martin Schrimpf %A Ha Hong %A Najib Majaj %A Rajalingham, Rishi %A Issa, Elias B. %A Kohitij Kar %A Bashivan, Pouya %A Jonathan Prescott-Roy %A Kailyn Schmidt %A Aran Nayebi %A Daniel Bear %A Daniel L. K. Yamins %A James J. DiCarlo %X

Deep convolutional artificial neural networks (ANNs) are the leading class of candidate models of the mechanisms of visual processing in the primate ventral stream. While initially inspired by brain anatomy, over the past years, these ANNs have evolved from a simple eight-layer architecture in AlexNet to extremely deep and branching architectures, demonstrating increasingly better object categorization performance, yet bringing into question how brain-like they still are. In particular, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. Here we demonstrate that better anatomical alignment to the brain and high performance on machine learning as well as neuroscience measures do not have to be in contradiction. We developed CORnet-S, a shallow ANN with four anatomically mapped areas and recurrent connectivity, guided by Brain-Score, a new large-scale composite of neural and behavioral benchmarks for quantifying the functional fidelity of models of the primate ventral visual stream. Despite being significantly shallower than most models, CORnet-S is the top model on Brain-Score and outperforms similarly compact models on ImageNet. Moreover, our extensive analyses of CORnet-S circuitry variants reveal that recurrence is the main predictive factor of both Brain-Score and ImageNet top-1 performance. Finally, we report that the temporal evolution of the CORnet-S "IT" neural population resembles the actual monkey IT population dynamics. Taken together, these results establish CORnet-S, a compact, recurrent ANN, as the current best model of the primate ventral visual stream.

%B Neural Information Processing Systems %G eng %U https://papers.nips.cc/paper/9441-brain-like-object-recognition-with-high-performing-shallow-recurrent-anns.pdf %0 Journal Article %J Neuron %D 2019 %T Reversible Inactivation of Different Millimeter-Scale Regions of Primate IT Results in Different Patterns of Core Object Recognition Deficits %A Rajalingham, Rishi %A DiCarlo, James J. %X

Extensive research suggests that the inferior temporal (IT) population supports visual object recognition behavior. However, causal evidence for this hypothesis has been equivocal, particularly beyond the specific case of face-selective subregions of IT. Here, we directly tested this hypothesis by pharmacologically inactivating individual, millimeter-scale subregions of IT while monkeys performed several core object recognition subtasks, interleaved trial-by-trial. First, we observed that IT inactivation resulted in reliable contralateral-biased subtask-selective behavioral deficits. Moreover, inactivating different IT subregions resulted in different patterns of subtask deficits, predicted by each subregion’s neuronal object discriminability. Finally, the similarity between different inactivation effects was tightly related to the anatomical distance between corresponding inactivation sites. Taken together, these results provide direct evidence that the IT cortex causally supports general core object recognition and that the underlying IT coding dimensions are topographically organized.

%B Neuron %V 102 %P 493 - 505.e5 %8 01/2019 %G eng %U https://www.cell.com/neuron/pdfExtended/S0896-6273(19)30110-2 %N 2 %! Neuron %R 10.1016/j.neuron.2019.02.001 %0 Conference Paper %B Computational and Systems Neuroscience (COSYNE) %D 2019 %T Using Brain-Score to Evaluate and Build Neural Networks for Brain-Like Object Recognition %A Schrimpf, Martin %A Kubilius, Jonas %A Hong, Ha %A Majaj, Najib %A Rajalingham, Rishi %A Issa, Elias B %A Kar, Kohitij %A Ziemba, Corey M %A Bashivan, Pouya %A Prescott-Roy, Jonathan %A Schmidt, Kailyn %A Yamins, Daniel LK %A DiCarlo, James J %B Computational and Systems Neuroscience (COSYNE) %C Denver, CO %G eng %0 Journal Article %J bioRxiv %D 2018 %T Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? %A Martin Schrimpf %A Kubilius, Jonas %A Ha Hong %A Najib Majaj %A Rajalingham, Rishi %A Issa, Elias B. %A Kar, Kohitij %A Bashivan, Pouya %A Jonathan Prescott-Roy %A Schmidt, Kailyn %A Daniel L. K. Yamins %A DiCarlo, James J. %X

The internal representations of early deep artificial neural networks (ANNs) were found to be remarkably similar to the internal neural representations measured experimentally in the primate brain. Here we ask, as deep ANNs have continued to evolve, are they becoming more or less brain-like? ANNs that are most functionally similar to the brain will contain mechanisms that are most like those used by the brain. We therefore developed Brain-Score - a composite of multiple neural and behavioral benchmarks that score any ANN on how similar it is to the brain's mechanisms for core object recognition - and we deployed it to evaluate a wide range of state-of-the-art deep ANNs. Using this scoring system, we here report that: (1) DenseNet-169, CORnet-S and ResNet-101 are the most brain-like ANNs. (2) There remains considerable variability in neural and behavioral responses that is not predicted by any ANN, suggesting that no ANN model has yet captured all the relevant mechanisms. (3) Extending prior work, we found that gains in ANN ImageNet performance led to gains on Brain-Score. However, correlation weakened at >= 70% top-1 ImageNet performance, suggesting that additional guidance from neuroscience is needed to make further advances in capturing brain mechanisms. (4) We uncovered smaller (i.e. less complex) ANNs that are more brain-like than many of the best-performing ImageNet models, which suggests the opportunity to simplify ANNs to better understand the ventral stream. The scoring system used here is far from complete. However, we propose that evaluating and tracking model-benchmark correspondences through a Brain-Score that is regularly updated with new brain data is an exciting opportunity: experimental benchmarks can be used to guide machine network evolution, and machine networks are mechanistic hypotheses of the brain's network and thus drive next experiments. 
To facilitate both of these, we release Brain-Score.org: a platform that hosts the neural and behavioral benchmarks, where ANNs for visual processing can be submitted to receive a Brain-Score and their rank relative to other models, and where new experimental data can be naturally incorporated.

%B bioRxiv %8 09/2018 %G eng %U https://www.biorxiv.org/content/10.1101/407007v2.full.pdf %9 preprint %R 10.1101/407007 %0 Journal Article %J The Journal of Neuroscience %D 2018 %T Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks %A Rajalingham, Rishi %A Issa, Elias B. %A Bashivan, Pouya %A Kar, Kohitij %A Schmidt, Kailyn %A DiCarlo, James J. %X

Primates—including humans—can typically recognize objects in visual images at a glance even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials from 1472 anonymous humans and five male macaque monkeys for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNN models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNN models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNN models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision.
To this end, large-scale, high-resolution primate behavioral benchmarks—such as those obtained here—could serve as direct guides for discovering such models.

Significance Statement: Recently, specific feed-forward deep convolutional artificial neural network (ANN) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys, at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.

%B The Journal of Neuroscience %V 38 %P 7255 - 7269 %8 03/2019 %G eng %U http://www.jneurosci.org/content/38/33/7255 %N 33 %! J. Neurosci. %R 10.1523/JNEUROSCI.0388-18.2018 %0 Journal Article %J bioRxiv %D 2018 %T Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks %A Rajalingham, Rishi %A Issa, Elias B %A Bashivan, Pouya %A Kar, Kohitij %A Schmidt, Kailyn %A DiCarlo, James J. %X

Primates—including humans—can typically recognize objects in visual images at a glance even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks—such as those obtained here—could serve as direct guides for discovering such models.

%B bioRxiv %8 02/2018 %G eng %U https://www.biorxiv.org/content/10.1101/240614v4.full.pdf %9 preprint %R 10.1101/240614 %0 Journal Article %J bioRxiv %D 2018 %T Reversible inactivation of different millimeter-scale regions of primate IT results in different patterns of core object recognition deficits %A Rajalingham, Rishi %A DiCarlo, James J. %X

Extensive research suggests that the inferior temporal (IT) population supports visual object recognition behavior. However, causal evidence for this hypothesis has been equivocal, particularly beyond the specific case of face-selective sub-regions of IT. Here, we directly tested this hypothesis by pharmacologically inactivating individual, millimeter-scale sub-regions of IT while monkeys performed several object discrimination tasks, interleaved trial-by-trial. First, we observed that IT inactivation resulted in reliable contralateral-biased task-selective behavioral deficits. Moreover, inactivating different IT sub-regions resulted in different patterns of task deficits, each predicted by that sub-region's neuronal object discriminability. Finally, the similarity between different inactivation effects was tightly related to the anatomical distance between corresponding inactivation sites. Taken together, these results provide direct evidence that IT cortex causally supports general core object recognition, and that the underlying IT codes are topographically organized.

%B bioRxiv %8 08/2018 %G eng %U https://www.biorxiv.org/content/10.1101/390245v1.full.pdf %9 preprint %R 10.1101/390245