See the article published in January 2012 by MIT News.
See the article published in February 2012 by The Tech.
James DiCarlo, associate professor of neuroscience, has been named head of the Department of Brain and Cognitive Sciences. His five-year term will begin March 1.
DiCarlo succeeds Mriganka Sur, who will leave his position as department head to become the director of the Simons Center for the Social Brain at MIT, a new initiative that aims to catalyze innovative research on the social brain and translate that work into the improved diagnosis and treatment of autism spectrum disorders.
See the article published in November 2010 by The McGovern Institute for Brain Research.
Neuroscientists at MIT and Harvard have made the surprising discovery that the brain sees some faces as male when they appear in one area of a person’s field of view, but female when they appear in a different location.
The findings challenge a longstanding tenet of neuroscience: that how the brain sees an object should not depend on where the object is located relative to the observer, says Arash Afraz, a postdoctoral associate in James DiCarlo's lab at MIT's McGovern Institute for Brain Research and lead author of a new paper on the work. "It's the kind of thing you would not predict — that you would look at two identical faces and think they look different," says Afraz. He and two colleagues from Harvard, Patrick Cavanagh and Maryam Vaziri Pashkam, described their findings in the Nov. 24 online edition of the journal Current Biology.
See the article published in September 2010 by The McGovern Institute for Brain Research.
A new study by McGovern neuroscientists suggests that the brain learns to solve the problem of object recognition through its vast experience in the natural world.
Understanding how the brain recognizes objects is a central challenge both for understanding human vision and for designing artificial vision systems; no computer system comes close to human vision. Take, for example, a dog. It may be sitting nearby or far away, standing in sunshine or in shadow. Although each variation in the dog's position, pose, or illumination produces a different pattern of light on the retina, we still recognize it as a dog.
One possible way to acquire this ability to recognize an object, despite these variations, is through a simple form of learning. Objects in the real world usually don't suddenly change their identity, so any two patterns appearing on the retina in rapid succession likely arise from the same object. Any difference between the two patterns probably means the object has changed its position rather than having been replaced by another object. So by simply learning to associate images that appear in rapid succession, the brain might learn to recognize objects even when they are seen from different angles or distances, or under different lighting conditions.
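The grouping principle described above can be illustrated with a minimal, hypothetical sketch. The function name, time threshold, and toy "view" labels below are illustrative assumptions, not anything from the study; the code simply groups views that appear in rapid succession under one object identity, which is the core of the temporal-contiguity idea.

```python
def learn_by_temporal_contiguity(frames, gap_threshold=1.0):
    """Group timestamped views into putative objects.

    Views whose timestamps fall within `gap_threshold` of the preceding
    view are assumed to arise from the same object (temporal contiguity);
    a longer gap starts a new object group.
    """
    groups = []
    current = []
    last_t = None
    for t, view in frames:
        if last_t is not None and t - last_t > gap_threshold:
            groups.append(current)  # gap too long: close out the old object
            current = []
        current.append(view)
        last_t = t
    if current:
        groups.append(current)
    return groups


# A dog seen close-up, then far away, then in shadow, in quick succession,
# is grouped as one object; a cat appearing much later starts a new group.
frames = [(0, "dog_near"), (1, "dog_far"), (2, "dog_shadow"),
          (10, "cat_near"), (11, "cat_far")]
groups = learn_by_temporal_contiguity(frames)
print(groups)  # [['dog_near', 'dog_far', 'dog_shadow'], ['cat_near', 'cat_far']]
```

This toy version only clusters by timing; the brain, of course, would be associating the neural response patterns themselves, which is what the experiment described next probes directly.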
To test this idea, called "temporal contiguity," graduate student Nuo Li and associate professor James DiCarlo at MIT's McGovern Institute for Brain Research "tricked" monkeys by exposing them to an altered visual world in which the normal rules of temporal contiguity did not apply. They recorded electrical activity from individual neurons in a region of the monkey brain called the inferior temporal cortex (IT), where object recognition is thought to happen. IT neurons respond selectively to particular objects; a neuron might, for example, fire more strongly in response to images of a Dalmatian than to pictures of a rhinoceros, regardless of the object's size or position on the retina.
See the article published in September 2008 by MIT Media Relations.
In work that could aid efforts to develop more brain-like computer vision systems, MIT neuroscientists have tricked the visual brain into confusing one object with another, thereby demonstrating that time teaches us how to recognize objects.
As you scan this visual scene (indicated by the green circle), you spot a beaver out of the corner of your eye. As you glance toward it, the image is swapped for a monkey. Using analogous stimuli to produce swaps at specific locations in the visual field, MIT graduate student Nuo Li and professor James DiCarlo show that the brain starts to confuse different objects after a few hours' exposure to this altered visual world. The confusion is exactly what would be expected if the brain uses temporal contiguity to teach itself how to recognize objects.
See the article published in January 2008 by MIT Media Relations.
For years, scientists have been trying to teach computers how to see like humans, and recent research has seemed to show computers making progress in recognizing visual objects. A new MIT study, however, cautions that this apparent success may be misleading because the tests being used are inadvertently stacked in favor of computers.
Computer vision is important for applications ranging from "intelligent" cars to visual prosthetics for the blind. Recent computational models show apparently impressive progress, boasting 60-percent success rates in classifying natural photographic image sets. These include the widely used Caltech101 database, intended to test computer vision algorithms against the variety of images seen in the real world.
However, James DiCarlo, a neuroscientist in the McGovern Institute for Brain Research at MIT, graduate student Nicolas Pinto, and David Cox of the Rowland Institute at Harvard argue that these image sets have design flaws that enable computers to succeed where they would fail with more authentically varied images. For example, photographers tend to center objects in a frame and to prefer certain views and contexts. The visual system, by contrast, encounters objects across a much broader range of conditions.