An empirical assay of view-invariant object learning in humans and comparison with baseline image-computable models

Title: An empirical assay of view-invariant object learning in humans and comparison with baseline image-computable models
Publication Type: Journal Article
Year of Publication: 2023
Authors: Lee, MJ; DiCarlo, JJ
Journal: bioRxiv
Pagination: 2022.12.31.522402
Date Published: 2023/01/01
Type of Article: preprint
Abstract

How humans learn new visual objects is a longstanding scientific problem. Previous work has led to a diverse collection of models for how it is accomplished, but a current limitation in the field is the lack of empirical benchmarks that can be used to evaluate and compare specific models against each other. Here, we use online psychophysics to measure human behavioral learning trajectories over a set of tasks involving novel 3D objects. Consistent with intuition, these results show that humans generally require very few images (≈ 6) to approach their asymptotic accuracy, find some object discriminations easier to learn than others, and generalize quite well over a range of image transformations after even one view of each object. We then use those data to develop benchmarks that may be used to evaluate a learning model’s similarity to humans. We make these data and benchmarks publicly available [GitHub], and, to our knowledge, they are currently the largest publicly available collection of learning-related psychophysics data in humans. Additionally, to serve as baselines for those benchmarks, we implement and test a large number of baseline models (n=1,932), each based on a standard cognitive theory of learning: that humans re-represent images in a fixed Euclidean space, then learn linear decision boundaries in that space to identify objects in future images. We find that some of these baseline models make surprisingly accurate predictions. However, we also find reliable prediction gaps between all baseline models and humans, particularly in the few-shot learning setting.

Competing Interest Statement: The authors have declared no competing interest.

URL: https://www.biorxiv.org/content/biorxiv/early/2023/01/02/2022.12.31.522402.full.pdf
DOI: 10.1101/2022.12.31.522402
Refereed Designation: Non-Refereed
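
As a point of reference, the sketch below illustrates the baseline model family described in the abstract: images are re-represented in a fixed Euclidean feature space, and a linear decision boundary is learned from a few example images of each novel object. This is an illustrative sketch only, not the authors' released code; the encode function, the random toy images, and the logistic-regression learner are placeholder assumptions, whereas the paper's 1,932 baselines use standard image-computable representations.

# Minimal sketch (not the authors' code) of the baseline model family in the
# abstract: re-represent images in a fixed Euclidean feature space, then learn
# a linear decision boundary from a handful of example images per novel object.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def encode(images):
    """Stand-in for a fixed image encoder.

    Here we simply flatten pixels; the paper's baselines instead use standard
    image-computable representations (e.g., layers of pretrained networks),
    any of which could be substituted without changing the learning rule below.
    """
    return images.reshape(len(images), -1).astype(np.float64)

# Toy few-shot episode with two novel "objects" and ~6 training images total,
# matching the regime where the paper reports humans approach asymptote.
train_images = rng.normal(size=(6, 32, 32))
train_labels = np.array([0, 1, 0, 1, 0, 1])
test_images = rng.normal(size=(20, 32, 32))

# Learn a linear decision boundary in the fixed feature space, then predict
# object identity for held-out views.
clf = LogisticRegression(max_iter=1000)
clf.fit(encode(train_images), train_labels)
predictions = clf.predict(encode(test_images))
print("predicted object identities:", predictions)

Swapping in different fixed encoders and linear learning rules, and scoring their trial-by-trial predictions against the human learning trajectories, is the comparison the benchmarks in this paper are designed to support.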