Symposium organiser and speaker at TeaP 2021

Abstract

Deep neural networks (DNNs) have revolutionised computer vision, now often recognising objects and faces as well as humans can. An initial wave of fMRI and electrophysiological studies around 2015 showed that features in object-recognition-trained DNNs predict neural responses in high-level visual cortex. DNNs have since flourished as models of perception, with diverse custom networks, training tasks, and evaluation methods emerging. The talks in this symposium highlight a range of approaches to current challenges and span the gamut of visual processing, from colour perception through material and contour perception to object and face recognition. One open challenge is building DNNs with ecologically plausible training tasks and experience. Katherine Storrs explores how perceptual dimensions can form in DNNs through unsupervised statistical learning, without the need for labelled examples. Katharina Dobs and Kshitij Dwivedi tease apart how different visual diets and ecologically relevant learning objectives affect representations in DNNs, and how they shape their performance as models of brain and behaviour. As DNNs become more powerful, it becomes crucial to find nuanced ways of comparing their perception to ours. Alban Flachot uses a large-scale custom dataset to probe how the fundamental visual competencies of colour perception and constancy develop. Judy Borowski shows how tasks like closed-contour detection pose particular challenges for artificial vision, providing leverage to study functional differences. Finally, the talks showcase approaches for peering inside the ‘black box’ of DNNs. For example, Martin Hebart presents a novel data-driven method for finding interpretable dimensions in DNNs and compares these to those underlying human perception. Collectively, the talks capture the diversity of DNN modelling in vision science.
