Recent & Upcoming Talks

Using visual perception as a case study, I will propose that questions in cognitive science are not passed from one discipline to the …

A photograph or painting of a glazed vase might consist of irregularly-shaped bright patches, small white dots, and large low-contrast …

Deep neural networks (DNNs) have revolutionised computer vision, often now recognising objects and faces as well as humans can. An …

Models of vision have come far in the past 10 years. Deep neural networks can recognise objects with near-human accuracy, and predict …

Computational visual neuroscience has come a long way in the past 10 years. For the first time, we have fully explicit, …

Level Up Human is a podcast panel show, in which scientists compete to pitch improvements to the human design. In this episode, I pitch …

Perceiving the glossiness of a surface is a challenging visual inference that requires disentangling the contributions of reflectance, …



Paper published in Nature Human Behaviour

Nature Human Behaviour

My main recent project, a statistical learning account of material perception, is now out in Nature Human Behaviour: Unsupervised learning predicts human perception and misperception of gloss. A PDF of the press release is available here (in German) and in English translation.

Paper accepted at Journal of Cognitive Neuroscience

Journal of Cognitive Neuroscience

A major project from my previous postdoc, with Niko Kriegeskorte, has been accepted for publication. You can read it in preprint form on bioRxiv: Diverse deep neural networks all predict human IT well, after training and fitting.

Preprint on unsupervised learning of human-like gloss perception


A preprint of one of the main projects I’ve been working on for the past couple of years is now up on bioRxiv: Unsupervised Learning Predicts Human Perception and Misperception of Specular Surface Reflectance.

Social Media Editor for Perception and i-Perception journals

SAGE Publishers

As of 2020, I’ve taken up a small role as Social Media Editor for the Perception and i-Perception journals. See what we’re up to at

Awarded Humboldt Research Fellowship

Alexander von Humboldt Foundation

In 2019 I was awarded a Humboldt Research Fellowship for Postdoctoral Researchers from the Alexander von Humboldt Foundation. I will use the two-year fellowship to continue my work with Prof. Roland Fleming at the Justus Liebig University in Giessen, Germany, on unsupervised deep learning of visual properties.

New paper published: Learning to See Stuff

Current Opinion in Behavioral Science

My first paper with Roland Fleming is out, in which we present a case for the importance of unsupervised learning in core visual perception.
Aug-12-19 – Aug-14-19

Symposium: How Humans and Machines Learn to See

Rauischholzhausen Castle, Hesse, Germany

Together with Prof. Roland Fleming, I organised a three-day symposium this August at Rauischholzhausen Castle, Hesse, Germany, bringing together machine learning researchers, computational visual neuroscientists, and developmental vision scientists to discuss ecologically plausible visual learning. The symposium was funded by the German Research Council (DFG) under the SFB project 'Cardinal Mechanisms of Perception'.


December 2019 – Present
Giessen, Germany

Alexander von Humboldt Research Fellow

Justus-Liebig University

  • Psychophysics
  • Unsupervised deep learning
January 2018 – November 2019
Giessen, Germany

Postdoctoral Researcher
Justus-Liebig University

  • Psychophysics
  • Unsupervised deep learning
July 2017 – December 2017
London, UK

Data Scientist


Optimising measurement of perceived video quality.
February 2015 – June 2017
Cambridge, UK

Postdoctoral Researcher
MRC Cognition and Brain Sciences Unit, University of Cambridge

  • Representational similarity analysis
  • fMRI
  • Deep learning
October 2013 – July 2014
London, UK

Teaching Fellow in Visual Perception

University College London

Coordinating and lecturing undergraduate psychology courses.
November 2011 – February 2015
Brisbane, Australia

PhD candidate

University of Queensland

Thesis: “Norms are Not the Norm: Testing Theories of Sensory Encoding using Visual Aftereffects”