“Why do things look as they do?”
(K. Koffka, Principles of Gestalt Psychology, 1935).
The Wichmann lab's research goal is to understand visual perception: how does the visual system transform the photons absorbed by the retinal photoreceptors into meaning, that is, into perceived scenes, objects and actions? Hermann von Helmholtz, in his Handbuch der Physiologischen Optik, Teil III (1867), is typically credited with being the first to realize that visual perception is a form of inference: visual percepts are---typically reasonably accurate---hypotheses about the world based on incomplete sensory evidence. To make progress towards understanding this inference, our laboratory currently investigates perceptual processes from the level of simple artificial stimuli up to complex object and scene perception, combining psychophysical experiments with computational modeling.
Currently we have five research foci:
- Early Vision: Models & Data
We attempt to understand the initial coding of visual stimuli, the spatial and temporal dynamics of contrast gain-control, the form of spatial filters in early vision, and how this encoding is adapted to the characteristics of natural stimuli (Henning & Wichmann, 2007; Goris et al., 2009; Gerhard et al., in press). Furthermore, we work on a computationally efficient implementation of a psychophysical model of early spatial vision.
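As a minimal illustration of the kind of contrast gain-control mentioned above, the sketch below implements a generic textbook divisive-normalization (Naka-Rushton-style) contrast-response function; the function and all parameter values (`r_max`, `p`, `c50`) are standard illustrative choices, not taken from the lab's model.

```python
# Generic divisive contrast gain control (Naka-Rushton form):
#   R(c) = r_max * c**p / (c**p + c50**p)
# All parameter values below are hypothetical example values.

def gain_control_response(c, r_max=1.0, p=2.0, c50=0.2):
    """Response to contrast c; c50 is the semi-saturation contrast
    at which the response reaches half of r_max."""
    return r_max * c**p / (c**p + c50**p)

# The response accelerates at low contrast and saturates at high contrast:
for c in (0.05, 0.2, 0.8):
    print(c, round(gain_control_response(c), 3))
```

At `c = c50` the response is exactly half of `r_max`, which is why the semi-saturation constant is a convenient summary parameter of such a nonlinearity.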
- Mid-Level Vision: Luminance, Lightness & Brightness
Most results from early vision research come in the form of forced-choice discrimination thresholds: objective measurements of the sensitivity and limits of the visual system. Conscious vision, on the other hand, is typically concerned with the phenomenal appearance of objects and scenes, and here results often come in the form of phenomenal matches, or points of subjective equality. We are interested in the relation between the two types of measurement: does Weber's law, for example, hold for physical luminance or for subjective lightness?
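Weber's law, referred to above, states that the just-noticeable difference (JND) in a stimulus grows in proportion to its base magnitude, ΔI/I = k for a constant Weber fraction k. A minimal sketch, with a purely hypothetical Weber fraction:

```python
# Weber's law: the just-noticeable difference is a constant fraction of
# the base intensity, delta_I = k * I. The 8% fraction is a hypothetical
# example value, not a measured one.

def weber_jnd(intensity, weber_fraction=0.08):
    """Predicted JND at a given base intensity under Weber's law."""
    return weber_fraction * intensity

# Under Weber's law the ratio JND / intensity stays constant across levels:
for base in (10.0, 50.0, 200.0):
    jnd = weber_jnd(base)
    print(base, jnd, jnd / base)
```

Whether this constancy holds for the physical quantity (luminance) or for its subjective counterpart (lightness) is exactly the kind of question that requires relating threshold measurements to appearance matches.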
- Mid-Level Vision: Saliency, Eye-Movements and Models of Oculomotor Control
The human visual system is foveated, that is, only a fraction of the visual field can be analysed in detail at a given point in time. Thus we need to actively sample the visual scene through eye-movements. We are interested in disentangling bottom-up and top-down components of eye-movement control, and seek to understand how specific task demands---scene perception, visual search, reading---influence fixation location selection (Barthelmé et al., 2012).
- Methods: Nonlinear System Identification
Not knowing the critical---that is, behaviour-determining---features in the input constitutes one of the major obstacles to developing successful computational models of perceptual processes: given the high-dimensional input, which features do the sensory systems base their computations on? We develop inverse machine learning methods for (nonlinear) system identification and have applied them successfully to the identification of visually salient image features (Kienzle et al., 2009), to gender discrimination of human faces (Macke & Wichmann, 2010) and, recently, to the identification of auditory tones in noise using L1 regularization (Schönfelder & Wichmann, 2012).
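The L1-regularization idea can be sketched on toy data: a sparse linear model fitted by iterative soft-thresholding (ISTA) recovers which of several candidate features actually drive simulated "observer" responses. This is a generic illustration under assumed data, not the method of Schönfelder & Wichmann (2012):

```python
# Sketch (hypothetical setup, not the lab's pipeline): identify the
# behaviour-determining features by fitting a sparse linear model with
# L1 regularization via iterative soft-thresholding (ISTA).

import random

def soft_threshold(x, t):
    """Shrink x toward zero by t (the proximal operator of the L1 norm)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso_ista(X, y, lam=1.0, step=0.002, n_iter=1000):
    """Minimise 0.5 * ||Xw - y||^2 + lam * ||w||_1 by proximal gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        # gradient of the squared-error term
        residual = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * residual[i] for i in range(n)) for j in range(d)]
        # gradient step followed by soft-thresholding (the L1 proximal step)
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(d)]
    return w

# Toy "observer": responses depend only on feature 0; features 1-4 are irrelevant.
random.seed(0)
X = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(200)]
y = [2.0 * row[0] + random.gauss(0.0, 0.1) for row in X]

w = lasso_ista(X, y, lam=5.0)
print([round(v, 2) for v in w])  # weight on feature 0 large, the rest shrunk toward zero
```

The L1 penalty is what makes the result interpretable: irrelevant features receive exactly zero weight instead of small noisy ones, so the sparsity pattern itself identifies the critical features.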
- Methods: Statistical Modelling of Psychophysical Data
Typically, psychophysical research is limited to at most a few hundred trials per stimulus condition. Most traditional statistical techniques, however, rely on asymptotic theory, assuming Gaussian distributions of responses and measurement errors. We are exploring alternative statistical techniques based on Monte Carlo resampling (Wichmann & Hill, 2001) and on Bayesian statistics (Kuss et al., 2005). Recently, we began to develop methods to correct the size of the confidence intervals of parameters estimated by fitting a stationary observer model to non-stationary data (Fründ et al., 2011).
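A percentile-bootstrap confidence interval in the spirit of such Monte Carlo resampling can be sketched as follows; the trial counts, the 75%-correct criterion and the interpolation-based threshold estimator are all hypothetical illustrations, not the procedure of Wichmann & Hill (2001):

```python
# Sketch: parametric-bootstrap confidence interval for a psychophysical
# threshold. The data and the simple interpolation estimator are invented
# for illustration.

import random

levels   = [1.0, 2.0, 3.0, 4.0, 5.0]   # stimulus intensities (arbitrary units)
n_trials = 40                          # 2AFC trials per level
correct  = [22, 26, 31, 36, 39]        # hypothetical numbers correct

def threshold(props, levels, criterion=0.75):
    """Stimulus level at which proportion correct first crosses the
    criterion, by linear interpolation between adjacent levels."""
    for (x0, p0), (x1, p1) in zip(zip(levels, props), zip(levels[1:], props[1:])):
        if p0 <= criterion <= p1:
            return x0 + (criterion - p0) / (p1 - p0) * (x1 - x0)
    return float('nan')

props = [c / n_trials for c in correct]
est = threshold(props, levels)

# Parametric bootstrap: simulate binomial trial counts from the observed
# proportions, re-estimate the threshold, and read off percentiles.
random.seed(1)
boot = []
for _ in range(2000):
    sim = [sum(random.random() < p for _ in range(n_trials)) / n_trials
           for p in props]
    boot.append(threshold(sim, levels))
boot = sorted(b for b in boot if b == b)   # drop NaNs from degenerate samples
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(round(est, 2), round(lo, 2), round(hi, 2))
```

The appeal of this approach for small samples is that the interval reflects the actual binomial variability of a few hundred trials rather than a Gaussian approximation to it.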