

Color constancy is our ability to perceive constant colors across varying illuminations. Here, we trained deep neural networks to be color constant and evaluated their performance with varying cues. Inputs to the networks consisted of two-dimensional images of simulated cone excitations derived from three-dimensional (3D) rendered scenes of 2,115 different 3D shapes, with spectral reflectances of 1,600 different Munsell chips, illuminated under 278 different natural illuminations. The models were trained to classify the reflectance of the objects. Testing was done with four new illuminations with equally spaced CIEL*a*b* chromaticities, two along the daylight locus and two orthogonal to it. High levels of color constancy were achieved with different deep neural networks, and constancy was higher along the daylight locus. When gradually removing cues from the scene, constancy decreased. Both ResNets and classical ConvNets of varying degrees of complexity performed well. However, DeepCC, our simplest sequential convolutional network, represented colors along the three color dimensions of human color vision, while ResNets showed a more complex representation.

Color constancy denotes the ability to perceive constant colors, even though variations in illumination change the spectrum of the light entering the eye. Although extensively studied (see Gegenfurtner & Kiper, 2003; Witzel & Gegenfurtner, 2018; Foster, 2011, for reviews), it has yet to be fully understood. Color constancy is considered an ill-posed problem, and behavioral studies disagree on the degree of color constancy exhibited by human observers (Witzel & Gegenfurtner, 2018). It is argued from theoretical and mathematical considerations that perfect color constancy is not possible using only the available visual information (Maloney & Wandell, 1986; Logvinenko et al., 2015). Yet, observing that humans do achieve at least partial color constancy sparks the question about which cues and computations they use to do so.

It also remains unclear which neural mechanisms contribute to color constancy. Low-level, feedforward processes, such as adaptation and the double opponency of cells in early stages of the visual system, have been identified as being useful for color constancy (Gao et al., 2015). Yet, other studies suggest that higher-level and even cognitive processes such as memory also contribute. For example, better color constancy has been observed for known objects than for unknown ones (Granzier & Gegenfurtner, 2012; Olkkonen et al., 2008). Thus, we are still lacking a complete neural model of color constancy, which encompasses physiological similarities to the primate's visual system and at the same time exhibits similar behavior to humans on color constancy relevant tasks.
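The cone excitations mentioned above arise from a standard computation: the light reaching the eye is the wavelength-wise product of a surface reflectance and an illuminant spectrum, integrated against each cone's spectral sensitivity. The following is a minimal sketch of that computation, assuming toy Gaussian approximations to the L, M, and S cone fundamentals and hypothetical reflectance and illuminant spectra; the actual paper used measured Munsell reflectances, natural illuminations, and standard cone fundamentals.

```python
import numpy as np

# Wavelength sampling in nm across the visible range
wl = np.arange(400, 701, 10)

# Toy Gaussian stand-ins for the L, M, S cone fundamentals
# (peaks near 565, 535, 445 nm; real work would use measured
# fundamentals such as Stockman & Sharpe's)
def gaussian(peak, width=40.0):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

cones = np.stack([gaussian(565), gaussian(535), gaussian(445)])  # (3, n_wl)

# Hypothetical spectra: a reddish surface under a flat illuminant
reflectance = 0.2 + 0.6 * gaussian(620, 60)
illuminant = np.ones_like(wl, dtype=float)

# Light entering the eye: wavelength-wise product of the two spectra
radiance = reflectance * illuminant

# Cone excitations: integrate the radiance against each fundamental
lms = cones @ radiance
print(lms)  # three values: L, M, S excitation
```

For the network inputs described above, this per-pixel L, M, S triplet would be computed for every pixel of the rendered scene, yielding a three-channel image; the ill-posedness of color constancy is visible in this equation, since reflectance and illuminant enter only through their product.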
