

Poster

Learning what and where to attend

Drew Linsley · Dan Shiebler · Sven Eberhardt · Thomas Serre

Great Hall BC #47

Keywords: [ object recognition ] [ cognitive science ] [ attention models ] [ human feature importance ]


Abstract:

Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived "top-down" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than "bottom-up" saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers.
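To make the idea of supervising attention with human maps concrete, below is a minimal sketch (not the authors' implementation) of a CNN whose learned spatial attention map is trained against a human-derived map alongside the classification objective. The architecture, the normalized-MSE attention loss, and the weighting term `lambda_attn` are all illustrative assumptions; the paper's exact network and loss formulation may differ.

```python
# Illustrative sketch of ClickMe-style attention supervision.
# All names and hyperparameters here are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCNN(nn.Module):
    """Small CNN with a learned single-channel spatial attention map."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv produces the spatial attention map over the features.
        self.attn_head = nn.Conv2d(64, 1, kernel_size=1)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.features(x)                     # (B, 64, H, W)
        attn = torch.sigmoid(self.attn_head(feats))  # (B, 1, H, W)
        gated = feats * attn                         # attention-gated features
        pooled = gated.mean(dim=(2, 3))              # global average pool
        return self.classifier(pooled), attn

def attention_supervision_loss(pred_attn, human_map):
    """MSE between L2-normalized predicted and human attention maps
    (one plausible choice of supervision term)."""
    p = pred_attn.flatten(1)
    h = human_map.flatten(1)
    p = p / (p.norm(dim=1, keepdim=True) + 1e-8)
    h = h / (h.norm(dim=1, keepdim=True) + 1e-8)
    return F.mse_loss(p, h)

# Toy training step on random stand-in data.
model = AttentionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
human_maps = torch.rand(8, 1, 32, 32)  # stand-in for ClickMe maps

logits, attn = model(images)
# Resize human maps to the attention map's spatial resolution.
human_maps = F.interpolate(human_maps, size=attn.shape[-2:],
                           mode="bilinear", align_corners=False)
lambda_attn = 1.0  # weight on the attention term (a hyperparameter)
loss = (F.cross_entropy(logits, labels)
        + lambda_attn * attention_supervision_loss(attn, human_maps))
loss.backward()
opt.step()
```

The key design point this sketch illustrates is that the human map enters only as an auxiliary loss on the attention branch, so the network remains optimized end to end for recognition while being nudged toward human-diagnostic image regions.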
