

In-Person Poster presentation / poster accept

Does Deep Learning Learn to Abstract? A Systematic Probing Framework

Shengnan An · Zeqi Lin · Bei Chen · Qiang Fu · Nanning Zheng · Jian-Guang Lou

MH1-2-3-4 #87

Keywords: [ Deep Learning and representational learning ] [ Pre-Trained Language Model ] [ Abstraction Capability ] [ deep learning ] [ Probing Tasks ]


Abstract:

Abstraction is a desirable capability for deep learning models: the ability to induce abstract concepts from concrete instances and to apply them flexibly beyond the learning context. Yet there is little clear understanding of whether deep learning models possess this capability, let alone of its further characteristics. In this paper, we introduce a systematic probing framework that explores the abstraction capability of deep learning models from a transferability perspective. A set of controlled experiments conducted within this framework provides strong evidence that two probed pre-trained language models (PLMs), T5 and GPT2, have the abstraction capability. Our in-depth analysis sheds further light: (1) the training phase exhibits a "memorize-then-abstract" two-stage process; (2) the learned abstract concepts are gathered in a few middle-layer attention heads rather than being evenly distributed throughout the model; (3) the probed abstraction capabilities are robust against concept mutations, and are more robust to low-level/source-side mutations than to high-level/target-side ones; (4) generic pre-training is critical to the emergence of abstraction capability, and PLMs exhibit better abstraction at larger model sizes and data scales.
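To make the transferability perspective concrete, here is a minimal, purely illustrative Python sketch (not the authors' actual construction): train and test splits instantiate the same abstract rule with disjoint surface vocabularies, so above-chance test accuracy would indicate that a probed model induced the abstraction rather than memorizing tokens. The toy rule, vocabularies, and all names below are hypothetical.

```python
import random

# Hypothetical abstract concept: "output the middle token of the input".
# Train and test examples share this rule but draw their surface tokens
# from disjoint vocabularies, so success on the test split requires
# transferring the abstraction beyond the learning context.

random.seed(0)

TRAIN_VOCAB = [f"a{i}" for i in range(50)]  # surface forms seen in training
TEST_VOCAB = [f"b{i}" for i in range(50)]   # surface forms never seen in training

def make_example(vocab: list[str]) -> dict[str, str]:
    """Instantiate the abstract rule with concrete tokens from `vocab`."""
    x, y, z = random.sample(vocab, 3)
    return {"input": f"{x} {y} {z}", "target": y}

train_set = [make_example(TRAIN_VOCAB) for _ in range(1000)]
test_set = [make_example(TEST_VOCAB) for _ in range(200)]

# A probed PLM would be fine-tuned on `train_set` and evaluated on
# `test_set`; transfer accuracy above chance suggests the abstract
# concept was induced rather than memorized.
print(train_set[0])  # e.g. {'input': 'a24 a16 a47', 'target': 'a16'}
print(test_set[0])
```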
