
In-Person Poster presentation / poster accept

DamoFD: Digging into Backbone Design on Face Detection

Yang Liu · Jiankang Deng · Fei Wang · Lei Shang · Xuansong Xie · Baigui Sun

MH1-2-3-4 #35

Keywords: [ Applications ] [ Face Detection ] [ Neural Architecture Search ] [ Network Expressivity ]


Face detection (FD) has achieved remarkable success over the past few years, yet these leaps often come at the cost of enormous computation. Moreover, in a realistic situation, i.e., building a lightweight face detector under a computation-scarce scenario, such heavy computation cost limits the application of the face detector. To remedy this, several pioneering works design tiny face detectors through off-the-shelf neural architecture search (NAS) technologies, which are usually applied to the classification task. The searched architectures are therefore sub-optimal for the face detection task, since some design criteria differ between detection and classification. As a representative example, a face detection backbone must guarantee stage-level detection ability, which is not required of a classification backbone. Furthermore, the detection backbone consumes the bulk of the inference budget of the whole detection framework. Considering this intrinsic design requirement and the vitally important role of the face detection backbone, we ask a critical question: how can NAS be employed to search for an FD-friendly backbone architecture? To answer this question, we propose a distribution-dependent stage-aware ranking score (DDSAR-Score) to explicitly characterize stage-level expressivity and identify the individual importance of each stage, thus satisfying the aforementioned design criterion of the FD backbone. Based on our proposed DDSAR-Score, we conduct comprehensive experiments on the challenging WIDER FACE benchmark and achieve dominant performance across a wide range of compute regimes. In particular, compared to the tiniest face detector, SCRFD-0.5GF, our method is +2.5% better in Average Precision (AP) when using the same amount of FLOPs. The code is available at
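The abstract does not give the DDSAR-Score formula itself, so the following is only a rough NumPy sketch of the general idea it describes: score each backbone stage's expressivity separately, then weight the per-stage scores by their individual importance. The stage score here is a NASWOT-style activation-pattern proxy (log-determinant of a binary activation-code kernel), which is an assumed stand-in, not the paper's actual score; the function names (`stage_expressivity`, `score_backbone`) and the toy fully-connected "stages" are hypothetical.

```python
import numpy as np

def stage_expressivity(pre_acts):
    """Proxy expressivity of one stage from its pre-ReLU activations.

    pre_acts: (batch, features) array. Each sample's binary activation
    pattern is compared against all others; more distinct patterns give
    a larger log-determinant (NASWOT-style proxy, assumed here -- not
    the paper's DDSAR-Score).
    """
    codes = (pre_acts > 0).astype(float)          # binary activation pattern per sample
    # Kernel counting per-pair agreements of activation patterns.
    K = codes @ codes.T + (1.0 - codes) @ (1.0 - codes).T
    _, logdet = np.linalg.slogdet(K)
    return logdet

def score_backbone(stage_widths, batch=32, in_dim=16, stage_weights=None, seed=0):
    """Weighted sum of per-stage expressivity scores for a toy ReLU backbone.

    stage_widths:  output width of each (hypothetical) stage.
    stage_weights: per-stage importance weights; uniform if not given.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((batch, in_dim))
    scores = []
    d = in_dim
    for w in stage_widths:
        W = rng.standard_normal((d, w)) / np.sqrt(d)  # random untrained stage weights
        pre = x @ W
        scores.append(stage_expressivity(pre))
        x = np.maximum(pre, 0.0)                      # ReLU into the next stage
        d = w
    if stage_weights is None:
        stage_weights = [1.0] * len(scores)
    return float(sum(a * s for a, s in zip(stage_weights, scores)))
```

Under this sketch, candidate backbones would be ranked by `score_backbone` without any training, and the (assumed) `stage_weights` are where a stage-aware score departs from a single whole-network proxy.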
