

Poster

Designing Concise ConvNets with Columnar Stages

Ashish Kumar · Jaesik Park

Hall 3 + Hall 2B #364
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

In the era of vision Transformers, the recent success of VanillaNet shows the huge potential of simple and concise convolutional neural networks (ConvNets). While such models mainly focus on runtime, it is also crucial to simultaneously focus on other aspects, e.g., FLOPs and parameters, to strengthen their utility further. To this end, we introduce a refreshing ConvNet macro design called Columnar Stage Network (CoSNet). CoSNet has a systematically developed simple and concise structure, smaller depth, low parameter count, low FLOPs, and attention-less operations, well suited for resource-constrained deployment. The key novelty of CoSNet is deploying parallel convolutions with fewer kernels fed by input replication, using columnar stacking of these convolutions, and minimizing the use of 1×1 convolution layers. Our comprehensive evaluations show that CoSNet rivals many renowned ConvNet and Transformer designs under resource-constrained scenarios. Pretrained models will be open-sourced.
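The input-replication idea in the abstract can be sketched as follows: the full input is fed (replicated) to several parallel convolutions, each with fewer kernels, and their outputs are concatenated. This is a minimal NumPy illustration of that structure only; the function names, shapes, and the exact stacking scheme are assumptions, not the authors' implementation.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid cross-correlation.
    x: (C_in, H, W) input, w: (C_out, C_in, k, k) kernels."""
    C_out, C_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((C_out, H, W))
    for o in range(C_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o])
    return out

def parallel_stage(x, weight_list):
    """One stage in the spirit of CoSNet's parallel convolutions:
    the input x is replicated to each of the P parallel convs,
    each conv has fewer kernels (C_out / P), and the P outputs
    are concatenated along the channel axis."""
    return np.concatenate([conv2d(x, w) for w in weight_list], axis=0)
```

Note that, channel-count for channel-count, concatenating P parallel convolutions over a replicated input produces the same output as one wide convolution whose kernels are the stacked per-branch kernels; the benefit lies in how the branches are organized and stacked column-wise, not in changing the linear map itself.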
