
RepVGG: Making VGG-style ConvNets Great Again

January 2021

tl;dr: Train with a multi-branch topology, run inference with a plain stack of 3x3 convs only. Very deployment-friendly.

Overall impression

From the authors of ACNet.

The paper introduces a simple algebraic re-parameterization technique that transforms a multi-branch training topology into a plain stack of 3x3 convs and ReLUs at inference time.
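A minimal sketch of the structural re-parameterization idea (not the authors' official code): fuse the 3x3, 1x1, and identity branches of one block, each followed by BN, into a single 3x3 conv. The block layout, key names, and equal in/out channel counts are assumptions for illustration.

```python
import torch
import torch.nn as nn


def fuse_conv_bn(conv_weight, bn):
    """Fold a BatchNorm layer into the preceding (bias-free) conv."""
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                       # per-output-channel scale
    fused_w = conv_weight * scale.reshape(-1, 1, 1, 1)
    fused_b = bn.bias - bn.running_mean * scale
    return fused_w, fused_b


def reparameterize(block):
    """Collapse a multi-branch block into one 3x3 conv (weight, bias)."""
    c = block["conv3x3"].weight.shape[0]          # assumes in_channels == out_channels

    # Branch 1: 3x3 conv + BN.
    w3, b3 = fuse_conv_bn(block["conv3x3"].weight, block["bn3x3"])

    # Branch 2: 1x1 conv + BN, zero-padded to a 3x3 kernel.
    w1, b1 = fuse_conv_bn(block["conv1x1"].weight, block["bn1x1"])
    w1 = nn.functional.pad(w1, [1, 1, 1, 1])      # put the 1x1 at the kernel center

    # Branch 3: identity + BN, written as a 3x3 conv whose kernel maps each
    # channel to itself (a 1 at the center of its own channel).
    id_w = torch.zeros(c, c, 3, 3)
    for i in range(c):
        id_w[i, i, 1, 1] = 1.0
    wi, bi = fuse_conv_bn(id_w, block["bn_id"])

    # The branch outputs are summed, so the fused weights and biases simply add.
    return w3 + w1 + wi, b3 + b1 + bi
```

The fused weight and bias can then be loaded into a single nn.Conv2d(c, c, 3, padding=1), which produces the same output as the original three-branch block in eval mode.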

Depthwise conv and channel shuffle increase memory access cost and are poorly supported on many devices. FLOP counts do not accurately reflect actual inference speed. A rough benchmark along these lines is sketched below.
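A rough sketch (my own, not from the paper) to illustrate that FLOPs alone do not predict latency: time a dense 3x3 conv against a depthwise 3x3 + pointwise 1x1 pair, which has far fewer FLOPs; on many devices the gap in wall-clock time may be much smaller than the gap in FLOPs. Tensor sizes are arbitrary assumptions.

```python
import time
import torch
import torch.nn as nn

x = torch.randn(1, 128, 56, 56)
dense = nn.Conv2d(128, 128, 3, padding=1, bias=False)
depthwise = nn.Sequential(
    nn.Conv2d(128, 128, 3, padding=1, groups=128, bias=False),  # depthwise 3x3
    nn.Conv2d(128, 128, 1, bias=False),                          # pointwise 1x1
)

def bench(m, n=50):
    """Average forward-pass latency over n runs (after one warm-up pass)."""
    with torch.no_grad():
        m(x)                                   # warm-up
        t0 = time.perf_counter()
        for _ in range(n):
            m(x)
    return (time.perf_counter() - t0) / n

print(f"dense 3x3:        {bench(dense) * 1e3:.2f} ms")
print(f"depthwise + 1x1:  {bench(depthwise) * 1e3:.2f} ms")
```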

The structural re-parameterization reminds me of the inflated 3D convs in I3D, which are initialized from 2D convs.

This architecture seems very friendly for deployment on embedded devices.

Key ideas

Technical details

Notes