Due to the variability in image structures caused by perspective scaling transformations, it is essential for deep networks to have the ability to generalise to scales not seen during training. This paper presents an in-depth analysis of the scale generalisation properties of the scale-covariant and scale-invariant Gaussian derivative networks, complemented with both conceptual and algorithmic extensions. For this purpose, Gaussian derivative networks (GaussDerNets) are evaluated on new rescaled versions of the Fashion-MNIST and CIFAR-10 datasets, with spatial scaling variations over a factor of 4 in the testing data that are not present in the training data. Additionally, evaluations on the previously existing STIR datasets show that the GaussDerNets achieve better scale generalisation than previously reported for other types of deep networks on these datasets.
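To make the architectural idea concrete, the following is a minimal sketch, not the authors' implementation, of one Gaussian derivative layer applied over a set of scale channels, where each feature map is a learned linear combination of scale-normalised Gaussian derivative responses up to order two; the function name and the weights `w` are hypothetical, and the weights are assumed to be shared across the scale channels.

```python
# A minimal sketch (not the authors' implementation) of one Gaussian-derivative
# layer applied over several scale channels; the weights `w` are hypothetical
# learned coefficients, shared across the scale channels.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_layer(image, sigmas, w):
    """One feature map per scale channel: a learned linear combination of
    scale-normalised Gaussian derivatives up to order two at scale sigma."""
    channels = []
    for sigma in sigmas:
        Lx  = gaussian_filter(image, sigma, order=(0, 1))   # d/dx
        Ly  = gaussian_filter(image, sigma, order=(1, 0))   # d/dy
        Lxx = gaussian_filter(image, sigma, order=(0, 2))
        Lxy = gaussian_filter(image, sigma, order=(1, 1))
        Lyy = gaussian_filter(image, sigma, order=(2, 0))
        # Scale normalisation (gamma = 1): multiply order-n derivatives by sigma^n
        feats = np.stack([sigma * Lx, sigma * Ly,
                          sigma**2 * Lxx, sigma**2 * Lxy, sigma**2 * Lyy])
        channels.append(np.tensordot(w, feats, axes=1))
    return np.stack(channels)            # shape: (num_scales, H, W)

sigmas = [1.0, 2.0, 4.0]                 # logarithmically distributed scale levels
w = np.random.randn(5)                   # hypothetical learned weights
feature_maps = gaussian_derivative_layer(np.random.rand(64, 64), sigmas, w)
```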
We first experimentally demonstrate that the GaussDerNets have quite good scale generalisation properties on the new datasets, and that average pooling of feature responses over scales may sometimes lead to better results than the previously used approach of max pooling over scales. Then, we demonstrate that a spatial max pooling mechanism after the final layer enables localisation of non-centred objects in the image domain, with maintained scale generalisation properties. We also show that regularisation during training, by applying dropout across the scale channels, referred to as scale-channel dropout, improves both performance and scale generalisation.
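The following is a hedged sketch, under the assumption that the network output is a stack of scale-channel responses, of the three mechanisms just mentioned: max or average pooling over scales, spatial max pooling after the final layer, and scale-channel dropout; the function names and the inverted-dropout normalisation are illustrative choices, not the paper's exact implementation.

```python
# A sketch (not the paper's code) of the mechanisms described above, operating
# on a stack of scale-channel responses of shape (num_scales, H, W); the
# dropout normalisation by 1/(1 - p) is an illustrative inverted-dropout choice.
import numpy as np

def pool_over_scales(scale_stack, mode="max"):
    # Max or average pooling of feature responses over the scale channels
    return scale_stack.max(axis=0) if mode == "max" else scale_stack.mean(axis=0)

def spatial_max_pool(feature_map):
    # Global spatial max after the final layer, so that the strongest response
    # may originate from a non-centred object position in the image domain
    return feature_map.max()

def scale_channel_dropout(scale_stack, p=0.25, rng=np.random.default_rng(0)):
    # Training-time regularisation: drop entire scale channels with probability p
    keep = (rng.random(scale_stack.shape[0]) >= p).astype(scale_stack.dtype)
    return scale_stack * keep[:, None, None] / (1.0 - p)

stack = np.random.rand(8, 32, 32)                       # 8 scale channels
score_max = spatial_max_pool(pool_over_scales(stack, "max"))
score_avg = spatial_max_pool(pool_over_scales(stack, "average"))
stack_regularised = scale_channel_dropout(stack)        # used only during training
```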
In additional ablation studies, we show that, for the rescaled CIFAR-10 dataset, basing the layers in the GaussDerNets on derivatives up to order three leads to better performance and scale generalisation at coarser scales, whereas networks based on derivatives up to order two achieve better scale generalisation at finer scales. Moreover, we demonstrate that discretisations of GaussDerNets based on the discrete analogue of the Gaussian kernel, in combination with central difference operators, perform best, or among the best, compared to a set of other discrete approximations of the Gaussian derivative kernels. Furthermore, we show that the improvement in performance obtained by learning the scale values of the Gaussian derivatives, as opposed to using the previously proposed fixed logarithmic distribution of the scale levels, is usually only minor, thus supporting the use of a logarithmic scale distribution as a very reasonable prior.
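As an illustration of the discretisation referred to above, the sketch below builds the 1-D discrete analogue of the Gaussian kernel, T(n; t) = e^{-t} I_n(t) with I_n the modified Bessel function of integer order, applies central difference operators for the derivatives, and generates a logarithmic distribution of scale levels; the truncation radius and function names are assumed for illustration and do not reproduce the paper's exact implementation.

```python
# Illustrative sketch: 1-D discrete analogue of the Gaussian kernel
# T(n; t) = exp(-t) I_n(t), combined with central difference operators, and a
# logarithmic distribution of scale levels; the truncation radius is an
# assumed choice, not the paper's.
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel function

def discrete_gaussian_kernel(t, radius=None):
    # ive(n, t) = exp(-t) * I_n(t) for t > 0, i.e. the discrete Gaussian at n
    if radius is None:
        radius = int(np.ceil(4.0 * np.sqrt(t))) + 1
    return ive(np.arange(-radius, radius + 1), t)

def smoothed_derivatives_1d(signal, sigma):
    t = sigma**2                                           # scale parameter (variance)
    L   = np.convolve(signal, discrete_gaussian_kernel(t), mode="same")
    Lx  = np.convolve(L, [0.5, 0.0, -0.5], mode="same")    # central difference, order 1
    Lxx = np.convolve(L, [1.0, -2.0, 1.0], mode="same")    # central difference, order 2
    return L, Lx, Lxx

# Logarithmic distribution of the scale levels, here with a ratio of 2
sigmas = 1.0 * 2.0 ** np.arange(4)                         # [1, 2, 4, 8]
L, Lx, Lxx = smoothed_derivatives_1d(np.random.rand(128), sigmas[0])
```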
Finally, by visualising the activation maps and the learned receptive fields, we demonstrate that the GaussDerNets have very good explainability properties.
Keywords: Scale covariance, Scale invariance, Scale generalisation, Scale selection, Gaussian derivative, Scale space, Deep learning, Receptive fields