The workhorses of CNNs are their filters, located at different layers and tuned to different features. Filter responses are combined using weights obtained via network training, which is aimed at optimal results over the entire training data, e.g., the highest average classification accuracy. In this paper, we are interested in extending the current understanding of the roles played by the filters, their mutual interactions, and their relationship to classification accuracy. This is motivated by the observation that the classification accuracy for some classes increases, rather than decreases, when certain filters are pruned from a CNN. We experimentally address the following question: under what conditions does filter pruning increase classification accuracy? We show that the improvement in classification accuracy occurs for certain classes. These classes are placed during learning into a space (spanned by filter usage) populated with semantically related neighbors. The neighborhood structure of such classes is, however, sparse enough that the compression induced by pruning, which brings all classes closer together, also brings the sample data closer together and thus increases classification accuracy.
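To make the experimental setup concrete, below is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: zeroing out selected convolutional filters and comparing per-class accuracy before and after pruning. The model, data loader, layer, and filter indices are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from collections import defaultdict

@torch.no_grad()
def prune_filters(conv: nn.Conv2d, filter_ids):
    """Prune by zeroing the given output filters (and biases) of a conv layer."""
    for i in filter_ids:
        conv.weight[i].zero_()
        if conv.bias is not None:
            conv.bias[i].zero_()

@torch.no_grad()
def per_class_accuracy(model, loader, num_classes):
    """Return a dict mapping class id -> accuracy on that class."""
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        for y, p in zip(labels.tolist(), preds.tolist()):
            total[y] += 1
            correct[y] += int(p == y)
    return {c: correct[c] / total[c] for c in range(num_classes) if total[c] > 0}

# Usage sketch (assumes `model`, `val_loader`, and `NUM_CLASSES` are defined):
# before = per_class_accuracy(model, val_loader, NUM_CLASSES)
# prune_filters(model.features[0], filter_ids=[3, 7, 12])  # hypothetical layer/ids
# after = per_class_accuracy(model, val_loader, NUM_CLASSES)
# improved = [c for c in before if after.get(c, 0.0) > before[c]]
```

Classes whose accuracy rises after pruning (the `improved` list above) are the cases of interest in the paper.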
- K. Abdiyeva, M. Lukac, and N. Ahuja, "Remove to Improve," Int. Conf. on Pattern Recognition (ICPR) EDL-AI Workshop, Jan. 2021.