Manuscript received May 31, 2024; revised June 22, 2024; accepted July 17, 2024; published February 25, 2025
Abstract—Representing the weights of a network with only 1 bit reduces the required memory footprint. Channel attention with the squeeze-and-excitation (SE) technique can eliminate redundant channels, further reducing the number of weights. Nevertheless, this combination causes an unstable and slow learning curve. To address this issue, this paper presents the first attempt to accelerate the learning curve even with a 1-bit weight representation across the entire SEResNet14 network, which significantly reduces the number of model parameters with only a minimal loss in accuracy. We also experimented with more aggressive activation functions such as HardTanh. We demonstrate that the Feature Map Binarization (FMB) method can reduce the number of active channels across different layers, thereby decreasing the number of weights along the channel dimension. We also present the first attempt to use EigenCAM for evaluating the channel attention effects. Experimental results demonstrate the efficacy of the proposed technique in the SE module in terms of learning-curve speed-up and the positional accuracy of the EigenCAM heat map; the heat-map position differs between the cases with and without the proposed technique.
Keywords—EigenCAM, ResNet14, CIFAR-10, SVHN, SE attention mechanism, 1-bit quantization, model compression, activation functions, channel feature map binarization, ultra-compact AI deployment
Cite: Wu Shaoqing and Hiroyuki Yamauchi, "A Binarized Feature Mapping Technique for Enhancing Squeeze-and-Excitation (SE) Channel Attention Mechanism," International Journal of Machine Learning vol. 15, no. 1, pp. 23-28, 2025.
Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
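As a rough illustration of the binarized SE channel-attention idea summarized in the abstract, the following PyTorch sketch binarizes the SE attention map so each channel receives a hard 0/1 gate, with HardTanh in the excitation path. This is a minimal sketch under assumptions: the class name BinarizedSEBlock, the 0.5 gating threshold, and the straight-through gradient trick are illustrative choices, not the authors' FMB implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizedSEBlock(nn.Module):
    """Hypothetical SE block whose channel attention map is binarized to a
    hard 0/1 gate (one plausible reading of feature map binarization, FMB).
    Names and thresholds are illustrative, not the paper's implementation."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pooling over spatial dimensions -> (N, C)
        s = x.mean(dim=(2, 3))
        # Excitation: HardTanh is used here as the "more aggressive"
        # activation mentioned in the abstract (assumption, not confirmed).
        a = F.hardtanh(self.fc1(s))
        a = torch.sigmoid(self.fc2(a))
        # Binarize the attention scores: channels above 0.5 are kept, the
        # rest are shut off; a straight-through estimator lets gradients
        # flow to the real-valued scores during training.
        hard = (a > 0.5).float()
        gate = hard + (a - a.detach())
        return x * gate.view(x.size(0), x.size(1), 1, 1)


if __name__ == "__main__":
    block = BinarizedSEBlock(channels=64)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Because the gate is exactly 0 or 1, channels gated off contribute no activity downstream, which is how such a scheme could reduce the effective number of weights along the channel dimension.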