DOI: https://doie.org/10.0618/Jbse.2024912129
Manjula B Bhajantri, Dr. Sharanabasaveshwar G Hiremath
Keywords: CNN, FPGA, MAC, Multiplier, Approximate Compressor
The latest Convolutional Neural Networks (CNNs) incorporate more convolution layers than previous generations in order to improve classification accuracy and super-resolution performance. While many researchers focus on shrinking the network to lower computational cost while preserving accuracy, others optimize the individual convolution layers themselves. In this paper, we introduce a novel approximate-computing approach based on 4-2 compressors, applied to both Baugh-Wooley and Booth multipliers. Convolution layers in CNNs rely heavily on multiply-and-accumulate (MAC) operations, and we integrate the approximate multipliers into a modified MAC structure to make more efficient use of Field Programmable Gate Array (FPGA) resources. Our results show that the proposed approximate compressors reduce the area-delay product (ADP) by 15.4% and the area-power product (APP) by 35.7% compared with previous design methodologies.
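The abstract does not specify the internal logic of the proposed approximate 4-2 compressor, so the following is only an illustrative sketch. It models an exact 4-2 compressor (whose outputs encode the bit-count of its five inputs as sum + 2*(carry + cout)) alongside a hypothetical low-cost approximation that drops the carry-in/carry-out and uses cheaper gates, then enumerates all input patterns to measure the error this trade-off introduces. The approximate gate equations here are assumptions for demonstration, not the paper's design.

```python
from itertools import product

def exact_4_2(x1, x2, x3, x4, cin):
    """Exact 4-2 compressor: sum + 2*(carry + cout) equals the
    number of 1s among the five inputs."""
    total = x1 + x2 + x3 + x4 + cin
    s = total & 1
    pairs = total >> 1            # 0, 1, or 2 pairs of ones
    cout = 1 if pairs >= 1 else 0
    carry = 1 if pairs == 2 else 0
    return s, carry, cout

def approx_4_2(x1, x2, x3, x4):
    """Hypothetical approximation: no cin/cout, two XORs, two ANDs,
    two ORs. Encodes its result as sum + 2*carry (max value 3)."""
    s = (x1 ^ x2) | (x3 ^ x4)
    carry = (x1 & x2) | (x3 & x4)
    return s, carry

# Exhaustively compare the two designs over all 16 input patterns.
errors = 0
total_error_distance = 0
for bits in product((0, 1), repeat=4):
    es, ec, eco = exact_4_2(*bits, 0)
    exact_value = es + 2 * (ec + eco)       # equals sum(bits)
    s, carry = approx_4_2(*bits)
    approx_value = s + 2 * carry
    if approx_value != exact_value:
        errors += 1
        total_error_distance += abs(approx_value - exact_value)

print(errors, total_error_distance)   # → 5 6
```

For this particular approximation, 5 of the 16 input patterns produce an erroneous output (the compressor underestimates when ones land in different input pairs). In a multiplier's partial-product reduction tree, such small, bounded errors typically shift the final product only slightly, which is why approximate compressors can trade a little accuracy for large area and power savings in CNN MAC units.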