The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models

arXiv:2602.16309v2 Announce Type: replace-cross

Abstract: Fault injection attacks on embedded neural network models have been shown to be a potent threat, and numerous works have studied model resilience from various points of view. As of now, however, there is no comprehensive study evaluating how the number representation used for model parameters affects resilience to electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip, and utilized a low-cost EMFI platform to trigger faults. Beyond accuracy evaluation, we characterize the injected fault pattern by analyzing the bit error rate, the spatial distribution of corrupted bytes, and the prevalence of 0xFE/0xFF byte values across formats, identifying the mechanisms responsible for the observed differences in resilience. Our results show that while floating-point representations exhibit an almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall. In particular, the 8-bit representation on a relatively large network (VGG-11) retains a Top-1 accuracy of around 70% and a Top-5 accuracy of around 90%.
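The gap between floating-point and integer resilience reported above is easy to build intuition for: a single flipped bit in an IEEE-754 exponent can inflate a weight by many orders of magnitude, while a bit flip in a quantized integer weight is bounded by the representable range. The sketch below (illustrative only, not code from the paper; the helper names are mine) demonstrates this for float32 versus signed int8:

```python
import struct

def flip_bit_f32(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 binary32 encoding of x."""
    (i,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))
    return y

def flip_bit_i8(q: int, bit: int) -> int:
    """Flip one bit in a signed 8-bit (two's-complement) weight."""
    flipped = (q & 0xFF) ^ (1 << bit)
    return flipped - 256 if flipped >= 128 else flipped

# Flipping the most significant exponent bit (bit 30) of the weight 0.5
# turns it into 2**127 (~1.7e38) -- enough to wreck any activation it feeds.
print(flip_bit_f32(0.5, 30))   # 1.7014118346046923e+38

# The worst-case flip of an int8 weight (the sign bit) only negates-and-shifts
# it within [-128, 127], so the damage to the dot product stays bounded.
print(flip_bit_i8(64, 7))      # -64
```

This bounded-error property of integer formats is one plausible reason the 8-bit VGG-11 model retains most of its accuracy after a fault, whereas the float models collapse.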
