Journal article

An empirical study of fault localisation techniques for deep neural networks

  • 2025
Published in:
  • Empirical Software Engineering. - 2025, vol. 30, no. 5, p. 124
English
With the increased popularity of Deep Neural Networks (DNNs), the need for tools that assist developers in implementing, testing and debugging DNNs also increases. Several approaches have been proposed that automatically analyse and localise potential faults in DNNs under test. In this work, we evaluate and compare existing state-of-the-art fault localisation techniques, which rely on both dynamic and static analysis of the DNN. The evaluation is performed on a benchmark consisting of both real faults obtained from bug reporting platforms and faulty models produced by a mutation tool. Our findings indicate that using a single, specific ground truth (e.g. the human-defined one) for the evaluation of DNN fault localisation tools results in rather low performance (maximum average recall of 0.33 and precision of 0.21). However, these figures increase when alternative, equivalent patches that exist for a given faulty DNN are also considered. The results indicate that DeepFD is the most effective tool, achieving an average recall of 0.55 and a precision of 0.37 on our benchmark.
Language
  • English
Classification
Computer science and technology
License
CC BY
Open access status
hybrid
Identifiers
Persistent URL
https://n2t.net/ark:/12658/srd1333991
Statistics

Document views: 6
File downloads:
  • Tonella_2025_Springer_s10664-025-10657-7: 11