Conference paper (in proceedings)

Large language models for in-file vulnerability localization can be “Lost in the End”

  • 2025
Published in:
  • Proceedings of the ACM on Software Engineering. - 2025, vol. 2, no. FSE
Abstract
Traditionally, software vulnerability detection research has focused on individual small functions, owing to earlier language processing technologies' limitations in handling larger inputs. However, this function-level approach can miss bugs that span multiple functions and code blocks. Recent advances in artificial intelligence have enabled the processing of larger inputs, leading everyday software developers to increasingly rely on chat-based large language models (LLMs), such as GPT-3.5 and GPT-4, to detect vulnerabilities across entire files, not just within functions. This new development practice requires researchers to urgently investigate whether commonly used LLMs can effectively analyze large, file-sized inputs, so as to provide timely insights for software developers and engineers about the pros and cons of this emerging technological trend. Hence, the goal of this paper is to evaluate the effectiveness of several state-of-the-art chat-based LLMs, including the GPT models, in detecting in-file vulnerabilities. We conducted a costly investigation into how the performance of LLMs varies with vulnerability type, input size, and vulnerability location within the file. To give our study sufficient statistical power (β ≥ .8), we could only focus on the three most common (and most dangerous) vulnerabilities: XSS, SQL injection, and path traversal. Our findings indicate that the effectiveness of LLMs in detecting these vulnerabilities is strongly influenced by both the location of the vulnerability and the overall size of the input. Specifically, regardless of the vulnerability type, LLMs tend to significantly (p < .05) underperform when detecting vulnerabilities located toward the end of larger files, a pattern we call the 'lost-in-the-end' effect. Finally, to further support software developers and practitioners, we also explore the optimal input size for these LLMs and present a simple strategy for identifying it, which can be applied to other models and vulnerability types. We show that adjusting the input size can lead to significant improvements in LLM-based vulnerability detection, with an average recall increase of over 37% across all models.
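The record does not include code, so the following is only a minimal, hypothetical sketch of the general idea described in the abstract: bounding the size of each prompt before asking a chat-based LLM to flag in-file vulnerabilities, so that no code sits deep in the tail of an overly large input. It is not the authors' actual strategy; the function names (`chunk_file`, `localize_vulnerabilities`), the `ask_llm` callable, and the `max_lines = 200` budget are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): split a source file into
# size-bounded chunks and query a chat-based LLM once per chunk, so that
# findings near the end of a large file are not "lost in the end".
# `ask_llm` is a stand-in for any chat-based LLM call.

from typing import Callable, List


def chunk_file(source: str, max_lines: int = 200) -> List[str]:
    """Split a file's text into consecutive chunks of at most `max_lines` lines."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]


def localize_vulnerabilities(source: str,
                             ask_llm: Callable[[str], str],
                             max_lines: int = 200) -> List[str]:
    """Query the LLM once per size-bounded chunk and collect its answers."""
    findings = []
    for i, chunk in enumerate(chunk_file(source, max_lines)):
        prompt = (
            "Report any XSS, SQL injection, or path traversal vulnerabilities "
            f"in the following code (chunk {i + 1}):\n\n{chunk}"
        )
        findings.append(ask_llm(prompt))
    return findings
```

In this sketch, `max_lines` plays the role of the input-size budget the paper tunes; the abstract reports that choosing it well raised average recall by over 37% across the evaluated models.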
Language
  • English
Classification
Computer science and technology
Notes
  • ACM International Conference on the Foundations of Software Engineering (FSE)
  • Trondheim, Norway
  • Mon 23 - Fri 27 June 2025
License
CC BY
Open access status
hybrid
Identifiers
  • ISSN 2994-970X
  • DOI 10.1145/3715758
  • RICERCO 37396
  • ARK ark:/12658/srd1334061
Persistent URL
https://n2t.net/ark:/12658/srd1334061