Doctoral thesis

Exploring the usage of pre-trained models for code-related tasks

  • 2024

PhD: Università della Svizzera italiana

Abstract

Developers are often faced with the challenge of writing high-quality code while meeting tight time constraints. Recent literature exploits Deep Learning (DL) models to support developers in code-related tasks. For example, DL-based approaches have been proposed to automate bug fixing, code summarization, and code review. Some of these tasks require working with both code and technical natural language (e.g., code summarization), posing additional challenges in the training of DL models, which must deal with bi-modal data.

Our goal is to widen the support given to developers dealing with code-related tasks characterized by technical natural language and code. To this end, we started by investigating the benefits brought by the "pretrain-then-finetune" paradigm when using DL models to automate code-related activities. The basic idea of this paradigm is to first pre-train the model on self-supervised tasks with the sole goal of learning the languages of interest (e.g., technical English and code). The fine-tuning phase then specializes the model for the specific task of interest (e.g., code summarization).

Given the positive results we achieved, we focused our research on two code-related tasks characterized by both code and natural language. The first is the already mentioned code summarization, which consists of generating a natural language summary for a given piece of code (e.g., a method or code snippet). In this context, we also present a novel metric aimed at assessing automatically generated code summaries. The second is the generation and injection of complete log statements, in which the DL model takes a code component as input and recommends to developers which log statements may be beneficial to inject: the model generates the log statement (including a meaningful log message) and injects it at the correct code location.
Finally, given the increasing popularity of the GitHub Copilot code recommender, we ran an empirical study on a third task characterized by both code and natural language, namely code generation (i.e., generating the code needed to implement a functionality described in natural language). In particular, we investigated Copilot's robustness in handling different yet semantically equivalent natural language descriptions of the code to implement (prompts), showing its sensitivity to the wording used in the prompt (i.e., minor changes to the prompt result in different code synthesized by Copilot).
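The two-phase pretrain-then-finetune paradigm described in the abstract can be illustrated with a deliberately tiny sketch. The model below is a toy bigram next-token predictor, not the Transformer models used in the thesis; the corpora and the `TinyLM` class are invented for illustration. What it shows is only the shape of the paradigm: a self-supervised phase that learns general statistics of code and technical English, followed by a fine-tuning phase that continues training on task-specific data while reusing what was already learned.

```python
from collections import defaultdict


class TinyLM:
    """Toy next-token bigram model; illustrative stand-in for a pre-trained DL model."""

    def __init__(self):
        # counts[a][b] = how often token b followed token a in the training data
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, corpus):
        # Self-supervised objective: predict the next token from the current one.
        for sentence in corpus:
            tokens = sentence.split()
            for a, b in zip(tokens, tokens[1:]):
                self.counts[a][b] += 1

    def next_token(self, token):
        followers = self.counts.get(token)
        if not followers:
            return None
        # Greedy prediction: most frequent observed follower.
        return max(followers, key=followers.get)


# Phase 1: pre-training on raw, unlabeled code and technical English
# (hypothetical mini-corpus), with the only goal of learning the languages.
pretrain_corpus = [
    "public void setName ( String name )",
    "returns the name of the user",
]
model = TinyLM()
model.train(pretrain_corpus)

# Phase 2: fine-tuning on task-specific data (here, summary-style text),
# which updates the same statistics accumulated during pre-training.
finetune_corpus = ["sets the name of the user"]
model.train(finetune_corpus)

print(model.next_token("of"))  # "the": learned across both phases
```

In a realistic setting both phases would instead train a large sequence-to-sequence model on millions of code/text instances; the key point carried over from this sketch is that fine-tuning starts from, and builds on, the pre-trained weights rather than from scratch.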
Language
  • English
Classification
Computer science and technology
License
License undefined
Open access status
green
Identifiers
Persistent URL
https://n2t.net/ark:/12658/srd1328566