NLBSE2025

This repository contains the replication package for the work "Optimizing Deep Learning Models to Address Class Imbalance in Code Comment Classification" by Moritz Mock, Thomas Borsani, Giuseppe Di Fatta, and Barbara Russo.

Link to preprint: https://doi.org/10.48550/arXiv.2501.15854

Abstract

Developers rely on code comments to document their work, track issues, and understand the source code. As such, comments provide valuable insights into developers' understanding of their code and describe their various intentions in writing the surrounding code. Recent research leverages natural language processing and deep learning to classify comments based on developers' intentions. However, such labelled data are often imbalanced, causing learning models to perform poorly. This work investigates the use of different weighting strategies for the loss function to mitigate the scarcity of certain classes in the dataset. In particular, various RoBERTa-based transformer models are fine-tuned by means of a hyperparameter search to identify their optimal parameter configurations. Additionally, we fine-tuned the transformers with different weighting strategies for the loss function to address class imbalance. Our approach outperforms the STACC baseline by 8.9 per cent on the NLBSE'25 Tool Competition dataset in terms of the average F1$_c$ score, exceeding the baseline in 17 out of 19 cases with gains ranging from -5.0 to 38.2. The source code is publicly available at https://github.com/moritzmock/NLBSE2025.
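As a minimal sketch of the loss-weighting idea described above (illustrative only: the label names, class counts, and inverse-frequency scheme below are assumptions, not the exact configuration used in the paper), a class-weighted cross-entropy loss can be combined with a RoBERTa classifier as follows:

```python
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

# Hypothetical label set and class frequencies; the real categories and
# their distribution are defined by the NLBSE'25 competition dataset.
labels = ["summary", "usage", "expand", "parameters", "devnotes"]
class_counts = torch.tensor([900.0, 400.0, 150.0, 120.0, 30.0])

# Inverse-frequency weighting: rare classes contribute more to the loss.
weights = class_counts.sum() / (len(class_counts) * class_counts)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(labels)
)
loss_fn = nn.CrossEntropyLoss(weight=weights)

def training_step(batch):
    """One forward pass returning the weighted loss for a tokenized batch."""
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
    )
    return loss_fn(outputs.logits, batch["labels"])
```

The paper compares several such weighting strategies; inverse frequency is only one common choice.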

This repository contains the model (mmock/NLBSE2025_python) fine-tuned on the Python subset of the dataset.
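A minimal way to load and query the model, assuming the standard transformers sequence-classification interface (the example comment is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mmock/NLBSE2025_python")
model = AutoModelForSequenceClassification.from_pretrained("mmock/NLBSE2025_python")

comment = "Returns the number of currently open connections."  # illustrative input
inputs = tokenizer(comment, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted, predicted))
```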

Model details

- Architecture: RoBERTa (125M params)
- Tensor type: F32
- Format: Safetensors
