Enhancing Code Generation for Low-Resource Languages: No Silver Bullet
Abstract
The advent of Large Language Models (LLMs) has significantly advanced the field of automated code generation. LLMs rely on large and diverse datasets to learn the syntax, semantics, and usage patterns of programming languages. For low-resource languages (i.e., niche programming languages characterized by a scarcity of training data), the limited availability of such data hampers the models' ability to generalize, resulting in poorer code generation performance than on high-resource languages. For this reason, there is a quest for techniques able to close this performance gap. We present an empirical study investigating the effectiveness of several approaches for boosting LLMs' performance on low-resource languages, namely: (i) classic fine-tuning, which is however limited in scale by the scarcity of training data; (ii) three variants of in-context learning, with prompts crafted to provide the LLM with additional information about the low-resource language (e.g., few-shot examples showcasing features of the targeted language); and (iii) a pre-training objective teaching the model to translate between high- and low-resource languages. The context of our study comprises two low-resource languages (R and Racket) and six LLMs of different architectures and sizes. Our findings reveal that fine-tuning is usually the best choice for smaller LLMs, possibly because even a small dataset is sufficient to train their limited number of parameters. As model size grows, in-context learning becomes more and more effective, representing a safe and cheap bet (i.e., it always helps, but to different extents). In contrast, very large LLMs may see their performance on low-resource languages deteriorate when fine-tuned, possibly because the available data is insufficient to effectively update their weights.
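To make the in-context learning idea concrete, below is a minimal sketch of how a few-shot prompt targeting a low-resource language such as Racket could be assembled before querying a code LLM. The example tasks, the prompt wording, and the helper function are illustrative assumptions, not the prompt templates used in the study.

```python
# Minimal sketch of few-shot in-context learning for a low-resource language.
# The examples below are illustrative placeholders; the paper's actual prompt
# variants and evaluation harness may differ.

FEW_SHOT_EXAMPLES = [
    {
        "task": "Return the sum of a list of numbers.",
        "solution": "(define (sum-list lst)\n  (foldl + 0 lst))",
    },
    {
        "task": "Return the length of a string.",
        "solution": "(define (str-length s)\n  (string-length s))",
    },
]

def build_few_shot_prompt(task_description: str) -> str:
    """Concatenate a few Racket examples with the new task description."""
    parts = ["You are an expert Racket programmer."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f";; Task: {ex['task']}\n{ex['solution']}")
    # End with an open definition so a completion-style model continues the code.
    parts.append(f";; Task: {task_description}\n(define")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_few_shot_prompt("Reverse a list without using the built-in reverse."))
```

The same scaffolding would apply to R; only the few-shot examples and the completion prefix change.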
Community
In this work, we empirically analyze the code generation capabilities of commercial (Copilot) and open-source models (CodeLlama and DeepSeek Coder) of different sizes for uncommon programming languages (Lua, Julia, R, and Racket). Our experiments include several fine-tuning and prompting techniques aimed at improving the models' performance on niche programming languages. Our findings reveal that smaller models benefit more from fine-tuning, while in-context learning is always the safest (and cheapest) option, particularly for larger models (more than 30B parameters).
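As a complement to the prompting sketch above, here is a minimal sketch of how a small-scale supervised fine-tuning run on a low-resource corpus could be set up with the Hugging Face transformers Trainer. The checkpoint name, dataset file, and hyperparameters are illustrative assumptions rather than the configuration used in the experiments.

```python
# Minimal fine-tuning sketch for a small code LLM on a low-resource language corpus.
# Checkpoint name, dataset file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical small code model; any causal-LM checkpoint works the same way.
checkpoint = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Assumed corpus format: one JSON object per line with a "text" field
# containing an R or Racket snippet (the file name is a placeholder).
dataset = load_dataset("json", data_files="low_resource_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Standard next-token (causal LM) objective, hence mlm=False.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ft-low-resource",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=5e-5,
    logging_steps=50,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

With a corpus of only a few thousand snippets, a setup like this is plausible for the smaller checkpoints discussed above, whereas for much larger models the same data budget may be too small to update the weights effectively.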
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Towards Neural No-Resource Language Translation: A Comparative Evaluation of Approaches (2024)
- How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning (2025)
- Pipeline Analysis for Developing Instruct LLMs in Low-Resource Languages: A Case Study on Basque (2024)
- How to Select Pre-Trained Code Models for Reuse? A Learning Perspective (2025)
- LLMic: Romanian Foundation Language Model (2025)
- The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating Large Language Models (2025)
- Generating Move Smart Contracts based on Concepts (2024)