BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation
Abstract
In recent years, Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) have significantly enhanced the adaptability of large-scale pre-trained models. Weight-Decomposed Low-Rank Adaptation (DoRA) improves upon LoRA by separating the magnitude and direction components of the weight matrix, leading to superior performance. However, DoRA's improvements are limited to the vertical dimension, resulting in an asymmetrical pattern between horizontal and vertical dimensions. This paper introduces BoRA, an innovative extension of LoRA and DoRA, characterized by symmetrical properties across horizontal and vertical dimensions. Our approach optimizes the weight matrix symmetrically by adjusting both column-wise and row-wise magnitudes. Extensive experiments demonstrate that BoRA surpasses state-of-the-art PEFT methods, including LoRA and DoRA, achieving superior results across various benchmarks.
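The abstract gives only a high-level description of the method; below is a minimal PyTorch sketch of how a bi-dimensional magnitude decomposition could be wired up, assuming a DoRA-style normalize-and-rescale applied along both the column and row dimensions of the adapted weight. The class name `BoRALinearSketch`, the parameter names `m_col` and `m_row`, the initialization, and the order of the two rescaling steps are all assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class BoRALinearSketch(nn.Module):
    """Illustrative sketch of a bi-dimensional weight-decomposed LoRA layer.

    Assumption: as in DoRA, the adapted weight W0 + B @ A is normalized and
    rescaled by learnable magnitudes, but here along both dimensions
    (column-wise and row-wise) rather than column-wise only.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Frozen pre-trained weight, shape (out_features, in_features).
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Trainable low-rank update: delta_W = lora_B @ lora_A.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Learnable magnitude vectors (hypothetical names), initialized from
        # the column and row norms of the frozen weight.
        self.m_col = nn.Parameter(self.weight.norm(dim=0, keepdim=True).clone())  # (1, in_features)
        self.m_row = nn.Parameter(self.weight.norm(dim=1, keepdim=True).clone())  # (out_features, 1)

    def adapted_weight(self) -> torch.Tensor:
        w = self.weight + self.lora_B @ self.lora_A
        # Column-wise normalize-and-rescale, as in DoRA.
        w = self.m_col * (w / w.norm(dim=0, keepdim=True))
        # Symmetric row-wise normalize-and-rescale (the assumed BoRA step).
        w = self.m_row * (w / w.norm(dim=1, keepdim=True))
        return w

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.adapted_weight().t()


# Usage sketch: adapt a 768x768 projection with rank 8.
layer = BoRALinearSketch(in_features=768, out_features=768, rank=8)
y = layer(torch.randn(4, 768))  # -> shape (4, 768)
```

Note that, unlike DoRA, this sketch does not exactly reproduce the pre-trained weight at initialization; the paper's actual formulation may handle the interaction of the two magnitude vectors differently.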
Community
Librarian Bot (automated comment): the following similar papers were recommended by the Semantic Scholar API.
- Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models (2025)
- One Head Eight Arms: Block Matrix based Low Rank Adaptation for CLIP-based Few-Shot Learning (2025)
- Complementary Subspace Low-Rank Adaptation of Vision-Language Models for Few-Shot Classification (2025)
- EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition (2025)
- Gradient Weight-normalized Low-rank Projection for Efficient LLM Training (2024)
- Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices (2024)
- TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning (2025)