mikecovlee committed on
Commit
ebdc2e3
1 Parent(s): 29186b0

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -6,7 +6,11 @@ datasets:
 metrics:
 - accuracy
 ---
-# MixLoRA: Resource-Efficient Model with Mix-of-Experts Architecture for Enhanced LoRA Performance
+# MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts
+
+[![arXiv](https://img.shields.io/badge/arXiv-2404.15159-b31b1b.svg)](https://arxiv.org/abs/2404.15159)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mixlora-enhancing-large-language-models-fine/question-answering-on-social-iqa)](https://paperswithcode.com/sota/question-answering-on-social-iqa?p=mixlora-enhancing-large-language-models-fine)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mixlora-enhancing-large-language-models-fine/question-answering-on-piqa)](https://paperswithcode.com/sota/question-answering-on-piqa?p=mixlora-enhancing-large-language-models-fine)
 
 <div align="left"><img src="MixLoRA.png" width=60%"></div>
 
@@ -27,7 +31,7 @@ The table above presents the performance of MixLoRA and compares these results w
 
 ## How to Use
 
-Please visit our GitHub repository: https://github.com/mikecovlee/mLoRA
+Please visit our GitHub repository: https://github.com/TUDB-Labs/MixLoRA
 
 ## Citation
 If MixLoRA has been useful for your work, please consider citing it using the appropriate citation format for your publication.