- Combining Modular Skills in Multitask Learning
  Paper • 2202.13914 • Published • 4
- The Power of Scale for Parameter-Efficient Prompt Tuning
  Paper • 2104.08691 • Published • 9
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Paper • 2101.00190 • Published • 6
- GPT Understands, Too
  Paper • 2103.10385 • Published • 8
Collections including paper arxiv:2309.15223
- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
  Paper • 2303.10512 • Published
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
  Paper • 2205.05638 • Published • 3
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
  Paper • 2303.16199 • Published • 4
- FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
  Paper • 2108.06098 • Published • 2
- AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models
  Paper • 2309.16414 • Published • 19
- Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of A Multilingual ASR Model
  Paper • 2309.13018 • Published • 9
- Robust Speech Recognition via Large-Scale Weak Supervision
  Paper • 2212.04356 • Published • 23
- Language models in molecular discovery
  Paper • 2309.16235 • Published • 10
- Agents: An Open-source Framework for Autonomous Language Agents
  Paper • 2309.07870 • Published • 42
- Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
  Paper • 2309.07430 • Published • 27
- Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers
  Paper • 2309.08532 • Published • 52
- Investigating Answerability of LLMs for Long-Form Question Answering
  Paper • 2309.08210 • Published • 12