Collections
Collections including paper arxiv:2402.00838

- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 16
- Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation
  Paper • 2401.15688 • Published • 11
- SliceGPT: Compress Large Language Models by Deleting Rows and Columns
  Paper • 2401.15024 • Published • 68
- From GPT-4 to Gemini and Beyond: Assessing the Landscape of MLLMs on Generalizability, Trustworthiness and Causality through Four Modalities
  Paper • 2401.15071 • Published • 34

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 80
- Efficient Exploration for LLMs
  Paper • 2402.00396 • Published • 21
- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 21
- Transforming and Combining Rewards for Aligning Large Language Models
  Paper • 2402.00742 • Published • 11

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 80
- OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
  Paper • 2402.01739 • Published • 26
- LLM Agent Operating System
  Paper • 2403.16971 • Published • 65
- Poro 34B and the Blessing of Multilinguality
  Paper • 2404.01856 • Published • 13

- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 52
- Simple linear attention language models balance the recall-throughput tradeoff
  Paper • 2402.18668 • Published • 18
- ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
  Paper • 2402.15220 • Published • 19
- Linear Transformers are Versatile In-Context Learners
  Paper • 2402.14180 • Published • 6

- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 143
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 48
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 80
- OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
  Paper • 2402.01739 • Published • 26

- Attention Is All You Need
  Paper • 1706.03762 • Published • 44
- You Only Look Once: Unified, Real-Time Object Detection
  Paper • 1506.02640 • Published • 1
- HEp-2 Cell Image Classification with Deep Convolutional Neural Networks
  Paper • 1504.02531 • Published
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
  Paper • 2401.05566 • Published • 25

- TinyLlama: An Open-Source Small Language Model
  Paper • 2401.02385 • Published • 89
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 44
- SliceGPT: Compress Large Language Models by Deleting Rows and Columns
  Paper • 2401.15024 • Published • 68
- Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling
  Paper • 2401.16380 • Published • 47

- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 40
- Qwen Technical Report
  Paper • 2309.16609 • Published • 34
- GPT-4 Technical Report
  Paper • 2303.08774 • Published • 5
- Gemini: A Family of Highly Capable Multimodal Models
  Paper • 2312.11805 • Published • 45