Collections
Collections including paper arxiv:2301.08727
- AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
  Paper • 2011.09011 • Published • 2
- HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
  Paper • 2005.14187 • Published • 2
- BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models
  Paper • 2003.11142 • Published • 2
- Efficient Architecture Search by Network Transformation
  Paper • 1707.04873 • Published • 2

- Measuring the Effects of Data Parallelism on Neural Network Training
  Paper • 1811.03600 • Published • 2
- Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
  Paper • 1804.04235 • Published • 2
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
  Paper • 1905.11946 • Published • 3
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 62

- Towards an Understanding of Large Language Models in Software Engineering Tasks
  Paper • 2308.11396 • Published • 1
- Several categories of Large Language Models (LLMs): A Short Survey
  Paper • 2307.10188 • Published • 1
- Large Language Models for Generative Recommendation: A Survey and Visionary Discussions
  Paper • 2309.01157 • Published • 1
- A Survey on Large Language Models for Recommendation
  Paper • 2305.19860 • Published • 1

- AutoML-GPT: Large Language Model for AutoML
  Paper • 2309.01125 • Published • 1
- SAI: Solving AI Tasks with Systematic Artificial Intelligence in Communication Network
  Paper • 2310.09049 • Published • 1
- Prompt2Model: Generating Deployable Models from Natural Language Instructions
  Paper • 2308.12261 • Published • 1
- LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization
  Paper • 2306.01102 • Published • 1