MONSTER: Monash Scalable Time Series Evaluation Repository
Abstract
We introduce MONSTER, the MONash Scalable Time Series Evaluation Repository: a collection of large datasets for time series classification. The field of time series classification has benefited from common benchmarks set by the UCR and UEA time series classification repositories. However, the datasets in these benchmarks are small, with median sizes of 217 and 255 examples, respectively. In consequence, they favour a narrow subspace of models optimised to achieve low classification error across a wide variety of smaller datasets, that is, models that minimise variance, and they give little weight to computational issues such as scalability. Our hope is to diversify the field by introducing benchmarks using larger datasets. We believe there is enormous potential for new progress in the field by engaging with the theoretical and practical challenges of learning effectively from larger quantities of data.
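The point about small benchmarks favouring variance-minimising models can be illustrated with a toy experiment. The sketch below (NumPy only, with purely synthetic data standing in for a real time series task, so the numbers are illustrative, not results from the paper) compares a low-variance nearest-centroid classifier against a high-variance 1-nearest-neighbour classifier at two training-set sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # synthetic stand-in for a two-class time series task (illustrative only):
    # class 1 is shifted in the first 10 of 50 features
    X = rng.normal(size=(n, 50))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :10] += 0.75
    return X, y

def nearest_centroid(Xtr, ytr, Xte):
    # low-variance model: a single centroid per class
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    return (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)

def one_nn(Xtr, ytr, Xte):
    # high-variance model: 1-nearest-neighbour, via pairwise squared distances
    d = ((Xte ** 2).sum(1)[:, None]
         - 2 * Xte @ Xtr.T
         + (Xtr ** 2).sum(1)[None, :])
    return ytr[d.argmin(axis=1)]

Xte, yte = make_data(2000)
results = {}
for n in (200, 5000):  # small vs. larger training set
    Xtr, ytr = make_data(n)
    results[n] = (
        (nearest_centroid(Xtr, ytr, Xte) == yte).mean(),
        (one_nn(Xtr, ytr, Xte) == yte).mean(),
    )
print(results)
```

On a benchmark dominated by small training sets, the simple centroid model tends to look strong regardless of n, while the higher-variance model only becomes competitive as training data grows, which is the dynamic the abstract argues current benchmarks reward.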
Community
Initial release of MONSTER, a new benchmark collection of large datasets (10K–50M examples) for time series classification, with baseline results for key models.
The following similar papers were recommended by the Semantic Scholar API:
- Sundial: A Family of Highly Capable Time Series Foundation Models (2025)
- Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time Series Classification (2025)
- Large Language Models are Few-shot Multivariate Time Series Classifiers (2025)
- Two-stage hybrid models for enhancing forecasting accuracy on heterogeneous time series (2025)
- Closing the Gap Between Synthetic and Ground Truth Time Series Distributions via Neural Mapping (2025)
- TimeDP: Learning to Generate Multi-Domain Time Series with Domain Prompts (2025)
- TS-OOD: Evaluating Time-Series Out-of-Distribution Detection and Prospective Directions for Progress (2025)
29 datasets on Hugging Face cite this paper; no models, Spaces, or collections currently link to it.