Multimodal Art Projection
Multimodal Art Projection (M-A-P) is an open-source AI research community.
Community members work on a broad spectrum of research topics, including but not limited to pre-training paradigms for foundation models, large-scale data collection and processing, and derived applications in coding, reasoning, and music creativity.
The community is open to researchers keen on any relevant topic. Welcome to join us!
- Discord Channel
- Our Full Paper List
- Email: [email protected]
The development log of our Multimodal Art Projection (m-a-p) model family:
- 🔥08/05/2024: We release MAP-Neo, a fully transparent large language model, together with a series of models for scaling-law exploration and post-training alignment, along with the training corpus Matrix.
- 🔥11/04/2024: The MuPT paper and demo are out. HF collection.
- 🔥08/04/2024: Chinese Tiny LLM is out. HF collection.
- 🔥28/02/2024: We release ChatMusician's demo, code, model, data, and benchmark. 😆
- 🔥23/02/2024: We release OpenCodeInterpreter, which beats the GPT-4 code interpreter on HumanEval.
- 23/01/2024: We release CMMMU for better evaluation of Chinese LMMs.
- 13/01/2024: We release a series of Music Pretrained Transformer (MuPT) checkpoints, with sizes up to 1.3B parameters and a context length of 8192. The models are LLaMA2-based and pre-trained on the world's largest symbolic music dataset (10B tokens in ABC notation format). We currently support the Megatron-LM format and will release Hugging Face checkpoints soon.
- 02/06/2023: We officially release the MERT pre-print paper and training code.
- 17/03/2023: We release two advanced music understanding models, MERT-v1-95M and MERT-v1-330M, trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
- 14/03/2023: We retrain the MERT-v0 model on an open-source-only music dataset: MERT-v0-public.
- 29/12/2022: We release MERT-v0, a music understanding model trained with the MLM paradigm that performs better on downstream tasks.
- 29/10/2022: We release music2vec, a pre-trained MIR model trained with the BYOL paradigm.
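To make the MuPT entry above concrete, the sketch below shows what ABC-notation symbolic music (the format of MuPT's pre-training corpus) looks like. The tune and the whitespace split are purely illustrative assumptions; MuPT uses its own LLaMA2-style tokenizer, not this naive split.

```python
# A minimal ABC-notation tune of the kind MuPT is pre-trained on.
# ABC encodes music as plain text: header fields (X: index, T: title,
# M: meter, L: default note length, K: key) followed by the note body.
abc_tune = """X:1
T:Example Reel
M:4/4
L:1/8
K:G
G2 GA B2 BA | G2 GA B2 d2 | e2 ef g2 fe | d2 BA G4 |"""

# Header fields start with a single letter followed by a colon.
header_fields = [line for line in abc_tune.splitlines() if ":" in line[:2]]

# Naive whitespace tokenization, only to show the text-like granularity
# of the data; the real model tokenizer differs.
naive_tokens = abc_tune.split()
print(len(header_fields), len(naive_tokens))
```

Because ABC is plain text, a symbolic-music corpus in this format can be fed to a standard language-model pipeline with no audio processing at all, which is what makes the LLaMA2-based setup described above possible.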
Collections: 12 · Spaces: 4

Models (103 total; subset shown):
- m-a-p/FineFineWeb-bert
- m-a-p/MIO-7B-Instruct
- m-a-p/MIO-7B-Base
- m-a-p/MusiLingo-short-v1 (Feature Extraction)
- m-a-p/MusiLingo-long-v1 (Feature Extraction)
- m-a-p/MusiLingo-musicqa-v1 (Feature Extraction)
- m-a-p/neo_scalinglaw_460M
- m-a-p/neo_scalinglaw_980M
- m-a-p/neo_2b_general
- m-a-p/neo_scalinglaw_250M
Datasets (34 total; subset shown):
- m-a-p/PIN-100M
- m-a-p/PIN-14M
- m-a-p/FineFineWeb-fasttext-seeddata
- m-a-p/FineFineWeb-validation
- m-a-p/FineFineWeb-test
- m-a-p/FineFineWeb-sample
- m-a-p/FineFineWeb
- m-a-p/FineFineWeb-bert-seeddata
- m-a-p/MDEVAL
- m-a-p/CII-Bench