The MTG-Jamendo Dataset

We present the MTG-Jamendo Dataset, a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. We provide elaborated data splits for researchers and report the performance of a simple baseline approach on five different sets of tags: genre, instrument, mood/theme, top-50, and overall.

This repository contains metadata, scripts, and instructions on how to download the dataset, use it, and reproduce the baseline results.

A subset of the dataset is used in the Emotion and Theme Recognition in Music Task within MediaEval 2019 (you are welcome to participate).

Structure

Metadata files in data

Pre-processing

Subsets

Splits

Note: A few tags are discarded in the splits to guarantee the same list of tags across all splits. For autotagging.tsv, this results in 55,525 tracks annotated by 87 genre tags, 40 instrument tags, and 56 mood/theme tags available in the splits.

Splits are generated from autotagging.tsv, which contains all tags. For each split, the related subsets (top50, genre, instrument, mood/theme) are built by filtering out unrelated tags and dropping tracks left without any tags.
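The filtering step can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual script: the sample tracks are hypothetical, but the tags follow the `category---value` format used in the TSV files.

```python
# Hypothetical sample tracks using the "category---value" tag format.
tracks = {
    1: {'tags': ['genre---rock', 'mood/theme---happy']},
    2: {'tags': ['instrument---piano']},
    3: {'tags': ['genre---jazz']},
}

def build_subset(tracks, category):
    """Keep only tags of the given category; drop tracks left without tags."""
    subset = {}
    for track_id, data in tracks.items():
        kept = [t for t in data['tags'] if t.startswith(category + '---')]
        if kept:
            subset[track_id] = {'tags': kept}
    return subset

genre_subset = build_subset(tracks, 'genre')
```

Here `genre_subset` keeps tracks 1 and 3 with only their genre tags; track 2 is dropped because it has no genre annotation.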

Statistics in stats

Statistics on the number of tracks, albums, and artists per tag, sorted by the number of artists. Each directory contains statistics for the metadata file with the same name. Statistics for the category-based subsets are not kept separately, as they are already included in autotagging.
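Per-tag counts of this kind reduce to collecting distinct IDs per tag. A minimal sketch over hypothetical metadata rows (the actual scripts read the TSV files; the rows and counts below are illustrative):

```python
from collections import defaultdict

# Hypothetical (track_id, album_id, artist_id, tags) rows.
rows = [
    (1, 10, 100, ['genre---rock']),
    (2, 10, 100, ['genre---rock', 'instrument---piano']),
    (3, 11, 101, ['genre---rock']),
]

# Collect distinct tracks, albums, and artists per tag.
stats = defaultdict(lambda: {'tracks': set(), 'albums': set(), 'artists': set()})
for track_id, album_id, artist_id, tags in rows:
    for tag in tags:
        stats[tag]['tracks'].add(track_id)
        stats[tag]['albums'].add(album_id)
        stats[tag]['artists'].add(artist_id)

# Sort tags by number of artists, as in the stats files.
ranked = sorted(stats, key=lambda t: len(stats[t]['artists']), reverse=True)
```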

Using the dataset

Requirements

Downloading the data

All audio is distributed in 320kbps MP3 format. In addition, we provide precomputed mel-spectrograms, distributed as NumPy arrays in NPY format. We also provide precomputed statistical features from Essentia (used in the AcousticBrainz music database) in JSON format. The audio files and the NPY/JSON files are split into folders packed into TAR archives. The dataset is hosted online at MTG UPF.
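Once downloaded and unpacked, the NPY files can be loaded directly with NumPy. A self-contained sketch (the path and array shape below are hypothetical; to keep the snippet runnable it writes a dummy spectrogram first, whereas in practice you would load a file from the unpacked archives):

```python
import numpy as np

# Hypothetical file path; a real path would point into the unpacked archives.
path = '/tmp/1376256.npy'

# Write a dummy (mel bands, frames) array so the snippet is self-contained.
np.save(path, np.random.rand(96, 1000).astype(np.float32))

# Loading works the same way for the dataset's precomputed spectrograms;
# the number of frames varies with track duration.
melspec = np.load(path)
```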

We provide the following data subsets:

For faster downloads, we host a copy of the dataset on Google Drive. We provide a script to download and validate all files in the dataset. See its help message for more information:

python scripts/download/download.py -h
usage: download.py [-h] [--dataset {raw_30s,autotagging_moodtheme}]
                   [--type {audio,melspecs,acousticbrainz}]
                   [--from {gdrive,mtg}] [--unpack] [--remove]
                   outputdir

Download the MTG-Jamendo dataset

positional arguments:
  outputdir             directory to store the dataset

optional arguments:
  -h, --help            show this help message and exit
  --dataset {raw_30s,autotagging_moodtheme}
                        dataset to download (default: raw_30s)
  --type {audio,melspecs,acousticbrainz}
                        type of data to download (audio, mel-spectrograms,
                        AcousticBrainz features) (default: audio)
  --from {gdrive,mtg}   download from Google Drive (fast everywhere) or MTG
                        (server in Spain, slow) (default: gdrive)
  --unpack              unpack tar archives (default: False)
  --remove              remove tar archives while unpacking one by one (use to
                        save disk space) (default: False)

For example, to download audio for the autotagging_moodtheme.tsv subset, unpack and validate all tar archives:

mkdir /path/to/download
python3 scripts/download/download.py --dataset autotagging_moodtheme --type audio /path/to/download --unpack --remove

The unpacking process runs after all tar archives are downloaded and validated. In case of download errors, re-run the script to download the missing files.

Due to the large size of the dataset, it can be useful to include the --remove flag to save disk space: in this case, each tar archive is unpacked and immediately removed, one by one.

Loading data in python

Assuming you are working in the scripts folder:

import commons

input_file = '../data/autotagging.tsv'
tracks, tags, extra = commons.read_file(input_file)

tracks is a dictionary with track_id as the key and track data as the value:

{
    1376256: {
    'artist_id': 490499,
    'album_id': 161779,
    'path': '56/1376256.mp3',
    'duration': 166.0,
    'tags': [
        'genre---easylistening',
        'genre---downtempo',
        'genre---chillout',
        'mood/theme---commercial',
        'mood/theme---corporate',
        'instrument---piano'
        ],
    'genre': {'chillout', 'downtempo', 'easylistening'},
    'mood/theme': {'commercial', 'corporate'},
    'instrument': {'piano'}
    }
    ...
}

tags contains a mapping of tags to sets of track_ids:

{
    'genre': {
        'easylistening': {1376256, 1376257, ...},
        'downtempo': {1376256, 1376257, ...},
        ...
    },
    'mood/theme': {...},
    'instrument': {...}
}

extra contains information that is useful for formatting the output file; pass it to write_file if you use that function, otherwise you can ignore it.
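With these structures, common queries reduce to plain dictionary and set operations. A self-contained sketch that rebuilds a fragment of the example data above (the extra track ID 1376257 in the genre set is illustrative):

```python
# Fragment of the structures returned by commons.read_file, rebuilt by hand
# so the sketch runs on its own.
tags = {
    'genre': {'easylistening': {1376256, 1376257}},
    'mood/theme': {'commercial': {1376256}},
    'instrument': {'piano': {1376256}},
}

# All tracks tagged 'easylistening':
easy = tags['genre']['easylistening']

# Tracks tagged both 'easylistening' and 'piano' (set intersection):
easy_piano = easy & tags['instrument']['piano']
```

Because the per-tag values are sets, intersections, unions, and differences of tag populations need no extra code.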

Reproduce postprocessing & statistics

Recreate subsets

Results

Research challenges using the dataset

MediaEval 2019 Emotion and Theme Recognition in Music

Citing the dataset

Please consider citing the following publication when using the dataset:

Bogdanov, D., Won M., Tovstogan P., Porter A., & Serra X. (2019). The MTG-Jamendo Dataset for Automatic Music Tagging. Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019).

An expanded version of the paper describing the dataset and the baselines will be announced later.

License

Copyright 2019 Music Technology Group

Acknowledgments

This work was funded by the predoctoral grant MDM-2015-0502-17-2 from the Spanish Ministry of Economy and Competitiveness linked to the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688382 “AudioCommons”.