muspy¶
A toolkit for symbolic music generation.
MusPy is an open source Python library for symbolic music generation. It provides essential tools for developing a music generation system, including dataset management, data I/O, data preprocessing and model evaluation.
Features¶
- Dataset management system for commonly used datasets with interfaces to PyTorch and TensorFlow.
- Data I/O for common symbolic music formats (e.g., MIDI, MusicXML and ABC) and interfaces to other symbolic music libraries (e.g., music21, mido, pretty_midi and Pypianoroll).
- Implementations of common music representations for music generation, including the pitch-based, the event-based, the piano-roll and the note-based representations.
- Model evaluation tools for music generation systems, including audio rendering, score and piano-roll visualizations and objective metrics.
-
class
muspy.
Base
(**kwargs)[source]¶ Base class for MusPy classes.
This is the base class for MusPy classes. It provides two handy I/O methods, from_dict and to_ordered_dict. It also provides an intuitive repr as well as the methods pretty_str and print for neatly printing the content.
In addition, hash is implemented by hash(repr(self)). Comparisons between two Base objects are also supported, where equality check will compare all attributes, while ‘less than’ and ‘greater than’ will only compare the time attribute.
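The comparison semantics can be sketched in plain Python. This is an illustrative mimic, not MusPy's implementation; the class name TimedEvent is hypothetical:

```python
# Illustrative mimic of Base's comparison semantics (not MusPy source):
# equality compares all attributes; < and > compare only `time`,
# and hash is derived from repr, as described above.


class TimedEvent:
    def __init__(self, time, label):
        self.time = time
        self.label = label

    def __repr__(self):
        return f"TimedEvent(time={self.time}, label={self.label!r})"

    def __hash__(self):
        return hash(repr(self))

    def __eq__(self, other):
        return self.time == other.time and self.label == other.label

    def __lt__(self, other):
        return self.time < other.time


a = TimedEvent(0, "x")
b = TimedEvent(0, "y")
c = TimedEvent(4, "x")
assert a != b           # equality checks every attribute
assert a < c and b < c  # ordering looks only at `time`
```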
Hint
To implement a new class in MusPy, please inherit from this class and set the following class variables properly.
- _attributes: An OrderedDict with attribute names as keys and their types as values.
- _optional_attributes: A list of optional attribute names.
- _list_attributes: A list of attributes that are lists.
Take muspy.Note for example:
_attributes = OrderedDict(
    [
        ("time", int),
        ("duration", int),
        ("pitch", int),
        ("velocity", int),
        ("pitch_str", str),
    ]
)
_optional_attributes = ["pitch_str"]
See also
muspy.ComplexBase
- Base class that supports advanced operations on list attributes.
-
adjust_time
(func: Callable[[int], int], attr: str = None, recursive: bool = True) → BaseType[source]¶ Adjust the timing of time-stamped objects.
Parameters: Returns: Return type: Object itself.
-
copy
() → BaseType[source]¶ Return a shallow copy of the object.
This is equivalent to copy.copy(self).
Returns: Return type: Shallow copy of the object.
-
deepcopy
() → BaseType[source]¶ Return a deep copy of the object.
This is equivalent to copy.deepcopy(self).
Returns: Return type: Deep copy of the object.
-
fix_type
(attr: str = None, recursive: bool = True) → BaseType[source]¶ Fix the types of attributes.
Parameters: Returns: Return type: Object itself.
-
classmethod
from_dict
(dict_: Mapping[KT, VT_co], strict: bool = False, cast: bool = False) → BaseType[source]¶ Return an instance constructed from a dictionary.
Instantiate an object whose attributes and the corresponding values are given as a dictionary.
Parameters: Returns: Return type: Constructed object.
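A minimal sketch of the from_dict idea, illustrative only and not MusPy's implementation; SimpleNote is a hypothetical stand-in, and only the cast behavior is shown:

```python
from collections import OrderedDict


class SimpleNote:
    # declare attribute names and types, as in the _attributes hint above
    _attributes = OrderedDict([("time", int), ("duration", int), ("pitch", int)])

    def __init__(self, time, duration, pitch):
        self.time, self.duration, self.pitch = time, duration, pitch

    @classmethod
    def from_dict(cls, dict_, cast=False):
        values = {}
        for name, type_ in cls._attributes.items():
            value = dict_[name]
            # with cast=True, coerce each value to its declared type
            values[name] = type_(value) if cast else value
        return cls(**values)


note = SimpleNote.from_dict({"time": "0", "duration": "4", "pitch": "60"}, cast=True)
assert (note.time, note.duration, note.pitch) == (0, 4, 60)
```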
-
is_valid
(attr: str = None, recursive: bool = True) → bool[source]¶ Return True if an attribute has a valid type and value.
This will recursively apply to an attribute’s attributes.
Parameters: Returns: Whether the attribute has a valid type and value.
Return type: bool
See also
muspy.Base.validate()
- Raise an error if an attribute has an invalid type or value.
muspy.Base.is_valid_type()
- Return True if an attribute is of a valid type.
-
is_valid_type
(attr: str = None, recursive: bool = True) → bool[source]¶ Return True if an attribute is of a valid type.
This will apply recursively to an attribute’s attributes.
Parameters: Returns: Whether the attribute is of a valid type.
Return type: bool
See also
muspy.Base.validate_type()
- Raise an error if a certain attribute is of an invalid type.
muspy.Base.is_valid()
- Return True if an attribute has a valid type and value.
-
pretty_str
(skip_missing: bool = True) → str[source]¶ Return the attributes as a string in a YAML-like format.
Parameters: skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists. Returns: Stored data as a string in a YAML-like format. Return type: str See also
muspy.Base.print()
- Print the attributes in a YAML-like format.
-
print
(skip_missing: bool = True)[source]¶ Print the attributes in a YAML-like format.
Parameters: skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists. See also
muspy.Base.pretty_str()
- Return the attributes as a string in a YAML-like format.
-
to_ordered_dict
(skip_missing: bool = True, deepcopy: bool = True) → collections.OrderedDict[source]¶ Return the object as an OrderedDict.
Return an ordered dictionary that stores the attributes and their values as key-value pairs.
Parameters: Returns: A dictionary that stores the attributes and their values as key-value pairs, e.g., {“attr1”: value1, “attr2”: value2}.
Return type: OrderedDict
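The skip_missing behavior can be sketched as follows. This is an illustrative helper, not MusPy source; attrs_to_ordered_dict is a hypothetical name:

```python
from collections import OrderedDict
from types import SimpleNamespace


def attrs_to_ordered_dict(obj, attribute_names, skip_missing=True):
    """Collect attributes into an OrderedDict, optionally skipping
    None values and empty lists (the skip_missing behavior)."""
    result = OrderedDict()
    for name in attribute_names:
        value = getattr(obj, name)
        if skip_missing and (value is None or value == []):
            continue  # drop missing attributes from the output
        result[name] = value
    return result


note = SimpleNamespace(time=0, pitch=60, pitch_str=None, tags=[])
d = attrs_to_ordered_dict(note, ["time", "pitch", "pitch_str", "tags"])
assert d == OrderedDict([("time", 0), ("pitch", 60)])
```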
-
validate
(attr: str = None, recursive: bool = True) → BaseType[source]¶ Raise an error if an attribute has an invalid type or value.
This will apply recursively to an attribute’s attributes.
Parameters: Returns: Return type: Object itself.
See also
muspy.Base.is_valid()
- Return True if an attribute has a valid type and value.
muspy.Base.validate_type()
- Raise an error if an attribute is of an invalid type.
-
validate_type
(attr: str = None, recursive: bool = True) → BaseType[source]¶ Raise an error if an attribute is of an invalid type.
This will apply recursively to an attribute’s attributes.
Parameters: Returns: Return type: Object itself.
See also
muspy.Base.is_valid_type()
- Return True if an attribute is of a valid type.
muspy.Base.validate()
- Raise an error if an attribute has an invalid type or value.
-
class
muspy.
ComplexBase
(**kwargs)[source]¶ Base class that supports advanced operations on list attributes.
This class extends the Base class with advanced operations on list attributes, including append, remove_invalid, remove_duplicate and sort.
See also
muspy.Base
- Base class for MusPy classes.
-
append
(obj) → ComplexBaseType[source]¶ Append an object to the corresponding list.
This will automatically determine the list attributes to append based on the type of the object.
Parameters: obj – Object to append.
-
extend
(other: Union[ComplexBaseType, Iterable[T_co]], deepcopy: bool = False) → ComplexBaseType[source]¶ Extend the list(s) with another object or iterable.
Parameters: - other (muspy.ComplexBase or iterable) – If an object of the same type is given, extend the list attributes with the corresponding list attributes of the other object. If an iterable is given, call muspy.ComplexBase.append() for each item.
- deepcopy (bool, default: False) – Whether to make deep copies of the appended objects.
Returns: Return type: Object itself.
-
remove_duplicate
(attr: str = None, recursive: bool = True) → ComplexBaseType[source]¶ Remove duplicate items from a list attribute.
Parameters: Returns: Return type: Object itself.
-
class
muspy.
Annotation
(time: int, annotation: Any, group: str = None)[source]¶ A container for annotations.
-
annotation
¶ Annotation of any type.
Type: any
-
-
class
muspy.
Chord
(time: int, pitches: List[int], duration: int, velocity: int = None, pitches_str: List[str] = None)[source]¶ A container for chords.
-
pitches
¶ Note pitches, as MIDI note numbers. Valid values are 0 to 127.
Type: list of int
-
velocity
¶ Chord velocity. Valid values are 0 to 127.
Type: int, default: muspy.DEFAULT_VELOCITY (64)
-
pitches_str
¶ Note pitches as strings, useful for distinguishing, e.g., C# and Db.
Type: list of str, optional
-
adjust_time
(func: Callable[[int], int], attr: str = None, recursive: bool = True) → muspy.classes.Chord[source]¶ Adjust the timing of the chord.
Parameters: Returns: Return type: Object itself.
-
clip
(lower: int = 0, upper: int = 127) → muspy.classes.Chord[source]¶ Clip the velocity of the chord.
Parameters: Returns: Return type: Object itself.
-
end
¶ End time of the chord.
-
start
¶ Start time of the chord.
-
-
class
muspy.
KeySignature
(time: int, root: int = None, mode: str = None, fifths: int = None, root_str: str = None)[source]¶ A container for key signatures.
-
fifths
¶ Number of sharps or flats. Positive numbers for sharps and negative numbers for flats.
Type: int, optional
Note
A key signature can be specified either by its root (root) or the number of sharps or flats (fifths) along with its mode.
-
-
class
muspy.
Metadata
(schema_version: str = '0.1', title: str = None, creators: List[str] = None, copyright: str = None, collection: str = None, source_filename: str = None, source_format: str = None)[source]¶ A container for metadata.
-
schema_version
¶ Schema version.
Type: str, default: muspy.DEFAULT_SCHEMA_VERSION
-
creators
¶ Creator(s) of the song.
Type: list of str, optional
-
-
class
muspy.
Note
(time: int, pitch: int, duration: int, velocity: int = None, pitch_str: str = None)[source]¶ A container for notes.
-
velocity
¶ Note velocity. Valid values are 0 to 127.
Type: int, default: muspy.DEFAULT_VELOCITY (64)
-
adjust_time
(func: Callable[[int], int], attr: str = None, recursive: bool = True) → muspy.classes.Note[source]¶ Adjust the timing of the note.
Parameters: Returns: Return type: Object itself.
-
clip
(lower: int = 0, upper: int = 127) → muspy.classes.Note[source]¶ Clip the velocity of the note.
Parameters: Returns: Return type: Object itself.
-
end
¶ End time of the note.
-
start
¶ Start time of the note.
-
-
class
muspy.
TimeSignature
(time: int, numerator: int, denominator: int)[source]¶ A container for time signatures.
-
class
muspy.
Track
(program: int = 0, is_drum: bool = False, name: str = None, notes: List[muspy.classes.Note] = None, chords: List[muspy.classes.Chord] = None, lyrics: List[muspy.classes.Lyric] = None, annotations: List[muspy.classes.Annotation] = None)[source]¶ A container for music track.
-
program
¶ Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
Type: int, default: 0 (Acoustic Grand Piano)
-
notes
¶ Musical notes.
Type: list of muspy.Note
, default: []
-
chords
¶ Chords.
Type: list of muspy.Chord
, default: []
-
annotations
¶ Annotations.
Type: list of muspy.Annotation
, default: []
-
lyrics
¶ Lyrics.
Type: list of muspy.Lyric
, default: []
Note
Indexing a Track object returns the note at a certain index. That is, track[idx] returns track.notes[idx]. The length of a Track object is the number of notes. That is, len(track) returns len(track.notes).
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
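The indexing and length delegation described in the note above can be sketched in plain Python (an illustrative mimic, not MusPy source; MiniTrack is hypothetical):

```python
# Illustrative mimic (not MusPy source) of Track's delegation:
# indexing and len() pass through to the `notes` list.
class MiniTrack:
    def __init__(self, notes=None):
        self.notes = list(notes or [])

    def __getitem__(self, idx):
        return self.notes[idx]  # track[idx] -> track.notes[idx]

    def __len__(self):
        return len(self.notes)  # len(track) -> len(track.notes)


track = MiniTrack(notes=[60, 64, 67])
assert track[1] == 64
assert len(track) == 3
```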
clip
(lower: int = 0, upper: int = 127) → muspy.classes.Track[source]¶ Clip the velocity of each note.
Parameters: Returns: Return type: Object itself.
-
-
muspy.
adjust_resolution
(music: muspy.music.Music, target: int = None, factor: float = None, rounding: Union[str, Callable] = 'round') → muspy.music.Music[source]¶ Adjust resolution and timing of all time-stamped objects.
Parameters: - music (muspy.Music) – Object whose resolution to adjust.
- target (int, optional) – Target resolution.
- factor (int or float, optional) – Factor used to adjust the resolution based on the formula: new_resolution = old_resolution * factor. For example, a factor of 2 doubles the resolution, and a factor of 0.5 halves it.
- rounding ({'round', 'ceil', 'floor'} or callable, default: 'round') – Rounding mode.
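The time rescaling a resolution change implies can be sketched as follows. This is an illustrative sketch, not MusPy source; rescale_time is a hypothetical helper:

```python
import math

# Illustrative sketch (not MusPy source) of rescaling event times when
# the resolution changes, with the three documented rounding modes.
ROUNDING = {"round": round, "ceil": math.ceil, "floor": math.floor}


def rescale_time(time, factor, rounding="round"):
    """Map an event time to the new resolution: new_time = time * factor."""
    return int(ROUNDING[rounding](time * factor))


# halving the resolution (factor 0.5): tick 13 falls between new ticks 6 and 7
assert rescale_time(13, 0.5, "floor") == 6
assert rescale_time(13, 0.5, "ceil") == 7
```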
-
muspy.
adjust_time
(obj: muspy.base.Base, func: Callable[[int], int]) → muspy.base.Base[source]¶ Adjust the timing of time-stamped objects.
Parameters: - obj (muspy.Music or muspy.Track) – Object whose timing to adjust.
- func (callable) – The function used to compute the new timing from the old timing, i.e., new_time = func(old_time).
See also
muspy.adjust_resolution()
- Adjust the resolution and the timing of time-stamped objects.
Note
The resolution is left unchanged.
-
muspy.
append
(obj1: muspy.base.ComplexBase, obj2) → muspy.base.ComplexBase[source]¶ Append an object to the corresponding list.
This will automatically determine the list attributes to append based on the type of the object.
Parameters: - obj1 (muspy.ComplexBase) – Object to which obj2 is appended.
- obj2 – Object to be appended to obj1.
Notes
- If obj1 is of type muspy.Music, obj2 can be muspy.Tempo, muspy.KeySignature, muspy.TimeSignature, muspy.Lyric, muspy.Annotation or muspy.Track.
- If obj1 is of type muspy.Track, obj2 can be muspy.Note, muspy.Chord, muspy.Lyric or muspy.Annotation.
See also
muspy.ComplexBase.append
- Equivalent function.
-
muspy.
clip
(obj: Union[muspy.music.Music, muspy.classes.Track, muspy.classes.Note], lower: int = 0, upper: int = 127) → Union[muspy.music.Music, muspy.classes.Track, muspy.classes.Note][source]¶ Clip the velocity of each note.
Parameters: - obj (muspy.Music, muspy.Track or muspy.Note) – Object to clip.
- lower (int or float, default: 0) – Lower bound.
- upper (int or float, default: 127) – Upper bound.
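The per-note clamping can be sketched in one line. This is an illustrative sketch, not MusPy source; clip_velocity is a hypothetical helper:

```python
# Illustrative sketch (not MusPy source): clamping a velocity into
# [lower, upper], which is what clipping does for each note.
def clip_velocity(velocity, lower=0, upper=127):
    return max(lower, min(velocity, upper))


assert clip_velocity(150) == 127  # above range -> upper bound
assert clip_velocity(-3) == 0     # below range -> lower bound
assert clip_velocity(64) == 64    # in range -> unchanged
```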
-
muspy.
get_end_time
(obj: Union[muspy.music.Music, muspy.classes.Track], is_sorted: bool = False) → int[source]¶ Return the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
Parameters: - obj (muspy.Music or muspy.Track) – Object to inspect.
- is_sorted (bool, default: False) – Whether all the list attributes are sorted.
-
muspy.
get_real_end_time
(music: muspy.music.Music, is_sorted: bool = False) → float[source]¶ Return the end time in realtime.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. Assume 120 qpm (quarter notes per minute) if no tempo information is available.
Parameters: - music (muspy.Music) – Object to inspect.
- is_sorted (bool, default: False) – Whether all the list attributes are sorted.
-
muspy.
remove_duplicate
(obj: muspy.base.ComplexBase) → muspy.base.ComplexBase[source]¶ Remove duplicate change events.
Parameters: obj (muspy.Music) – Object to process.
-
muspy.
sort
(obj: muspy.base.ComplexBase) → muspy.base.ComplexBase[source]¶ Sort all the time-stamped objects with respect to event time.
- If a muspy.Music is given, this will sort key signatures, time signatures, lyrics and annotations, along with notes, lyrics and annotations for each track.
- If a muspy.Track is given, this will sort notes, lyrics and annotations.
Parameters: obj (muspy.ComplexBase) – Object to sort.
-
muspy.
to_ordered_dict
(obj: muspy.base.Base, skip_missing: bool = True, deepcopy: bool = True) → collections.OrderedDict[source]¶ Return an OrderedDict converted from a Music object.
Parameters: - obj (muspy.Base) – Object to convert.
- skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
- deepcopy (bool, default: True) – Whether to make deep copies of the attributes.
Returns: Converted OrderedDict.
Return type: OrderedDict
-
muspy.
transpose
(obj: Union[muspy.music.Music, muspy.classes.Track, muspy.classes.Note], semitone: int) → Union[muspy.music.Music, muspy.classes.Track, muspy.classes.Note][source]¶ Transpose all the notes by a number of semitones.
Parameters: - obj (muspy.Music, muspy.Track or muspy.Note) – Object to transpose.
- semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers the pitches.
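The semitone shift amounts to adding the offset to each MIDI pitch. An illustrative sketch, not MusPy source; transpose_pitches is a hypothetical helper:

```python
# Illustrative sketch (not MusPy source): transposing MIDI pitches by
# a number of semitones; positive raises, negative lowers.
def transpose_pitches(pitches, semitone):
    return [pitch + semitone for pitch in pitches]


# a C major triad (C4, E4, G4) up two semitones becomes D major
assert transpose_pitches([60, 64, 67], 2) == [62, 66, 69]
assert transpose_pitches([62, 66, 69], -2) == [60, 64, 67]
```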
-
class
muspy.
ABCFolderDataset
(root: Union[str, pathlib.Path], convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None)[source]¶ Class for datasets storing ABC files in a folder.
See also
muspy.FolderDataset
- Class for datasets storing files in a folder.
-
class
muspy.
Dataset
[source]¶ Base class for MusPy datasets.
To build a custom dataset, inherit from this class and override the methods __getitem__ and __len__ as well as the class attribute _info. __getitem__ should return the i-th data sample as a muspy.Music object. __len__ should return the size of the dataset. _info should be a muspy.DatasetInfo instance storing the dataset information.
-
save
(root: Union[str, pathlib.Path], kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, verbose: bool = True, **kwargs)[source]¶ Save all the music objects to a directory.
Parameters: - root (str or Path) – Root directory to save the data.
- kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
- n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
- ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
- verbose (bool, default: True) – Whether to be verbose.
- **kwargs – Keyword arguments to pass to
muspy.save()
.
-
split
(filename: Union[str, pathlib.Path] = None, splits: Sequence[float] = None, random_state: Any = None) → Dict[str, List[int]][source]¶ Split the dataset and return the split indices.
Parameters: - filename (str or Path, optional) – If given and the file exists, path to read the split from; if None or the file does not exist, path to save the split to.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or
array_like, the value is passed to
numpy.random.RandomState
, and the created RandomState object is used to create the splits. If RandomState, it will be used to create the splits.
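The ratio-to-indices logic can be sketched with the standard library. This is an illustrative sketch, not MusPy source; make_splits is a hypothetical helper and uses random.Random rather than numpy:

```python
import random

# Illustrative sketch (not MusPy source): turning split ratios into
# shuffled index lists for train/test(/validation) subsets.


def make_splits(n, ratios, seed=0):
    indices = list(range(n))
    random.Random(seed).shuffle(indices)  # deterministic for a fixed seed
    n_train = round(n * ratios[0])
    n_test = round(n * ratios[1]) if len(ratios) > 1 else n - n_train
    splits = {
        "train": indices[:n_train],
        "test": indices[n_train:n_train + n_test],
    }
    if len(ratios) == 3:
        splits["validation"] = indices[n_train + n_test:]
    return splits


splits = make_splits(10, [0.8, 0.1, 0.1])
assert len(splits["train"]) == 8
assert len(splits["test"]) == 1 and len(splits["validation"]) == 1
```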
-
to_pytorch_dataset
(factory: Callable = None, representation: str = None, split_filename: Union[str, pathlib.Path] = None, splits: Sequence[float] = None, random_state: Any = None, **kwargs) → Union[TorchDataset, Dict[str, TorchDataset]][source]¶ Return the dataset as a PyTorch dataset.
Parameters: - factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
- representation (str, optional) – Target representation. See muspy.to_representation() for available representations.
- split_filename (str or Path, optional) – If given and the file exists, path to read the split from; if None or the file does not exist, path to save the split to.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or
array_like, the value is passed to
numpy.random.RandomState
, and the created RandomState object is used to create the splits. If RandomState, it will be used to create the splits.
Returns: Converted PyTorch dataset(s).
Return type: torch.utils.data.Dataset or dict of torch.utils.data.Dataset
-
to_tensorflow_dataset
(factory: Callable = None, representation: str = None, split_filename: Union[str, pathlib.Path] = None, splits: Sequence[float] = None, random_state: Any = None, **kwargs) → Union[TFDataset, Dict[str, TFDataset]][source]¶ Return the dataset as a TensorFlow dataset.
Parameters: - factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
- representation (str, optional) – Target representation. See muspy.to_representation() for available representations.
- split_filename (str or Path, optional) – If given and the file exists, path to read the split from; if None or the file does not exist, path to save the split to.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or
array_like, the value is passed to
numpy.random.RandomState
, and the created RandomState object is used to create the splits. If RandomState, it will be used to create the splits.
Returns: Converted TensorFlow dataset(s).
Return type: tensorflow.data.Dataset or dict of tensorflow.data.Dataset
-
-
class
muspy.
DatasetInfo
(name: str = None, description: str = None, homepage: str = None, license: str = None)[source]¶ A container for dataset information.
-
class
muspy.
EMOPIADataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ EMOPIA Dataset.
-
class
muspy.
EssenFolkSongDatabase
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Essen Folk Song Database.
-
class
muspy.
FolderDataset
(root: Union[str, pathlib.Path], convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None)[source]¶ Class for datasets storing files in a folder.
This class extends muspy.Dataset to support folder datasets. To build a custom folder dataset, please refer to the documentation of muspy.Dataset for details. In addition, set the class attribute _extension to the extension to look for when building the dataset, and set read to a callable that takes as input the filename of a source file and returns the converted Music object.
Parameters: - convert (bool, default: False) – Whether to convert the dataset to MusPy JSON/YAML files. If False, check whether converted data exists; if so, disable on-the-fly mode, and if not, enable on-the-fly mode and warn.
- kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
- n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
- ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
- use_converted (bool, optional) – Force to disable on-the-fly mode and use converted data. Defaults to True if converted data exist, otherwise False.
Important
muspy.FolderDataset.converted_exists() depends solely on a special file named .muspy.success in the folder {root}/_converted/, which serves as an indicator for the existence and integrity of the converted dataset. If the converted dataset is built by muspy.FolderDataset.convert(), the .muspy.success file will be created as well. If the converted dataset is created manually, make sure to create the .muspy.success file in the folder {root}/_converted/ to prevent errors.
Notes
Two modes are available for this dataset. When the on-the-fly mode is enabled, a data sample is converted to a music object on the fly when being indexed. When the on-the-fly mode is disabled, a data sample is loaded from the precomputed converted data.
See also
muspy.Dataset
- Base class for MusPy datasets.
-
convert
(kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, verbose: bool = True, **kwargs) → FolderDatasetType[source]¶ Convert and save the Music objects.
The converted files are named by their indices and saved to root/_converted. The original filenames can be found in the filenames attribute. For example, the file at filenames[i] will be converted and saved to {i}.json.
Parameters: - kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
- n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
- ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
- verbose (bool, default: True) – Whether to be verbose.
- **kwargs – Keyword arguments to pass to
muspy.save()
.
Returns: Return type: Object itself.
-
converted_dir
¶ Path to the root directory of the converted dataset.
-
load
(filename: Union[str, pathlib.Path]) → muspy.music.Music[source]¶ Load a file into a Music object.
-
class
muspy.
HaydnOp20Dataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Haydn Op.20 Dataset.
-
class
muspy.
HymnalDataset
(root: Union[str, pathlib.Path], download: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None)[source]¶ Hymnal Dataset.
-
class
muspy.
HymnalTuneDataset
(root: Union[str, pathlib.Path], download: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None)[source]¶ Hymnal Dataset (tune only).
-
class
muspy.
JSBChoralesDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Johann Sebastian Bach Chorales Dataset.
-
class
muspy.
LakhMIDIAlignedDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Lakh MIDI Dataset - aligned subset.
-
class
muspy.
LakhMIDIDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Lakh MIDI Dataset.
-
class
muspy.
LakhMIDIMatchedDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Lakh MIDI Dataset - matched subset.
-
class
muspy.
MAESTRODatasetV1
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ MAESTRO Dataset V1 (MIDI only).
-
class
muspy.
MAESTRODatasetV2
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ MAESTRO Dataset V2 (MIDI only).
-
class
muspy.
MAESTRODatasetV3
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ MAESTRO Dataset V3 (MIDI only).
-
class
muspy.
Music21Dataset
(composer: str = None)[source]¶ Class for datasets containing files in the music21 corpus.
Parameters: - composer (str) – Name of a composer or a collection. Please refer to the music21 corpus reference page for a full list [1].
- extensions (list of str) – File extensions of desired files.
References
[1] https://web.mit.edu/music21/doc/about/referenceCorpus.html
-
convert
(root: Union[str, pathlib.Path], kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True) → muspy.datasets.base.MusicDataset[source]¶ Convert and save the Music objects.
Parameters: - root (str or Path) – Root directory to save the data.
- kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
- n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
- ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
-
class
muspy.
MusicDataset
(root: Union[str, pathlib.Path], kind: str = None)[source]¶ Class for datasets of MusPy JSON/YAML files.
Parameters: - root (str or Path) – Root directory of the dataset.
- kind ({'json', 'yaml'}, optional) – File formats to include in the dataset. Defaults to include both JSON and YAML files.
-
root
¶ Root directory of the dataset.
Type: Path
-
filenames
¶ Path to the files, relative to root.
Type: list of Path
See also
muspy.Dataset
- Base class for MusPy datasets.
-
class
muspy.
MusicNetDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ MusicNet Dataset (MIDI only).
-
class
muspy.
NESMusicDatabase
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ NES Music Database.
-
class
muspy.
NottinghamDatabase
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Nottingham Database.
-
class
muspy.
RemoteABCFolderDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Base class for remote datasets storing ABC files in a folder.
See also
muspy.ABCFolderDataset
- Class for datasets storing ABC files in a folder.
muspy.RemoteDataset
- Base class for remote MusPy datasets.
-
class
muspy.
RemoteDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, verbose: bool = True)[source]¶ Base class for remote MusPy datasets.
This class extends muspy.Dataset to support remote datasets. To build a custom remote dataset, please refer to the documentation of muspy.Dataset for details. In addition, set the class attribute _sources to the URLs of the source files (see Notes).
Parameters: Raises: RuntimeError: – If download_and_extract is False but the file {root}/.muspy.success does not exist (see below).
Important
muspy.Dataset.exists() depends solely on a special file named .muspy.success in the directory {root}/_converted/. This file serves as an indicator for the existence and integrity of the dataset. It will automatically be created if the dataset is successfully downloaded and extracted by muspy.Dataset.download_and_extract(). If the dataset is downloaded manually, make sure to create the .muspy.success file in the directory {root}/_converted/ to prevent errors.
Notes
The class attribute _sources is a dictionary storing the following information for each source file.
- filename (str): Name to save the file.
- url (str): URL to the file.
- archive (bool): Whether the file is an archive.
- md5 (str, optional): Expected MD5 checksum of the file.
- sha256 (str, optional): Expected SHA256 checksum of the file.
Here is an example:
_sources = { "example": { "filename": "example.tar.gz", "url": "https://www.example.com/example.tar.gz", "archive": True, "md5": None, "sha256": None, } }
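The optional md5 and sha256 fields above can be used to verify the integrity of a downloaded source file. A minimal sketch of such a check with Python's hashlib follows; the helper name verify_checksum is hypothetical and not part of the MusPy API:

```python
import hashlib

def verify_checksum(data: bytes, md5: str = None, sha256: str = None) -> bool:
    """Return True if every provided checksum matches the data.

    A checksum of None (as in the _sources example above) skips that check.
    """
    if md5 is not None and hashlib.md5(data).hexdigest() != md5:
        return False
    if sha256 is not None and hashlib.sha256(data).hexdigest() != sha256:
        return False
    return True

data = b"example archive bytes"
assert verify_checksum(data, md5=hashlib.md5(data).hexdigest())
assert not verify_checksum(data, md5="0" * 32)
```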
See also
muspy.Dataset
- Base class for MusPy datasets.
-
download
(overwrite: bool = False, verbose: bool = True) → RemoteDatasetType[source]¶ Download the dataset source(s).
Parameters: Returns: Return type: Object itself.
-
download_and_extract
(overwrite: bool = False, cleanup: bool = False, verbose: bool = True) → RemoteDatasetType[source]¶ Download source datasets and extract the downloaded archives.
Parameters: Returns: Return type: Object itself.
-
class
muspy.
RemoteFolderDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Base class for remote datasets storing files in a folder.
Parameters: - download_and_extract (bool, default: False) – Whether to download and extract the dataset.
- cleanup (bool, default: False) – Whether to remove the source archive(s).
- convert (bool, default: False) – Whether to convert the dataset to MusPy JSON/YAML files. If False, check whether converted data exists; if so, disable on-the-fly mode, otherwise enable on-the-fly mode and issue a warning.
- kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
- n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
- ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
- use_converted (bool, optional) – Whether to disable on-the-fly mode and use the converted data. Defaults to True if converted data exist, otherwise False.
See also
muspy.FolderDataset
- Class for datasets storing files in a folder.
muspy.RemoteDataset
- Base class for remote MusPy datasets.
-
class
muspy.
RemoteMusicDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, kind: str = None, verbose: bool = True)[source]¶ Base class for remote datasets of MusPy JSON/YAML files.
Parameters: - root (str or Path) – Root directory of the dataset.
- download_and_extract (bool, default: False) – Whether to download and extract the dataset.
- overwrite (bool, default: False) – Whether to overwrite existing file(s).
- cleanup (bool, default: False) – Whether to remove the source archive(s).
- kind ({'json', 'yaml'}, optional) – File formats to include in the dataset. Defaults to include both JSON and YAML files.
- verbose (bool, default: True) – Whether to be verbose.
-
root
¶ Root directory of the dataset.
Type: Path
-
filenames
¶ Path to the files, relative to root.
Type: list of Path
See also
muspy.MusicDataset
- Class for datasets of MusPy JSON/YAML files.
muspy.RemoteDataset
- Base class for remote MusPy datasets.
-
class
muspy.
WikifoniaDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, overwrite: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: bool = None, verbose: bool = True)[source]¶ Wikifonia dataset.
-
muspy.
get_dataset
(key: str) → Type[muspy.datasets.base.Dataset][source]¶ Return a certain dataset class by key.
Parameters: key (str) – Dataset key (case-insensitive). Returns: Return type: The corresponding dataset class.
-
muspy.
list_datasets
()[source]¶ Return all supported dataset classes as a list.
Returns: Return type: A list of all supported dataset classes.
-
muspy.
download_bravura_font
(overwrite: bool = False)[source]¶ Download the Bravura font.
Parameters: overwrite (bool, default: False) – Whether to overwrite an existing file.
-
muspy.
download_musescore_soundfont
(overwrite: bool = False)[source]¶ Download the MuseScore General soundfont.
Parameters: overwrite (bool, default: False) – Whether to overwrite an existing file.
-
muspy.
get_bravura_font_dir
() → pathlib.Path[source]¶ Return path to the directory of the Bravura font.
-
muspy.
get_musescore_soundfont_dir
() → pathlib.Path[source]¶ Return path to the MuseScore General soundfont directory.
-
muspy.
get_musescore_soundfont_path
() → pathlib.Path[source]¶ Return path to the MuseScore General soundfont.
-
muspy.
from_event_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, max_time_shift: int = 100, velocity_bins: int = 32, default_velocity: int = 64, duplicate_note_mode: str = 'fifo') → muspy.music.Music[source]¶ Decode event-based representation into a Music object.
Parameters: - array (ndarray) – Array in event-based representation to decode.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
- is_drum (bool, default: False) – Whether it is a percussion track.
- use_single_note_off_event (bool, default: False) – Whether to use a single note-off event for all the pitches. If True, a note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
- use_end_of_sequence_event (bool, default: False) – Whether to append an end-of-sequence event to the encoded sequence.
- max_time_shift (int, default: 100) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
- velocity_bins (int, default: 32) – Number of velocity bins to use.
- default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
- duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off event is presented while there are multiple corresponding note-on events that have not yet been closed, we need a policy to decide which note-on events to close. This is only effective when use_single_note_off_event is False.
- ’fifo’ (first in, first out): close the earliest note-on event
- ’lifo’ (last in, first out): close the latest note-on event
- ’all’: close all note-on events
Returns: Decoded Music object.
Return type: References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
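The three duplicate_note_mode policies can be illustrated with a small helper over the onset times of still-open note-ons for one pitch. This is an illustrative sketch of the policy semantics, not MusPy's internal implementation:

```python
def close_note_ons(pending, mode="fifo"):
    """Return (closed, remaining) onset times under a duplicate-note policy.

    pending is the list of onset times of note-ons that have not yet been
    closed when a note-off arrives, in the order they occurred.
    """
    if mode == "fifo":   # first in, first out: close the earliest note-on
        return [pending[0]], pending[1:]
    if mode == "lifo":   # last in, first out: close the latest note-on
        return [pending[-1]], pending[:-1]
    if mode == "all":    # close every open note-on
        return list(pending), []
    raise ValueError(f"unknown mode: {mode}")

pending = [0, 4, 8]  # three overlapping note-ons for the same pitch
assert close_note_ons(pending, "fifo") == ([0], [4, 8])
assert close_note_ons(pending, "lifo") == ([8], [0, 4])
assert close_note_ons(pending, "all") == ([0, 4, 8], [])
```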
-
muspy.
from_mido
(midi: mido.midifiles.midifiles.MidiFile, duplicate_note_mode: str = 'fifo') → muspy.music.Music[source]¶ Return a mido MidiFile object as a Music object.
Parameters: - midi (
mido.MidiFile
) – Mido MidiFile object to convert. - duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off message is presented while there are multiple corresponding note-on messages that have not yet been closed, we need a policy to decide which note-on messages to close.
- ’fifo’ (first in, first out): close the earliest note-on message
- ’lifo’ (last in, first out): close the latest note-on message
- ’all’: close all note-on messages
Returns: Converted Music object.
Return type: - midi (
-
muspy.
from_music21
(stream: music21.stream.base.Stream, resolution: int = 24) → Union[muspy.music.Music, List[muspy.music.Music], muspy.classes.Track, List[muspy.classes.Track]][source]¶ Return a music21 Stream object as Music or Track object(s).
Parameters: - stream (music21.stream.Stream) – Stream object to convert.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
Returns: Converted Music or Track object(s).
Return type:
-
muspy.
from_music21_opus
(opus: music21.stream.base.Opus, resolution: int = 24) → List[muspy.music.Music][source]¶ Return a music21 Opus object as a list of Music objects.
Parameters: - opus (music21.stream.Opus) – Opus object to convert.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
Returns: List of converted Music objects.
Return type:
-
muspy.
from_music21_part
(part: music21.stream.base.Part, resolution: int = 24) → Union[muspy.classes.Track, List[muspy.classes.Track]][source]¶ Return a music21 Part object as Track object(s).
Parameters: - part (music21.stream.Part) – Part object to parse.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
Returns: Parsed track(s).
Return type: muspy.Track
or list ofmuspy.Track
-
muspy.
from_music21_score
(score: music21.stream.base.Score, resolution: int = 24) → muspy.music.Music[source]¶ Return a music21 Score object as a Music object.
Parameters: - score (music21.stream.Score) – Score object to convert.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
Returns: Converted Music object.
Return type:
-
muspy.
from_note_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_start_end: bool = False, encode_velocity: bool = True, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode note-based representation into a Music object.
Parameters: - array (ndarray) – Array in note-based representation to decode.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
- is_drum (bool, default: False) – Whether it is a percussion track.
- use_start_end (bool, default: False) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
- encode_velocity (bool, default: True) – Whether to encode note velocities.
- default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding. Only used when encode_velocity is True.
Returns: Decoded Music object.
Return type: References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
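The note-based representation stores one note per row. A minimal decoding sketch follows; the column order (time, pitch, duration, velocity) and the start/end variant are assumptions for illustration, and this is not MusPy's implementation:

```python
def decode_notes(rows, use_start_end=False, default_velocity=64):
    """Decode note-based rows into (time, pitch, duration, velocity) tuples.

    Assumes columns ordered (time, pitch, duration[, velocity]); with
    use_start_end, the first and third columns are start and end times
    instead of time and duration. Missing velocities fall back to
    default_velocity, mirroring the default_velocity parameter above.
    """
    notes = []
    for row in rows:
        time, pitch, third = int(row[0]), int(row[1]), int(row[2])
        duration = third - time if use_start_end else third
        velocity = int(row[3]) if len(row) > 3 else default_velocity
        notes.append((time, pitch, duration, velocity))
    return notes

# Two notes at resolution 24: a quarter note then an eighth note.
assert decode_notes([[0, 60, 24, 80], [24, 64, 12, 80]]) == \
    [(0, 60, 24, 80), (24, 64, 12, 80)]
# With use_start_end, a note spanning steps 0-24 has duration 24.
assert decode_notes([[0, 60, 24]], use_start_end=True) == [(0, 60, 24, 64)]
```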
-
muspy.
from_object
(obj: Union[music21.stream.base.Stream, mido.midifiles.midifiles.MidiFile, pretty_midi.pretty_midi.PrettyMIDI, pypianoroll.multitrack.Multitrack], **kwargs) → Union[muspy.music.Music, List[muspy.music.Music], muspy.classes.Track, List[muspy.classes.Track]][source]¶ Return an outside object as a Music object.
Parameters: - obj – Object to convert. Supported objects are music21.Stream,
mido.MidiFile
,pretty_midi.PrettyMIDI
, andpypianoroll.Multitrack
objects. - **kwargs – Keyword arguments to pass to
muspy.from_music21()
,muspy.from_mido()
,from_pretty_midi()
orfrom_pypianoroll()
.
Returns: Converted Music object.
Return type: - obj – Object to convert. Supported objects are music21.Stream,
-
muspy.
from_pianoroll_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, encode_velocity: bool = True, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode piano-roll representation into a Music object.
Parameters: - array (ndarray) – Array in piano-roll representation to decode.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
- is_drum (bool, default: False) – Whether it is a percussion track.
- encode_velocity (bool, default: True) – Whether to encode velocities.
- default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding. Only used when encode_velocity is True.
Returns: Decoded Music object.
Return type: References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
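The piano-roll decoding idea, a time-by-pitch array of velocities turned back into notes, can be sketched by merging consecutive nonzero steps per pitch. This is an illustrative reimplementation under that assumption, not MusPy's code:

```python
def pianoroll_to_notes(pianoroll):
    """Decode a (time, 128) piano roll (velocity values, 0 = off) into
    (time, pitch, duration, velocity) tuples by merging consecutive
    steps of equal nonzero velocity into one note."""
    notes = []
    n_steps = len(pianoroll)
    for pitch in range(128):
        t = 0
        while t < n_steps:
            velocity = pianoroll[t][pitch]
            if velocity > 0:
                start = t
                while t < n_steps and pianoroll[t][pitch] == velocity:
                    t += 1
                notes.append((start, pitch, t - start, velocity))
            else:
                t += 1
    return sorted(notes)

# Two steps of pitch 60 at velocity 100, then one step of pitch 62.
roll = [[0] * 128 for _ in range(3)]
roll[0][60] = roll[1][60] = 100
roll[2][62] = 80
assert pianoroll_to_notes(roll) == [(0, 60, 2, 100), (2, 62, 1, 80)]
```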
-
muspy.
from_pitch_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_hold_state: bool = False, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode pitch-based representation into a Music object.
Parameters: - array (ndarray) – Array in pitch-based representation to decode.
- resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
- is_drum (bool, default: False) – Whether it is a percussion track.
- use_hold_state (bool, default: False) – Whether to use a special state for holds.
- default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
Returns: Decoded Music object.
Return type: References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
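The pitch-based representation has one token per time step, and use_hold_state changes how repeated pitches are read back. The sketch below illustrates that difference; the special token values REST = 128 and HOLD = 129 are assumptions for this example, not MusPy's documented constants:

```python
REST, HOLD = 128, 129  # assumed special tokens for rest and hold states

def decode_pitch_sequence(seq, use_hold_state=False, default_velocity=64):
    """Decode one token per time step into (time, pitch, duration, velocity).

    Without a hold state, contiguous repeats of a pitch merge into one long
    note; with a hold state, only HOLD tokens extend the previous note, so
    a repeated pitch token starts a new note.
    """
    notes = []
    for t, token in enumerate(seq):
        if token == REST:
            continue
        if use_hold_state and token == HOLD:
            if notes:
                time, pitch, dur, vel = notes[-1]
                notes[-1] = (time, pitch, dur + 1, vel)
            continue
        if (not use_hold_state and notes and notes[-1][1] == token
                and notes[-1][0] + notes[-1][2] == t):
            time, pitch, dur, vel = notes[-1]
            notes[-1] = (time, pitch, dur + 1, vel)
        else:
            notes.append((t, token, 1, default_velocity))
    return notes

# Without holds, [60, 60] is one two-step note; with holds it is two notes.
assert decode_pitch_sequence([60, 60, REST]) == [(0, 60, 2, 64)]
assert decode_pitch_sequence([60, HOLD, 60], use_hold_state=True) == \
    [(0, 60, 2, 64), (2, 60, 1, 64)]
```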
-
muspy.
from_pretty_midi
(midi: pretty_midi.pretty_midi.PrettyMIDI, resolution: int = None) → muspy.music.Music[source]¶ Return a pretty_midi PrettyMIDI object as a Music object.
Parameters: - midi (
pretty_midi.PrettyMIDI
) – PrettyMIDI object to convert. - resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
Returns: Converted Music object.
Return type: - midi (
-
muspy.
from_pypianoroll
(multitrack: pypianoroll.multitrack.Multitrack, default_velocity: int = 64) → muspy.music.Music[source]¶ Return a Pypianoroll Multitrack object as a Music object.
Parameters: - multitrack (
pypianoroll.Multitrack
) – Pypianoroll Multitrack object to convert. - default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
Returns: music – Converted MusPy Music object.
Return type: - multitrack (
-
muspy.
from_pypianoroll_track
(track: pypianoroll.track.Track, default_velocity: int = 64) → muspy.classes.Track[source]¶ Return a Pypianoroll Track object as a Track object.
Parameters: - track (
pypianoroll.Track
) – Pypianoroll Track object to convert. - default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
Returns: Converted track.
Return type: - track (
-
muspy.
from_representation
(array: numpy.ndarray, kind: str, **kwargs) → muspy.music.Music[source]¶ Decode an array in the given representation into a Music object.
Parameters: - array (
numpy.ndarray
) – Array in a supported representation. - kind (str, {'pitch', 'pianoroll', 'event', 'note'}) – Data representation.
- **kwargs – Keyword arguments to pass to
muspy.from_pitch_representation()
,muspy.from_pianoroll_representation()
,from_event_representation()
orfrom_note_representation()
.
Returns: Converted Music object.
Return type: - array (
-
muspy.
load
(path: Union[str, pathlib.Path, TextIO], kind: str = None, **kwargs) → muspy.music.Music[source]¶ Load a JSON or a YAML file into a Music object.
This is a wrapper function for
muspy.load_json()
andmuspy.load_yaml()
.Parameters: - path (str, Path or TextIO) – Path to the file, or the file object to load.
- kind ({'json', 'yaml'}, optional) – Format of the file. Defaults to infer from the extension.
- **kwargs – Keyword arguments to pass to
muspy.load_json()
ormuspy.load_yaml()
.
Returns: Loaded Music object.
Return type: See also
muspy.load_json()
- Load a JSON file into a Music object.
muspy.load_yaml()
- Load a YAML file into a Music object.
muspy.read()
- Read a MIDI/MusicXML/ABC file into a Music object.
-
muspy.
load_json
(path: Union[str, pathlib.Path, TextIO], compressed: bool = None) → muspy.music.Music[source]¶ Load a JSON file into a Music object.
Parameters: Returns: Loaded Music object.
Return type: Notes
When a path is given, assume UTF-8 encoding and gzip compression if compressed=True.
-
muspy.
load_yaml
(path: Union[str, pathlib.Path, TextIO], compressed: bool = None) → muspy.music.Music[source]¶ Load a YAML file into a Music object.
Parameters: Returns: Loaded Music object.
Return type: Notes
When a path is given, assume UTF-8 encoding and gzip compression if compressed=True.
-
muspy.
read
(path: Union[str, pathlib.Path], kind: str = None, **kwargs) → Union[muspy.music.Music, List[muspy.music.Music]][source]¶ Read a MIDI/MusicXML/ABC file into a Music object.
Parameters: - path (str or Path) – Path to the file to read.
- kind ({'midi', 'musicxml', 'abc'}, optional) – Format of the file. Defaults to infer from the extension.
- **kwargs – Keyword arguments to pass to
muspy.read_midi()
,muspy.read_musicxml()
orread_abc()
.
Returns: Converted Music object(s).
Return type: muspy.Music
or list ofmuspy.Music
See also
muspy.load()
- Load a JSON or a YAML file into a Music object.
-
muspy.
read_abc
(path: Union[str, pathlib.Path], number: int = None, resolution=24) → Union[muspy.music.Music, List[muspy.music.Music]][source]¶ Read an ABC file into Music object(s) using the music21 backend.
Parameters: Returns: Converted Music object(s).
Return type: list of
muspy.Music
-
muspy.
read_abc_string
(data_str: str, number: int = None, resolution=24) → Union[muspy.music.Music, List[muspy.music.Music]][source]¶ Read ABC data into Music object(s) using the music21 backend.
Parameters: Returns: Converted Music object(s).
Return type:
-
muspy.
read_midi
(path: Union[str, pathlib.Path], backend: str = 'mido', duplicate_note_mode: str = 'fifo') → muspy.music.Music[source]¶ Read a MIDI file into a Music object.
Parameters: - path (str or Path) – Path to the MIDI file to read.
- backend ({'mido', 'pretty_midi'}, default: 'mido') – Backend to use.
- duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off message is presented while there are multiple corresponding note-on messages that have not yet been closed, we need a policy to decide which note-on messages to close. Only used when backend is ‘mido’.
- ’fifo’ (first in, first out): close the earliest note-on message
- ’lifo’ (last in, first out): close the latest note-on message
- ’all’: close all note-on messages
Returns: Converted Music object.
Return type:
-
muspy.
read_musicxml
(path: Union[str, pathlib.Path], resolution: int = None, compressed: bool = None) → muspy.music.Music[source]¶ Read a MusicXML file into a Music object.
Parameters: Returns: Converted Music object.
Return type: Notes
Grace notes and unpitched notes are not supported.
-
muspy.
drum_in_pattern_rate
(music: muspy.music.Music, meter: str) → float[source]¶ Return the ratio of drum notes in a certain drum pattern.
The drum-in-pattern rate is defined as the ratio of the number of drum notes that fall in a certain drum pattern to the total number of drum notes. Only drum tracks are considered. Return NaN if no drum note is found. This metric is used in [1].
\[drum\_in\_pattern\_rate = \frac{ \#(drum\_notes\_in\_pattern)}{\#(drum\_notes)}\]Parameters: - music (
muspy.Music
) – Music object to evaluate. - meter (str, {'duple', 'triple'}) – Meter of the drum pattern.
Returns: Drum-in-pattern rate.
Return type: See also
muspy.drum_pattern_consistency()
- Compute the largest drum-in-pattern rate.
References
- Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
drum_pattern_consistency
(music: muspy.music.Music) → float[source]¶ Return the largest drum-in-pattern rate.
The drum pattern consistency is defined as the largest drum-in-pattern rate over duple and triple meters. Only drum tracks are considered. Return NaN if no drum note is found.
\[drum\_pattern\_consistency = \max_{meter}{ drum\_in\_pattern\_rate(meter)}\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Drum pattern consistency. Return type: float See also
muspy.drum_in_pattern_rate()
- Compute the ratio of drum notes in a certain drum pattern.
-
muspy.
empty_beat_rate
(music: muspy.music.Music) → float[source]¶ Return the ratio of empty beats.
The empty-beat rate is defined as the ratio of the number of empty beats (where no note is played) to the total number of beats. Return NaN if song length is zero. This metric is also implemented in Pypianoroll [1].
\[empty\_beat\_rate = \frac{\#(empty\_beats)}{\#(beats)}\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Empty-beat rate. Return type: float See also
muspy.empty_measure_rate()
- Compute the ratio of empty measures.
References
- Hao-Wen Dong, Wen-Yi Hsiao, and Yi-Hsuan Yang, “Pypianoroll: Open Source Python Package for Handling Multitrack Pianorolls,” in Late-Breaking Demos of the 18th International Society for Music Information Retrieval Conference (ISMIR), 2018.
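The empty-beat rate above can be sketched directly from its formula. The version below works on a plain list of note onset times rather than a Music object, counts a beat as occupied if any onset falls in it (a simplification of "no note is played"), and returns None in place of NaN:

```python
def empty_beat_rate(note_times, resolution, length):
    """Ratio of beats (windows of `resolution` time steps) with no onset.

    note_times are note onset time steps; length is the song length in
    time steps. Returns None when the song length yields zero beats.
    """
    n_beats = length // resolution
    if n_beats == 0:
        return None
    occupied = {t // resolution for t in note_times if t < n_beats * resolution}
    return (n_beats - len(occupied)) / n_beats

# Onsets at steps 0 and 24 with resolution 24 fill beats 0 and 1 of 4.
assert empty_beat_rate([0, 24], resolution=24, length=96) == 0.5
```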
-
muspy.
empty_measure_rate
(music: muspy.music.Music, measure_resolution: int) → float[source]¶ Return the ratio of empty measures.
The empty-measure rate is defined as the ratio of the number of empty measures (where no note is played) to the total number of measures. Note that this metric only works for songs with a constant time signature. Return NaN if song length is zero. This metric is used in [1].
\[empty\_measure\_rate = \frac{\#(empty\_measures)}{\#(measures)}\]Parameters: - music (
muspy.Music
) – Music object to evaluate. - measure_resolution (int) – Time steps per measure.
Returns: Empty-measure rate.
Return type: See also
muspy.empty_beat_rate()
- Compute the ratio of empty beats.
References
- Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
groove_consistency
(music: muspy.music.Music, measure_resolution: int) → float[source]¶ Return the groove consistency.
The groove consistency is defined as the mean Hamming distance between neighboring measures.
\[groove\_consistency = 1 - \frac{1}{T - 1} \sum_{i = 1}^{T - 1}{ d(G_i, G_{i + 1})}\]Here, \(T\) is the number of measures, \(G_i\) is the binary onset vector of the \(i\)-th measure (a one at each position that has an onset, otherwise a zero), and \(d(G, G')\) is the Hamming distance between two vectors \(G\) and \(G'\). Note that this metric only works for songs with a constant time signature. Return NaN if the number of measures is less than two. This metric is used in [1].
Parameters: - music (
muspy.Music
) – Music object to evaluate. - measure_resolution (int) – Time steps per measure.
Returns: Groove consistency.
Return type: References
- Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
- music (
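The formula above can be sketched over per-measure binary onset vectors. This is an illustrative reimplementation, assuming the Hamming distance is normalized by the measure resolution so the result lies in [0, 1], and returning None rather than NaN:

```python
def hamming(a, b):
    """Number of positions where two equal-length binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def groove_consistency(onset_vectors):
    """1 minus the mean normalized Hamming distance between neighboring
    measures' binary onset vectors. Returns None for fewer than two
    measures."""
    n_measures = len(onset_vectors)
    if n_measures < 2:
        return None
    resolution = len(onset_vectors[0])  # time steps per measure
    mean_distance = sum(
        hamming(onset_vectors[i], onset_vectors[i + 1]) / resolution
        for i in range(n_measures - 1)
    ) / (n_measures - 1)
    return 1 - mean_distance

# Identical measures give a perfectly consistent groove.
measure = [1, 0, 1, 0]
assert groove_consistency([measure, measure, measure]) == 1.0
# Completely different neighboring measures give zero consistency.
assert groove_consistency([[1, 1, 1, 1], [0, 0, 0, 0]]) == 0.0
```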
-
muspy.
n_pitch_classes_used
(music: muspy.music.Music) → int[source]¶ Return the number of unique pitch classes used.
Drum tracks are ignored.
Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Number of unique pitch classes used. Return type: int See also
muspy.n_pitches_used()
- Compute the number of unique pitches used.
-
muspy.
n_pitches_used
(music: muspy.music.Music) → int[source]¶ Return the number of unique pitches used.
Drum tracks are ignored.
Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Number of unique pitches used. Return type: int See also
muspy.n_pitch_classes_used()
- Compute the number of unique pitch classes used.
-
muspy.
pitch_class_entropy
(music: muspy.music.Music) → float[source]¶ Return the entropy of the normalized note pitch class histogram.
The pitch class entropy is defined as the Shannon entropy of the normalized note pitch class histogram. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_class\_entropy = -\sum_{i = 0}^{11}{ P(pitch\_class=i) \times \log_2 P(pitch\_class=i)}\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Pitch class entropy. Return type: float See also
muspy.pitch_entropy()
- Compute the entropy of the normalized pitch histogram.
References
- Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
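The formula above is the Shannon entropy of the 12-bin pitch-class histogram. A minimal sketch over a list of MIDI pitch numbers (not MusPy's Music-object implementation), returning None rather than NaN when no note exists:

```python
from collections import Counter
from math import log2

def pitch_class_entropy(pitches):
    """Shannon entropy of the normalized pitch-class histogram.

    pitches are MIDI pitch numbers; pitch class is pitch modulo 12.
    Zero-count classes contribute nothing to the sum.
    """
    if not pitches:
        return None
    counts = Counter(p % 12 for p in pitches)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Octaves of C are one pitch class: zero entropy.
assert pitch_class_entropy([48, 60, 72]) == 0.0
# A C major triad uses three equally likely classes: log2(3) bits.
assert abs(pitch_class_entropy([60, 64, 67]) - log2(3)) < 1e-12
```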
-
muspy.
pitch_entropy
(music: muspy.music.Music) → float[source]¶ Return the entropy of the normalized note pitch histogram.
The pitch entropy is defined as the Shannon entropy of the normalized note pitch histogram. Drum tracks are ignored. Return NaN if no note is found.
\[pitch\_entropy = -\sum_{i = 0}^{127}{ P(pitch=i) \log_2 P(pitch=i)}\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Pitch entropy. Return type: float See also
muspy.pitch_class_entropy()
- Compute the entropy of the normalized pitch class histogram.
-
muspy.
pitch_in_scale_rate
(music: muspy.music.Music, root: int, mode: str) → float[source]¶ Return the ratio of pitches in a certain musical scale.
The pitch-in-scale rate is defined as the ratio of the number of notes in a certain scale to the total number of notes. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_in\_scale\_rate = \frac{\#(notes\_in\_scale)}{\#(notes)}\]Parameters: - music (
muspy.Music
) – Music object to evaluate. - root (int) – Root of the scale.
- mode (str, {'major', 'minor'}) – Mode of the scale.
Returns: Pitch-in-scale rate.
Return type: See also
muspy.scale_consistency()
- Compute the largest pitch-in-scale rate.
References
- Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
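The pitch-in-scale rate and its maximum over roots and modes can be sketched with pitch-class arithmetic. This is an illustrative version over MIDI pitch numbers; using the natural minor scale for 'minor' is an assumption of this sketch:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]
MINOR = [0, 2, 3, 5, 7, 8, 10]  # natural minor, assumed here

def pitch_in_scale_rate(pitches, root, mode):
    """Ratio of pitches whose pitch class lies in the given scale."""
    degrees = MAJOR if mode == "major" else MINOR
    scale = {(root + d) % 12 for d in degrees}
    return sum(p % 12 in scale for p in pitches) / len(pitches)

def scale_consistency(pitches):
    """Largest pitch-in-scale rate over all 24 major and minor scales,
    mirroring the max over (root, mode) in the formula above."""
    return max(pitch_in_scale_rate(pitches, root, mode)
               for root in range(12) for mode in ("major", "minor"))

# A C-major fragment fits the C major scale perfectly.
assert scale_consistency([60, 62, 64, 65, 67]) == 1.0
```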
-
muspy.
pitch_range
(music: muspy.music.Music) → int[source]¶ Return the pitch range.
Drum tracks are ignored. Return zero if no note is found.
Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Pitch range. Return type: int
-
muspy.
polyphony
(music: muspy.music.Music) → float[source]¶ Return the average number of pitches being played concurrently.
The polyphony is defined as the average number of pitches being played at the same time, evaluated only at time steps where at least one pitch is on. Drum tracks are ignored. Return NaN if no note is found.
\[polyphony = \frac{ \#(pitches\_when\_at\_least\_one\_pitch\_is\_on) }{ \#(time\_steps\_where\_at\_least\_one\_pitch\_is\_on) }\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Polyphony. Return type: float See also
muspy.polyphony_rate()
- Compute the ratio of time steps where multiple pitches are on.
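The polyphony definition above, averaging over only those time steps where at least one pitch sounds, can be sketched over (time, pitch, duration) tuples. This is an illustrative version, not MusPy's Music-object implementation, returning None in place of NaN:

```python
def polyphony(notes):
    """Average number of pitches sounding at time steps where at least
    one pitch is on. notes are (time, pitch, duration) tuples in time
    steps."""
    active = {}  # time step -> number of sounding pitches
    for time, _pitch, duration in notes:
        for t in range(time, time + duration):
            active[t] = active.get(t, 0) + 1
    if not active:
        return None
    return sum(active.values()) / len(active)

# Two notes overlapping for 4 steps, then one solo 4-step note: (8 + 4) / 8.
notes = [(0, 60, 4), (0, 64, 4), (4, 62, 4)]
assert polyphony(notes) == 1.5
```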
-
muspy.
polyphony_rate
(music: muspy.music.Music, threshold: int = 2) → float[source]¶ Return the ratio of time steps where multiple pitches are on.
The polyphony rate is defined as the ratio of the number of time steps where multiple pitches are on to the total number of time steps. Drum tracks are ignored. Return NaN if song length is zero. This metric is used in [1], where it is called polyphonicity.
\[polyphony\_rate = \frac{ \#(time\_steps\_where\_multiple\_pitches\_are\_on) }{ \#(time\_steps) }\]Parameters: - music (
muspy.Music
) – Music object to evaluate. - threshold (int, default: 2) – Threshold of number of pitches to count into the numerator.
Returns: Polyphony rate.
Return type: See also
muspy.polyphony()
- Compute the average number of pitches being played at the same time.
References
- Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
scale_consistency
(music: muspy.music.Music) → float[source]¶ Return the largest pitch-in-scale rate.
The scale consistency is defined as the largest pitch-in-scale rate over all major and minor scales. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[scale\_consistency = \max_{root, mode}{ pitch\_in\_scale\_rate(root, mode)}\]Parameters: music ( muspy.Music
) – Music object to evaluate.Returns: Scale consistency. Return type: float See also
muspy.pitch_in_scale_rate()
- Compute the ratio of pitches in a certain musical scale.
References
- Olof Mogren, “C-RNN-GAN: Continuous recurrent neural networks with adversarial training,” in NeurIPS Workshop on Constructive Machine Learning, 2016.
-
class
muspy.
Music
(metadata: muspy.classes.Metadata = None, resolution: int = None, tempos: List[muspy.classes.Tempo] = None, key_signatures: List[muspy.classes.KeySignature] = None, time_signatures: List[muspy.classes.TimeSignature] = None, beats: List[muspy.classes.Beat] = None, lyrics: List[muspy.classes.Lyric] = None, annotations: List[muspy.classes.Annotation] = None, tracks: List[muspy.classes.Track] = None)[source]¶ A universal container for symbolic music.
This is the core class of MusPy. A Music object can be constructed in the following ways.
muspy.Music()
: Construct by setting values for attributes.muspy.Music.from_dict()
: Construct from a dictionary that stores the attributes and their values as key-value pairs.muspy.read()
: Read from a MIDI, a MusicXML or an ABC file.muspy.load()
: Load from a JSON or a YAML file saved bymuspy.save()
.muspy.from_object()
: Convert from a music21.Stream, amido.MidiFile
, apretty_midi.PrettyMIDI
or apypianoroll.Multitrack
object.
-
metadata
¶ Metadata.
Type: muspy.Metadata
, default: Metadata()
-
resolution
¶ Time steps per quarter note.
Type: int, default: muspy.DEFAULT_RESOLUTION (24)
-
tempos
¶ Tempo changes.
Type: list of muspy.Tempo
, default: []
-
key_signatures
¶ Key signature changes.
Type: list of muspy.KeySignature
, default: []
-
time_signatures
¶ Time signature changes.
Type: list of muspy.TimeSignature
, default: []
-
beats
¶ Beats.
Type: list of muspy.Beat
, default: []
-
lyrics
¶ Lyrics.
Type: list of muspy.Lyric
, default: []
-
annotations
¶ Annotations.
Type: list of muspy.Annotation
, default: []
-
tracks
¶ Music tracks.
Type: list of muspy.Track
, default: []
Note
Indexing a Music object returns the track of a certain index. That is,
music[idx]
returnsmusic.tracks[idx]
. Length of a Music object is the number of tracks. That is,len(music)
returnslen(music.tracks)
.-
adjust_resolution
(target: int = None, factor: float = None, rounding: Union[str, Callable] = 'round') → muspy.music.Music[source]¶ Adjust resolution and timing of all time-stamped objects.
Parameters: - target (int, optional) – Target resolution.
- factor (int or float, optional) – Factor used to adjust the resolution based on the formula: new_resolution = old_resolution * factor. For example, a factor of 2 doubles the resolution, and a factor of 0.5 halves the resolution.
- rounding ({'round', 'ceil', 'floor'} or callable, default: 'round') – Rounding mode.
Returns: Return type: Object itself.
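The rescaling formula above applies per time step. A minimal sketch of one such conversion follows; the helper name adjust_time is hypothetical and this is not MusPy's implementation (note that Python's built-in round uses banker's rounding on exact halves):

```python
import math

ROUNDING = {"round": round, "ceil": math.ceil, "floor": math.floor}

def adjust_time(time, old_resolution, target, rounding="round"):
    """Rescale a time step from old_resolution to a target resolution,
    using factor = target / old_resolution as in the formula above.
    rounding may be one of the named modes or any callable."""
    factor = target / old_resolution
    func = ROUNDING[rounding] if isinstance(rounding, str) else rounding
    return int(func(time * factor))

# Doubling the resolution (24 -> 48) doubles every time step.
assert adjust_time(24, 24, 48) == 48
# Halving (24 -> 12) with flooring: step 13 maps to step 6.
assert adjust_time(13, 24, 12, rounding="floor") == 6
```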
-
clip
(lower: int = 0, upper: int = 127) → muspy.music.Music[source]¶ Clip the velocity of each note for each track.
Parameters: Returns: Return type: Object itself.
-
get_end_time
(is_sorted: bool = False) → int[source]¶ Return the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
Parameters: is_sorted (bool, default: False) – Whether all the list attributes are sorted.
-
get_real_end_time
(is_sorted: bool = False) → float[source]¶ Return the end time in realtime.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. Assume 120 qpm (quarter notes per minute) if no tempo information is available.
Parameters: is_sorted (bool, default: False) – Whether all the list attributes are sorted.
-
infer_beats
() → List[muspy.classes.Beat][source]¶ Infer beats from the time signature changes.
This assumes that there is a downbeat at each time signature change (this is not always true, e.g., for a pickup measure).
Returns: List of beats inferred from the time signature changes. Return an empty list if no time signature is found. Return type: list of muspy.Beat
-
save
(path: Union[str, pathlib.Path], kind: str = None, **kwargs)[source]¶ Save losslessly to a JSON or a YAML file.
Refer to
muspy.save()
for full documentation.
-
save_json
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Save losslessly to a JSON file.
Refer to
muspy.save_json()
for full documentation.
-
save_yaml
(path: Union[str, pathlib.Path])[source]¶ Save losslessly to a YAML file.
Refer to
muspy.save_yaml()
for full documentation.
-
show
(kind: str, **kwargs)[source]¶ Show visualization.
Refer to
muspy.show()
for full documentation.
-
show_pianoroll
(**kwargs)[source]¶ Show pianoroll visualization.
Refer to
muspy.show_pianoroll()
for full documentation.
-
show_score
(**kwargs)[source]¶ Show score visualization.
Refer to
muspy.show_score()
for full documentation.
-
synthesize
(**kwargs) → numpy.ndarray[source]¶ Synthesize a Music object to raw audio.
Refer to
muspy.synthesize()
for full documentation.
-
to_event_representation
(**kwargs) → numpy.ndarray[source]¶ Return in event-based representation.
Refer to
muspy.to_event_representation()
for full documentation.
-
to_mido
(**kwargs) → mido.midifiles.midifiles.MidiFile[source]¶ Return as a MidiFile object.
Refer to
muspy.to_mido()
for full documentation.
-
to_music21
(**kwargs) → music21.stream.base.Stream[source]¶ Return as a Stream object.
Refer to
muspy.to_music21()
for full documentation.
-
to_note_representation
(**kwargs) → numpy.ndarray[source]¶ Return in note-based representation.
Refer to
muspy.to_note_representation()
for full documentation.
-
to_object
(kind: str, **kwargs)[source]¶ Return as an object in other libraries.
Refer to
muspy.to_object()
for full documentation.
-
to_pianoroll_representation
(**kwargs) → numpy.ndarray[source]¶ Return in piano-roll representation.
Refer to
muspy.to_pianoroll_representation()
for full documentation.
-
to_pitch_representation
(**kwargs) → numpy.ndarray[source]¶ Return in pitch-based representation.
Refer to
muspy.to_pitch_representation()
for full documentation.
-
to_pretty_midi
(**kwargs) → pretty_midi.pretty_midi.PrettyMIDI[source]¶ Return as a PrettyMIDI object.
Refer to
muspy.to_pretty_midi()
for full documentation.
-
to_pypianoroll
(**kwargs) → pypianoroll.multitrack.Multitrack[source]¶ Return as a Multitrack object.
Refer to
muspy.to_pypianoroll()
for full documentation.
-
to_representation
(kind: str, **kwargs) → numpy.ndarray[source]¶ Return in a specific representation.
Refer to
muspy.to_representation()
for full documentation.
-
transpose
(semitone: int) → muspy.music.Music[source]¶ Transpose all the notes by a number of semitones.
Parameters: semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers the pitches. Returns: Return type: Object itself. Notes
Drum tracks are skipped.
-
write
(path: Union[str, pathlib.Path], kind: str = None, **kwargs)[source]¶ Write to a MIDI, a MusicXML, an ABC or an audio file.
Refer to
muspy.write()
for full documentation.
-
write_abc
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to an ABC file.
Refer to
muspy.write_abc()
for full documentation.
-
write_audio
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to an audio file.
Refer to
muspy.write_audio()
for full documentation.
-
write_midi
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to a MIDI file.
Refer to
muspy.write_midi()
for full documentation.
-
write_musicxml
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to a MusicXML file.
Refer to
muspy.write_musicxml()
for full documentation.
-
muspy.
save
(path: Union[str, pathlib.Path, TextIO], music: Music, kind: str = None, **kwargs)[source]¶ Save a Music object losslessly to a JSON or a YAML file.
This is a wrapper function for
muspy.save_json()
andmuspy.save_yaml()
.Parameters: - path (str, Path or TextIO) – Path or file to save the data.
- music (
muspy.Music
) – Music object to save. - kind ({'json', 'yaml'}, optional) – Format to save. Defaults to infer from the extension.
- **kwargs – Keyword arguments to pass to
muspy.save_json()
ormuspy.save_yaml()
.
See also
muspy.save_json()
- Save a Music object to a JSON file.
muspy.save_yaml()
- Save a Music object to a YAML file.
muspy.write()
- Write a Music object to a MIDI/MusicXML/ABC/audio file.
Notes
The conversion can be lossy if any nonserializable object is used (for example, an Annotation object, which can store data of any type).
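When kind is not given, the format is inferred from the file extension. A hypothetical sketch of such extension-based inference (the helper name infer_save_kind is illustrative, not part of the MusPy API):

```python
from pathlib import Path

def infer_save_kind(path):
    """Infer 'json' or 'yaml' from the extension, looking past '.gz'."""
    suffixes = [s.lower() for s in Path(path).suffixes]
    if suffixes and suffixes[-1] == ".gz":  # compressed file, look one level deeper
        suffixes = suffixes[:-1]
    if suffixes and suffixes[-1] == ".json":
        return "json"
    if suffixes and suffixes[-1] in (".yaml", ".yml"):
        return "yaml"
    raise ValueError(f"Cannot infer format from path: {path}")

print(infer_save_kind("song.json"))     # json
print(infer_save_kind("song.yaml.gz"))  # yaml
```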
-
muspy.
save_json
(path: Union[str, pathlib.Path, TextIO], music: Music, skip_missing: bool = True, ensure_ascii: bool = False, compressed: bool = None, **kwargs)[source]¶ Save a Music object to a JSON file.
Parameters: - path (str, Path or TextIO) – Path or file to save the JSON data.
- music (
muspy.Music
) – Music object to save. - skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
- ensure_ascii (bool, default: False) – Whether to escape non-ASCII characters. Will be passed to json.dumps().
- compressed (bool, optional) – Whether to save as a compressed JSON file (.json.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
- **kwargs – Keyword arguments to pass to
json.dumps()
.
Notes
When a path is given, the file is written with UTF-8 encoding, and gzip compression is used if compressed is True.
-
muspy.
save_yaml
(path: Union[str, pathlib.Path, TextIO], music: Music, skip_missing: bool = True, allow_unicode: bool = True, compressed: bool = None, **kwargs)[source]¶ Save a Music object to a YAML file.
Parameters: - path (str, Path or TextIO) – Path or file to save the YAML data.
- music (
muspy.Music
) – Music object to save. - skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
- allow_unicode (bool, default: True) – Whether to allow non-ASCII characters to be written unescaped. Will be passed to yaml.dump.
. - compressed (bool, optional) – Whether to save as a compressed YAML file (.yaml.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
- **kwargs – Keyword arguments to pass to yaml.dump.
Notes
When a path is given, the file is written with UTF-8 encoding, and gzip compression is used if compressed is True.
-
muspy.
synthesize
(music: Music, soundfont_path: Union[str, pathlib.Path] = None, rate: int = 44100, gain: float = None) → numpy.ndarray[source]¶ Synthesize a Music object to raw audio.
Parameters: - music (
muspy.Music
) – Music object to write. - soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
- rate (int, default: 44100) – Sample rate (in samples per sec).
- gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
Returns: Synthesized waveform.
Return type: ndarray, dtype=int16, shape=(?, 2)
-
muspy.
to_default_event_representation
(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into the default event representation.
-
muspy.
to_event_representation
(music: Music, use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, encode_velocity: bool = False, force_velocity_event: bool = True, max_time_shift: int = 100, velocity_bins: int = 32, dtype=<class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into event-based representation.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
Parameters: - music (
muspy.Music
) – Music object to encode. - use_single_note_off_event (bool, default: False) – Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
- use_end_of_sequence_event (bool, default: False) – Whether to append an end-of-sequence event to the encoded sequence.
- encode_velocity (bool, default: False) – Whether to encode velocities.
- force_velocity_event (bool, default: True) – Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one).
- max_time_shift (int, default: 100) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
- velocity_bins (int, default: 32) – Number of velocity bins to use.
- dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns: Encoded array in event-based representation.
Return type: ndarray, shape=(?, 1)
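The event vocabulary described above can be sketched in plain Python. This is a hypothetical illustration of the default offsets (0-127 note-on, 128-255 note-off, 256-355 time-shift, 356-387 velocity), including time-shift decomposition and velocity binning; it is not the MusPy implementation, and the helper names are made up.

```python
# Vocabulary offsets of the default configuration (assumed from the text above).
NOTE_ON, NOTE_OFF, TIME_SHIFT, VELOCITY = 0, 128, 256, 356
MAX_TIME_SHIFT, VELOCITY_BINS = 100, 32

def time_shift_events(ticks):
    """Decompose a gap into one or more time-shift events (shift of n ticks
    maps to token TIME_SHIFT + n - 1, so shifts 1..100 fit in 256..355)."""
    events = []
    while ticks > 0:
        step = min(ticks, MAX_TIME_SHIFT)
        events.append(TIME_SHIFT + step - 1)
        ticks -= step
    return events

def velocity_event(velocity):
    """Quantize a MIDI velocity (0-127) into one of 32 bins."""
    return VELOCITY + velocity * VELOCITY_BINS // 128

# A 150-tick gap needs two time-shift events under max_time_shift=100.
print(time_shift_events(150))  # [355, 305]
print(velocity_event(64))      # 372
```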
-
muspy.
to_mido
(music: Music, use_note_off_message: bool = False)[source]¶ Return a Music object as a MidiFile object.
Parameters: - music (
muspy.Music
object) – Music object to convert. - use_note_off_message (bool, default: False) – Whether to use note-off messages. If False, note-on messages with zero velocity are used instead. The advantage to using note-on messages at zero velocity is that it can avoid sending additional status bytes when Running Status is employed.
Returns: Converted MidiFile object.
Return type: mido.MidiFile
-
muspy.
to_music21
(music: Music) → music21.stream.base.Score[source]¶ Return a Music object as a music21 Score object.
Parameters: music ( muspy.Music
) – Music object to convert.Returns: Converted music21 Score object. Return type: music21.stream.Score
-
muspy.
to_note_representation
(music: Music, use_start_end: bool = False, encode_velocity: bool = True, dtype: Union[numpy.dtype, type, str] = <class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into note-based representation.
The note-based representation represents music as a sequence of (time, pitch, duration, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as a tuple (0, 60, 4, 64). The output shape is N x D, where N is the number of notes and D is 4 when encode_velocity is True, otherwise 3. The values of the second dimension represent time, pitch, duration and velocity (discarded when encode_velocity is False).
Parameters: - music (
muspy.Music
) – Music object to encode. - use_start_end (bool, default: False) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
- encode_velocity (bool, default: True) – Whether to encode note velocities.
- dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns: Encoded array in note-based representation.
Return type: ndarray, shape=(?, 3 or 4)
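The tuple layout described above can be sketched in a few lines. This is a hypothetical illustration using plain Python, not the MusPy encoder; the Note namedtuple and encode_notes helper are made up for the example, and the column order under use_start_end is assumed to keep pitch in the middle.

```python
from collections import namedtuple

# A toy stand-in for muspy.Note, holding only the encoded attributes.
Note = namedtuple("Note", "time duration pitch velocity")

def encode_notes(notes, use_start_end=False, encode_velocity=True):
    """Encode notes as (time, pitch, duration[, velocity]) rows,
    or (start, pitch, end[, velocity]) when use_start_end is True."""
    rows = []
    for n in notes:
        if use_start_end:
            row = [n.time, n.pitch, n.time + n.duration]
        else:
            row = [n.time, n.pitch, n.duration]
        if encode_velocity:
            row.append(n.velocity)
        rows.append(row)
    return rows

print(encode_notes([Note(0, 4, 60, 64)]))  # [[0, 60, 4, 64]]
```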
-
muspy.
to_object
(music: Music, kind: str, **kwargs) → Union[music21.stream.base.Stream, mido.midifiles.midifiles.MidiFile, pretty_midi.pretty_midi.PrettyMIDI, pypianoroll.multitrack.Multitrack][source]¶ Return a Music object as an object in other libraries.
Supported classes are music21.Stream,
mido.MidiFile
,pretty_midi.PrettyMIDI
andpypianoroll.Multitrack
.Parameters: - music (
muspy.Music
) – Music object to convert. - kind (str, {'music21', 'mido', 'pretty_midi', 'pypianoroll'}) – Target class.
Returns: Converted object.
Return type: music21.Stream,
mido.MidiFile
,pretty_midi.PrettyMIDI
orpypianoroll.Multitrack
-
muspy.
to_performance_event_representation
(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into the performance event representation.
-
muspy.
to_pianoroll_representation
(music: Music, encode_velocity: bool = True, dtype: Union[numpy.dtype, type, str] = None) → numpy.ndarray[source]¶ Encode notes into piano-roll representation.
Parameters: - music (
muspy.Music
) – Music object to encode. - encode_velocity (bool, default: True) – Whether to encode velocities. If True, a binary-valued array will be return. Otherwise, an integer array will be return.
- dtype (np.dtype, type or str, optional) – Data type of the return array. Defaults to uint8 if encode_velocity is True, otherwise bool.
Returns: Encoded array in piano-roll representation.
Return type: ndarray, shape=(?, 128)
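A T x 128 piano-roll matrix as described above can be sketched without NumPy. This is a hypothetical illustration, not the MusPy encoder; the tuple layout and the encode_pianoroll helper are made up for the example.

```python
def encode_pianoroll(notes, num_steps, encode_velocity=True):
    """Encode (time, duration, pitch, velocity) tuples into a
    num_steps x 128 matrix of velocities (or 0/1 if not encoding velocity)."""
    roll = [[0] * 128 for _ in range(num_steps)]
    for time, duration, pitch, velocity in notes:
        for step in range(time, min(time + duration, num_steps)):
            roll[step][pitch] = velocity if encode_velocity else 1
    return roll

# A C4 (pitch 60) note of duration 2 starting at time 0, in 3 time steps.
roll = encode_pianoroll([(0, 2, 60, 64)], 3)
print([row[60] for row in roll])  # [64, 64, 0]
```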
-
muspy.
to_pitch_representation
(music: Music, use_hold_state: bool = False, dtype: Union[numpy.dtype, type, str] = <class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into pitch-based representation.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or, optionally, a hold (129).
Parameters: - music (
muspy.Music
) – Music object to encode. - use_hold_state (bool, default: False) – Whether to use a special state for holds.
- dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns: Encoded array in pitch-based representation.
Return type: ndarray, shape=(?, 1)
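The token stream described above can be sketched directly. This is a hypothetical illustration of the pitch/rest/hold scheme (0-127 pitch, 128 rest, 129 hold), not the MusPy encoder; the encode_monophonic helper is made up for the example.

```python
REST, HOLD = 128, 129

def encode_monophonic(notes, num_steps, use_hold_state=False):
    """Encode (time, duration, pitch) tuples of a monophonic melody
    into one token per time step."""
    tokens = [REST] * num_steps
    for time, duration, pitch in notes:
        for step in range(time, min(time + duration, num_steps)):
            # Without hold tokens, a sustained note just repeats its pitch.
            if use_hold_state and step > time:
                tokens[step] = HOLD
            else:
                tokens[step] = pitch
    return tokens

# C4 for 2 steps, a rest, then D4 for 1 step.
print(encode_monophonic([(0, 2, 60), (3, 1, 62)], 4, use_hold_state=True))
# [60, 129, 128, 62]
```

Without hold tokens, a repeated pitch is indistinguishable from one sustained note, which is why the hold state exists.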
-
muspy.
to_pretty_midi
(music: Music) → pretty_midi.pretty_midi.PrettyMIDI[source]¶ Return a Music object as a PrettyMIDI object.
Tempo changes are not supported yet.
Parameters: music ( muspy.Music
object) – Music object to convert.Returns: Converted PrettyMIDI object. Return type: pretty_midi.PrettyMIDI
Notes
Tempo information will not be included in the output.
-
muspy.
to_pypianoroll
(music: Music) → pypianoroll.multitrack.Multitrack[source]¶ Return a Music object as a Multitrack object.
Parameters: music ( muspy.Music
) – Music object to convert.Returns: multitrack – Converted Multitrack object. Return type: pypianoroll.Multitrack
-
muspy.
to_remi_event_representation
(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]¶ Encode a Music object into the remi event representation.
-
muspy.
to_representation
(music: Music, kind: str, **kwargs) → numpy.ndarray[source]¶ Return a Music object in a specific representation.
Parameters: - music (
muspy.Music
) – Music object to convert. - kind (str, {'pitch', 'piano-roll', 'event', 'note'}) – Target representation.
Returns: array – Converted representation.
Return type: ndarray
-
muspy.
write
(path: Union[str, pathlib.Path], music: Music, kind: str = None, **kwargs)[source]¶ Write a Music object to a MIDI/MusicXML/ABC/audio file.
Parameters: - path (str or Path) – Path to write the file.
- music (
muspy.Music
) – Music object to convert. - kind ({'midi', 'musicxml', 'abc', 'audio'}, optional) – Format to save. Defaults to infer from the extension.
See also
muspy.save()
- Save a Music object losslessly to a JSON or a YAML file.
-
muspy.
write_audio
(path: Union[str, pathlib.Path], music: Music, audio_format: str = None, soundfont_path: Union[str, pathlib.Path] = None, rate: int = 44100, gain: float = None)[source]¶ Write a Music object to an audio file.
Supported formats include WAV, AIFF, FLAC and OGA.
Parameters: - path (str or Path) – Path to write the audio file.
- music (
muspy.Music
) – Music object to write. - audio_format (str, {'wav', 'aiff', 'flac', 'oga'}, optional) – File format to write. Defaults to infer from the extension.
- soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
- rate (int, default: 44100) – Sample rate (in samples per sec).
- gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
-
muspy.
write_midi
(path: Union[str, pathlib.Path], music: Music, backend: str = 'mido', **kwargs)[source]¶ Write a Music object to a MIDI file.
Parameters: - path (str or Path) – Path to write the MIDI file.
- music (
muspy.Music
) – Music object to write. - backend ({'mido', 'pretty_midi'}, default: 'mido') – Backend to use.
See also
write_midi_mido()
- Write a Music object to a MIDI file using mido as backend.
write_midi_pretty_midi()
- Write a Music object to a MIDI file using pretty_midi as backend.
-
muspy.
write_musicxml
(path: Union[str, pathlib.Path], music: Music, compressed: bool = None)[source]¶ Write a Music object to a MusicXML file.
Parameters: - path (str or Path) – Path to write the MusicXML file.
- music (
muspy.Music
) – Music object to write. - compressed (bool, optional) – Whether to write to a compressed MusicXML file. If None, infer from the extension of the filename (‘.xml’ and ‘.musicxml’ for an uncompressed file, ‘.mxl’ for a compressed file).
-
class
muspy.
NoteRepresentationProcessor
(use_start_end: bool = False, encode_velocity: bool = True, dtype: Union[numpy.dtype, type, str] = <class 'int'>, default_velocity: int = 64)[source]¶ Note-based representation processor.
The note-based representation represents music as a sequence of (time, pitch, duration, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as a tuple (0, 60, 4, 64). The output shape is L x D, where L is the number of notes and D is 4 when encode_velocity is True, otherwise 3. The values of the second dimension represent time, pitch, duration and velocity (discarded when encode_velocity is False).
-
use_start_end
¶ Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
Type: bool, default: False
-
default_velocity
¶ Default velocity value to use when decoding if encode_velocity is False.
Type: int, default: 64
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode note-based representation into a Music object.
Parameters: array (ndarray) – Array in note-based representation to decode. Cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_note_representation()
- Return a Music object converted from note-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into note-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in note-based representation. Return type: ndarray (np.uint8) See also
muspy.to_note_representation()
- Convert a Music object into note-based representation.
-
-
class
muspy.
EventRepresentationProcessor
(use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, encode_velocity: bool = False, force_velocity_event: bool = True, max_time_shift: int = 100, velocity_bins: int = 32, default_velocity: int = 64)[source]¶ Event-based representation processor.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
-
use_single_note_off_event
¶ Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
Type: bool, default: False
-
use_end_of_sequence_event
¶ Whether to append an end-of-sequence event to the encoded sequence.
Type: bool, default: False
-
force_velocity_event
¶ Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one).
Type: bool, default: True
-
max_time_shift
¶ Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
Type: int, default: 100
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode event-based representation into a Music object.
Parameters: array (ndarray) – Array in event-based representation to decode. Cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_event_representation()
- Return a Music object converted from event-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into event-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in event-based representation. Return type: ndarray (np.uint16) See also
muspy.to_event_representation()
- Convert a Music object into event-based representation.
-
-
class
muspy.
PianoRollRepresentationProcessor
(encode_velocity: bool = True, default_velocity: int = 64)[source]¶ Piano-roll representation processor.
The piano-roll representation represents music as a time-pitch matrix, where the rows are the time steps and the columns are the pitches. The values indicate the presence of pitches at different time steps. The output shape is T x 128, where T is the number of time steps.
-
encode_velocity
¶ Whether to encode velocities. If True, an integer array of velocities will be returned. Otherwise, a binary-valued array will be returned.
Type: bool, default: True
-
default_velocity
¶ Default velocity value to use when decoding if encode_velocity is False.
Type: int, default: 64
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode piano-roll representation into a Music object.
Parameters: array (ndarray) – Array in piano-roll representation to decode. Cast to integer if not of integer type. If encode_velocity is False, cast to boolean if not of boolean type. Returns: Decoded Music object. Return type:
objectSee also
muspy.from_pianoroll_representation()
- Return a Music object converted from piano-roll representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into piano-roll representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in piano-roll representation. Return type: ndarray (np.uint8) See also
muspy.to_pianoroll_representation()
- Convert a Music object into piano-roll representation.
-
-
class
muspy.
PitchRepresentationProcessor
(use_hold_state: bool = False, default_velocity: int = 64)[source]¶ Pitch-based representation processor.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or, optionally, a hold (129).
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode pitch-based representation into a Music object.
Parameters: array (ndarray) – Array in pitch-based representation to decode. Cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_pitch_representation()
- Return a Music object converted from pitch-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into pitch-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in pitch-based representation. Return type: ndarray (np.uint8) See also
muspy.to_pitch_representation()
- Convert a Music object into pitch-based representation.
-
-
muspy.
validate_json
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the JSON schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
validate_musicxml
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the MusicXML schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
validate_yaml
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the YAML schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
show
(music: Music, kind: str, **kwargs)[source]¶ Show visualization.
Parameters: - music (
muspy.Music
) – Music object to convert. - kind ({'piano-roll', 'score'}) – Target representation.
-
muspy.
show_score
(music: Music, figsize: Tuple[float, float] = None, clef: str = 'treble', clef_octave: int = 0, note_spacing: int = None, font_path: Union[str, pathlib.Path] = None, font_scale: float = None) → muspy.visualization.score.ScorePlotter[source]¶ Show score visualization.
Parameters: - music (
muspy.Music
) – Music object to show. - figsize ((float, float), optional) – Width and height in inches. Defaults to Matplotlib configuration.
- clef ({'treble', 'alto', 'bass'}, default: 'treble') – Clef type.
- clef_octave (int, default: 0) – Clef octave.
- note_spacing (int, default: 4) – Spacing of notes.
- font_path (str or Path, optional) – Path to the music font. Defaults to the path to the downloaded Bravura font.
- font_scale (float, default: 140) – Font scaling factor for finetuning. The default value of 140 is optimized for the default Bravura font.
Returns: A ScorePlotter object that handles the score.
Return type: muspy.ScorePlotter
-
class
muspy.
ScorePlotter
(fig: matplotlib.figure.Figure, ax: matplotlib.axes._axes.Axes, resolution: int, note_spacing: int = None, font_path: Union[str, pathlib.Path] = None, font_scale: float = None)[source]¶ A plotter that handles the score visualization.
-
fig
¶ Figure object to plot the score on.
Type: matplotlib.figure.Figure
-
axes
¶ Axes object to plot the score on.
Type: matplotlib.axes.Axes
-
font_path
¶ Path to the music font. Defaults to the path to the downloaded Bravura font.
Type: str or Path, optional
-
font_scale
¶ Font scaling factor for finetuning. The default value of 140 is optimized for the default Bravura font.
Type: float, default: 140
-
plot_key_signature
(root: int, mode: str)[source]¶ Plot a key signature. Supports only major and minor keys.
-
plot_note
(time, duration, pitch) → Optional[Tuple[List[matplotlib.text.Text], List[matplotlib.patches.Arc]]][source]¶ Plot a note.
-
plot_staffs
(start: float = None, end: float = None) → List[matplotlib.lines.Line2D][source]¶ Plot the staffs.
-