MusPy
A toolkit for symbolic music generation.
MusPy is an open-source Python library for symbolic music generation. It provides essential tools for developing a music generation system, including dataset management, data I/O, data preprocessing and model evaluation.
Features
- Dataset management system for commonly used datasets with interfaces to PyTorch and TensorFlow.
- Data I/O for common symbolic music formats (e.g., MIDI, MusicXML and ABC) and interfaces to other symbolic music libraries (e.g., music21, mido, pretty_midi and Pypianoroll).
- Implementations of common music representations for music generation, including pitch-based, event-based, piano-roll and note-based representations.
- Model evaluation tools for music generation systems, including audio rendering, score and piano-roll visualizations and objective metrics.
class muspy.Base(**kwargs)
The base class for MusPy classes.
It provides two handy I/O methods, from_dict and to_ordered_dict, an intuitive __repr__, and the methods pretty_str and print for beautifully printing the content.
Hint
To implement a new class in MusPy, please inherit from this class and set the following class variables properly.
- _attributes: An OrderedDict with attribute names as keys and their types as values.
- _optional_attributes: A list of optional attribute names.
- _list_attributes: A list of attributes that are lists.
- _sort_attributes: A list of attributes used when being sorted, which will be passed to operator.attrgetter.
Take muspy.Note for example:

```python
_attributes = OrderedDict(
    [
        ("time", int),
        ("duration", int),
        ("pitch", int),
        ("velocity", int),
        ("pitch_str", str),
    ]
)
_optional_attributes = ["pitch_str"]
_sort_attributes = ["time", "duration", "pitch"]
```
See also
muspy.ComplexBase
- A base class that supports advanced operations on list attributes.
adjust_time(func: Callable[[int], int], attr: Optional[str] = None) → BaseType
Adjust the timing of time-stamped objects.
This will apply recursively to an attribute's attributes.
Parameters:
- func (callable) – The function used to compute the new timing from the old timing, i.e., new_time = func(old_time).
- attr (str, optional) – Attribute to adjust. If None, adjust all attributes. Defaults to None.
Returns: Object itself.
classmethod from_dict(dict_: Mapping) → BaseType
Return an instance constructed from a dictionary.
Instantiate an object whose attributes and the corresponding values are given as a dictionary.
Parameters: dict_ (dict or mapping) – A dictionary that stores the attributes and their values as key-value pairs, e.g., {"attr1": value1, "attr2": value2}.
Returns: Constructed object.
is_valid(attr: Optional[str] = None) → bool
Return True if an attribute is valid, otherwise False.
This will apply recursively to an attribute's attributes.
Parameters: attr (str, optional) – Attribute to validate. Defaults to validate all attributes.
Returns: bool – Whether the attribute has a valid type and value.
See also
muspy.Base.validate() – Raise an error if a certain attribute has an invalid type or value.
muspy.Base.is_valid_type() – Return True if an attribute has a valid type, otherwise False.

is_valid_type(attr: Optional[str] = None) → bool
Return True if an attribute has a valid type, otherwise False.
This will apply recursively to an attribute's attributes.
Parameters: attr (str, optional) – Attribute to validate. Defaults to validate all attributes.
Returns: bool – Whether the attribute has a valid type.
See also
muspy.Base.validate_type() – Raise an error if a certain attribute has an invalid type.
muspy.Base.is_valid() – Return True if an attribute is valid, otherwise False.
pretty_str() → str
Return the stored data as a string in a beautiful YAML-like format.
Returns: str – Stored data as a string in a pretty YAML-like format.
See also
muspy.Base.print() – Print the stored data in a beautiful YAML-like format.

print()
Print the stored data in a beautiful YAML-like format.
See also
muspy.Base.pretty_str() – Return the stored data as a string in a beautiful YAML-like format.

to_ordered_dict(skip_none: bool = True) → collections.OrderedDict
Return the object as an OrderedDict.
Return an ordered dictionary that stores the attributes and their values as key-value pairs.
Parameters: skip_none (bool) – Whether to skip attributes with value None or that are empty lists.
Returns: OrderedDict – A dictionary that stores the attributes and their values as key-value pairs, e.g., {"attr1": value1, "attr2": value2}.
validate(attr: Optional[str] = None) → BaseType
Raise an error if a certain attribute has an invalid type or value.
This will apply recursively to an attribute's attributes.
Parameters: attr (str, optional) – Attribute to validate. Defaults to validate all attributes.
Returns: Object itself.
See also
muspy.Base.is_valid() – Return True if an attribute is valid, otherwise False.
muspy.Base.validate_type() – Raise an error if a certain attribute has an invalid type.

validate_type(attr: Optional[str] = None) → BaseType
Raise an error if a certain attribute has an invalid type.
This will apply recursively to an attribute's attributes.
Parameters: attr (str, optional) – Attribute to validate. Defaults to validate all attributes.
Returns: Object itself.
See also
muspy.Base.is_valid_type() – Return True if an attribute has a valid type, otherwise False.
muspy.Base.validate() – Raise an error if a certain attribute has an invalid type or value.
class muspy.ComplexBase(**kwargs)
A base class that supports advanced operations on list attributes.
This class extends the Base class with advanced operations on list attributes, including append, remove_invalid, remove_duplicate and sort.
See also
muspy.Base – The base class for MusPy classes.
append(obj) → ComplexBaseType
Append an object to the corresponding list.
This will automatically determine the list attribute to append to based on the type of the object.
Parameters: obj – Object to append.

remove_duplicate(attr: Optional[str] = None, recursive: bool = True) → ComplexBaseType
Remove duplicate items.
Parameters:
- attr (str, optional) – Attribute to process. Defaults to process all list attributes.
- recursive (bool, optional) – Whether to apply recursively. Defaults to True.
Returns: Object itself.

remove_invalid(attr: Optional[str] = None, recursive: bool = True) → ComplexBaseType
Remove invalid items from list attributes, leaving other attributes unchanged.
Parameters:
- attr (str, optional) – Attribute to process. Defaults to process all list attributes.
- recursive (bool, optional) – Whether to apply recursively. Defaults to True.
Returns: Object itself.
class muspy.Annotation(time: int, annotation: Any, group: Optional[str] = None)
A container for annotations.
annotation – Annotation of any type.
Type: any object
class muspy.Chord(time: int, duration: int, pitches: List[int], velocity: Optional[int] = None, pitches_str: Optional[List[str]] = None)
A container for chords.
pitches – Note pitches, as MIDI note numbers.
Type: list of int
pitches_str – Note pitches as strings, useful for distinguishing, e.g., C# and Db.
Type: list of str
clip(lower: int = 0, upper: int = 127) → muspy.Chord
Clip the velocity of the chord.
Parameters:
- lower (int, optional) – Lower bound. Defaults to 0.
- upper (int, optional) – Upper bound. Defaults to 127.
Returns: Object itself.
end – End time of the chord.
start – Start time of the chord.
class muspy.KeySignature(time: int, root: Optional[int] = None, mode: Optional[str] = None, fifths: Optional[int] = None, root_str: Optional[str] = None)
A container for key signatures.
class muspy.Metadata(schema_version: str = '0.0', title: Optional[str] = None, creators: Optional[List[str]] = None, copyright: Optional[str] = None, collection: Optional[str] = None, source_filename: Optional[str] = None, source_format: Optional[str] = None)
A container for metadata.
creators – Creator(s) of the song.
Type: list of str, optional
class muspy.Note(time: int, duration: int, pitch: int, velocity: Optional[int] = None, pitch_str: Optional[str] = None)
A container for notes.
clip(lower: int = 0, upper: int = 127) → muspy.Note
Clip the velocity of the note.
Parameters:
- lower (int, optional) – Lower bound. Defaults to 0.
- upper (int, optional) – Upper bound. Defaults to 127.
Returns: Object itself.
end – End time of the note.
start – Start time of the note.
class muspy.TimeSignature(time: int, numerator: int, denominator: int)
A container for time signatures.
class muspy.Track(program: int = 0, is_drum: bool = False, name: Optional[str] = None, notes: Optional[List[muspy.Note]] = None, chords: Optional[List[muspy.Chord]] = None, lyrics: Optional[List[muspy.Lyric]] = None, annotations: Optional[List[muspy.Annotation]] = None)
A container for music tracks.
program – Program number according to the General MIDI specification [1]. Defaults to 0 (Acoustic Grand Piano).
Type: int, 0-127, optional
notes – Musical notes. Defaults to an empty list.
Type: list of muspy.Note objects, optional
chords – Chords. Defaults to an empty list.
Type: list of muspy.Chord objects, optional
annotations – Annotations. Defaults to an empty list.
Type: list of muspy.Annotation objects, optional
lyrics – Lyrics. Defaults to an empty list.
Type: list of muspy.Lyric objects, optional
Tip
Indexing a Track object gives the note at a certain index; that is, track[idx] is equivalent to track.notes[idx], though the latter is recommended for readability. The length of a Track object is its number of notes; that is, len(track) is equivalent to len(track.notes).
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
clip(lower: int = 0, upper: int = 127) → muspy.Track
Clip the velocity of each note.
Parameters:
- lower (int, optional) – Lower bound. Defaults to 0.
- upper (int, optional) – Upper bound. Defaults to 127.
Returns: Object itself.

get_end_time(is_sorted: bool = False) → int
Return the time of the last event.
This includes notes, chords, lyrics and annotations.
Parameters: is_sorted (bool) – Whether all the list attributes are sorted. Defaults to False.
muspy.adjust_resolution(music: muspy.Music, target: Optional[int] = None, factor: Optional[float] = None) → muspy.Music
Adjust the resolution and update the timing of time-stamped objects.
Parameters:
- music (muspy.Music object) – MusPy music object to adjust.
- target (int, optional) – Target resolution.
- factor (int or float, optional) – Factor used to adjust the resolution based on the formula new_resolution = old_resolution * factor. For example, a factor of 2 doubles the resolution, and a factor of 0.5 halves it.
muspy.adjust_time(obj: muspy.Base, func: Callable[[int], int]) → muspy.Base
Adjust the timing of time-stamped objects.
Parameters:
- obj (muspy.Music or muspy.Track object) – Object to adjust.
- func (callable) – The function used to compute the new timing from the old timing, i.e., new_time = func(old_time).
See also
muspy.adjust_resolution() – Adjust the resolution and the timing of time-stamped objects.
Note
The resolution is left unchanged.
muspy.append(obj1: muspy.ComplexBase, obj2) → muspy.ComplexBase
Append an object to the corresponding list.
- If obj1 is of type muspy.Music, obj2 can be muspy.KeySignature, muspy.TimeSignature, muspy.Lyric, muspy.Annotation or muspy.Track.
- If obj1 is of type muspy.Track, obj2 can be muspy.Note, muspy.Lyric or muspy.Annotation.
- If obj1 is of type muspy.Timing, obj2 can be muspy.Tempo.
Parameters:
- obj1 (muspy.Music, muspy.Track or muspy.Timing object) – Object to which obj2 is appended.
- obj2 (MusPy object, see above) – Object to be appended to obj1.
muspy.clip(obj: Union[muspy.Music, muspy.Track, muspy.Note], lower: int = 0, upper: int = 127) → Union[muspy.Music, muspy.Track, muspy.Note]
Clip the velocity of each note.
Parameters:
- obj (muspy.Music, muspy.Track or muspy.Note object) – Object to clip.
- lower (int or float, optional) – Lower bound. Defaults to 0.
- upper (int or float, optional) – Upper bound. Defaults to 127.
muspy.get_end_time(obj: Union[muspy.Music, muspy.Track], is_sorted: bool = False) → int
Return the end time, i.e., the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
Parameters:
- obj (muspy.Music or muspy.Track object) – Object to inspect.
- is_sorted (bool) – Whether all the list attributes are sorted. Defaults to False.
muspy.get_real_end_time(music: muspy.Music, is_sorted: bool = False) → float
Return the end time in real time.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. A tempo of 120 qpm (quarter notes per minute) is assumed if no tempo information is available.
Parameters:
- music (muspy.Music object) – Object to inspect.
- is_sorted (bool) – Whether all the list attributes are sorted. Defaults to False.
muspy.remove_duplicate(obj: muspy.ComplexBase) → muspy.ComplexBase
Remove duplicate change events.
Parameters: obj (muspy.Music object) – Object to process.
muspy.sort(obj: muspy.ComplexBase) → muspy.ComplexBase
Sort all the time-stamped objects with respect to event time.
- If a muspy.Music object is given, this will sort its key signatures, time signatures, lyrics and annotations, along with the notes, lyrics and annotations of each track.
- If a muspy.Track object is given, this will sort its notes, lyrics and annotations.
Parameters: obj (muspy.ComplexBase object) – Object to sort.
muspy.to_ordered_dict(obj: muspy.Base, ignore_null: bool = True) → collections.OrderedDict
Return an OrderedDict converted from a Music object.
Parameters: obj (muspy.Base object) – MusPy object to convert.
Returns: OrderedDict – Converted OrderedDict.
muspy.transpose(obj: Union[muspy.Music, muspy.Track, muspy.Note], semitone: int) → Union[muspy.Music, muspy.Track, muspy.Note]
Transpose all the notes by a number of semitones.
Parameters:
- obj (muspy.Music, muspy.Track or muspy.Note object) – Object to transpose.
- semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers them.
class muspy.ABCFolderDataset(root: Union[str, pathlib.Path], convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
A class of local datasets containing ABC files in a folder.
class muspy.Dataset
The base class for all MusPy datasets.
To build a custom dataset, inherit from this class and override the methods __getitem__ and __len__ as well as the class attribute _info. __getitem__ should return the i-th data sample as a muspy.Music object. __len__ should return the size of the dataset. _info should be a muspy.DatasetInfo instance containing the dataset information.

save(root: Union[str, pathlib.Path], kind: Optional[str] = 'json', n_jobs: int = 1, ignore_exceptions: bool = True)
Save all the music objects to a directory.
The converted files will be named by their indices and saved to root/.
Parameters:
- root (str or Path) – Root directory to save the data.
- kind ({'json', 'yaml'}, optional) – File format to save the data. Defaults to 'json'.
- n_jobs (int, optional) – Maximum number of concurrently running jobs in multiprocessing. If equal to 1, disable multiprocessing. Defaults to 1.
- ignore_exceptions (bool, optional) – Whether to ignore errors and skip failed conversions. This can be helpful if some of the source files are known to be corrupted. Defaults to True.
Notes
The original filenames can be found in the filenames attribute. For example, the file at filenames[i] will be converted and saved to {i}.json.
split(filename: Union[str, pathlib.Path, None] = None, splits: Optional[Sequence[float]] = None, random_state: Any = None) → Dict[str, List[int]]
Split the dataset and return the sample indices of each subset.
Parameters:
- filename (str or Path, optional) – If given and the file exists, path to the file to read the split from. If None or the file does not exist, path to save the split.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If a float or a list of two floats, return train and test splits. If a list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If an int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState, it will be used directly.
to_pytorch_dataset(factory: Optional[Callable] = None, representation: Optional[str] = None, split_filename: Union[str, pathlib.Path, None] = None, splits: Optional[Sequence[float]] = None, random_state: Any = None, **kwargs) → Union[TorchDataset, Dict[str, TorchDataset]]
Return the dataset as a PyTorch dataset.
Parameters:
- factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
- representation ({'pitch', 'piano-roll', 'event', 'note'}, optional) – Target representation.
- split_filename (str or Path, optional) – If given and the file exists, path to the file to read the split from. If None or the file does not exist, path to save the split.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If a float or a list of two floats, return train and test splits. If a list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If an int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState, it will be used directly.
Returns: torch.utils.data.Dataset or dict of torch.utils.data.Dataset – Converted PyTorch dataset(s).
to_tensorflow_dataset(factory: Optional[Callable] = None, representation: Optional[str] = None, split_filename: Union[str, pathlib.Path, None] = None, splits: Optional[Sequence[float]] = None, random_state: Any = None, **kwargs) → Union[TFDataset, Dict[str, TFDataset]]
Return the dataset as a TensorFlow dataset.
Parameters:
- factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
- representation ({'pitch', 'piano-roll', 'event', 'note'}, optional) – Target representation.
- split_filename (str or Path, optional) – If given and the file exists, path to the file to read the split from. If None or the file does not exist, path to save the split.
- splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If a float or a list of two floats, return train and test splits. If a list of three floats, return train, test and validation splits.
- random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If an int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState, it will be used directly.
Returns: tensorflow.data.Dataset or dict of tensorflow.data.Dataset – Converted TensorFlow dataset(s).
class muspy.DatasetInfo(name: Optional[str] = None, description: Optional[str] = None, homepage: Optional[str] = None, license: Optional[str] = None)
A container for dataset information.
class muspy.EssenFolkSongDatabase(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Essen Folk Song Database.
class muspy.FolderDataset(root: Union[str, pathlib.Path], convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
A class of datasets containing files in a folder.
Two modes are available for this dataset. When the on-the-fly mode is enabled, a data sample is converted to a music object on the fly when being indexed. When the on-the-fly mode is disabled, a data sample is loaded from the precomputed converted data.
Parameters:
- convert (bool, optional) – Whether to convert the dataset to MusPy JSON/YAML files. If False, will check if converted data exists. If so, disable on-the-fly mode. If not, enable on-the-fly mode and issue a warning. Defaults to False.
- kind ({'json', 'yaml'}, optional) – File format to save the data. Defaults to 'json'.
- n_jobs (int, optional) – Maximum number of concurrently running jobs in multiprocessing. If equal to 1, disable multiprocessing. Defaults to 1.
- ignore_exceptions (bool, optional) – Whether to ignore errors and skip failed conversions. This can be helpful if some of the source files are known to be corrupted. Defaults to True.
- use_converted (bool, optional) – Force disabling the on-the-fly mode and use the stored converted data.
Important
muspy.FolderDataset.converted_exists() depends solely on a special file named .muspy.success in the folder {root}/_converted/, which serves as an indicator for the existence and integrity of the converted dataset. If the converted dataset is built by muspy.FolderDataset.convert(), the .muspy.success file will be created as well. If the converted dataset is created manually, make sure to create the .muspy.success file in the folder {root}/_converted/ to prevent errors.
Notes
This class is extended from muspy.Dataset. To build a custom dataset based on this class, please refer to muspy.Dataset for the documentation of the methods __getitem__ and __len__, and the class attribute _info.
In addition, the attribute _extension and the method read should be properly set. _extension is the file extension to look for when building the dataset; all files with the given extension will be included as source files. read is a callable that takes the filename of a source file as input and returns the converted Music object.
See also
muspy.Dataset – The base class for all MusPy datasets.
convert(kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True) → FolderDatasetType
Convert and save the Music objects.
The converted files will be named by their indices and saved to root/_converted. The original filenames can be found in the filenames attribute. For example, the file at filenames[i] will be converted and saved to {i}.json.
Parameters:
- kind ({'json', 'yaml'}, optional) – File format to save the data. Defaults to 'json'.
- n_jobs (int, optional) – Maximum number of concurrently running jobs in multiprocessing. If equal to 1, disable multiprocessing. Defaults to 1.
- ignore_exceptions (bool, optional) – Whether to ignore errors and skip failed conversions. This can be helpful if some of the source files are known to be corrupted. Defaults to True.
Returns: Object itself.

converted_dir
Return the path to the root directory of the converted dataset.

load(filename: Union[str, pathlib.Path]) → muspy.Music
Read a file into a Music object.
class muspy.HymnalDataset(root: Union[str, pathlib.Path], download: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Hymnal Dataset.

class muspy.HymnalTuneDataset(root: Union[str, pathlib.Path], download: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Hymnal Dataset (tune only).
class muspy.JSBChoralesDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Johann Sebastian Bach Chorales Dataset.

class muspy.LakhMIDIAlignedDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Lakh MIDI Dataset - aligned subset.

class muspy.LakhMIDIDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Lakh MIDI Dataset.

class muspy.LakhMIDIMatchedDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Lakh MIDI Dataset - matched subset.

class muspy.MAESTRODatasetV1(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
MAESTRO Dataset V1 (MIDI only).

class muspy.MAESTRODatasetV2(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
MAESTRO Dataset V2 (MIDI only).
class muspy.Music21Dataset(composer: Optional[str] = None)
A class of datasets containing files in the music21 corpus.
Parameters:
- composer (str) – Name of a composer or a collection.
- extensions (list of str) – File extensions of desired files.
Notes
Please refer to the music21 corpus reference page for a full list [1].
[1] https://web.mit.edu/music21/doc/about/referenceCorpus.html
convert(root: Union[str, pathlib.Path], kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True) → muspy.MusicDataset
Convert and save the Music objects; return a MusicDataset instance.
Parameters:
- root (str or Path) – Root directory to save the data.
- kind ({'json', 'yaml'}, optional) – File format to save the data. Defaults to 'json'.
- n_jobs (int, optional) – Maximum number of concurrently running jobs in multiprocessing. If equal to 1, disable multiprocessing. Defaults to 1.
- ignore_exceptions (bool, optional) – Whether to ignore errors and skip failed conversions. This can be helpful if some of the source files are known to be corrupted. Defaults to True.
class muspy.MusicDataset(root: Union[str, pathlib.Path], kind: str = 'json')
A local dataset containing MusPy JSON/YAML files in a folder.
kind – File format of the data. Defaults to 'json'.
Type: {'json', 'yaml'}, optional
class muspy.NESMusicDatabase(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
NES Music Database.

class muspy.NottinghamDatabase(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
Nottingham Database.
class muspy.RemoteABCFolderDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
A class of remote datasets containing ABC files in a folder.
class muspy.RemoteDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False)
Base class for remote MusPy datasets.
This class is extended from muspy.Dataset to support remote datasets. To build a custom dataset based on this class, please refer to muspy.Dataset for the documentation of the methods __getitem__ and __len__, and the class attribute _info. In addition, the class attribute _sources, containing the URLs to the source files, should be properly set (see Notes).
Parameters:
- root (str or Path) – Root directory of the dataset.
- download_and_extract (bool, optional) – Whether to download and extract the dataset. Defaults to False.
- cleanup (bool, optional) – Whether to remove the original archive(s). Defaults to False.
Raises: RuntimeError – If download_and_extract is False but the file {root}/.muspy.success does not exist (see below).
Important
muspy.Dataset.exists() depends solely on a special file named .muspy.success in the folder {root}/, which serves as an indicator for the existence and integrity of the dataset. This file will automatically be created if the dataset is successfully downloaded and extracted by muspy.Dataset.download_and_extract().
If the dataset is downloaded manually, make sure to create the .muspy.success file in the folder {root}/ to prevent errors.
Notes
The class attribute _sources is a dictionary containing the following information for each source file.
- filename (str): Name to save the file.
- url (str): URL to the file.
- archive (bool): Whether the file is an archive.
- md5 (str, optional): Expected MD5 checksum of the file.
Here is an example:

```python
_sources = {
    "example": {
        "filename": "example.tar.gz",
        "url": "https://www.example.com/example.tar.gz",
        "archive": True,
        "md5": None,
    }
}
```

See also
muspy.Dataset – The base class for all MusPy datasets.

download() → RemoteDatasetType
Download the source datasets.
Returns: Object itself.

download_and_extract(cleanup: bool = False) → RemoteDatasetType
Download and extract the source datasets.
This is equivalent to RemoteDataset.download().extract(cleanup).
Parameters: cleanup (bool, optional) – Whether to remove the original archive(s). Defaults to False.
Returns: Object itself.
class muspy.RemoteFolderDataset(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)
A class of remote datasets containing files in a folder.
This class extends muspy.RemoteDataset and muspy.FolderDataset. Please refer to their documentation for details.
Parameters:
- download_and_extract (bool, optional) – Whether to download and extract the dataset. Defaults to False.
- cleanup (bool, optional) – Whether to remove the original archive(s). Defaults to False.
- convert (bool, optional) – Whether to convert the dataset to MusPy JSON/YAML files. If False, will check if converted data exists. If so, disable on-the-fly mode. If not, enable on-the-fly mode and issue a warning. Defaults to False.
- kind ({'json', 'yaml'}, optional) – File format to save the data. Defaults to 'json'.
- n_jobs (int, optional) – Maximum number of concurrently running jobs in multiprocessing. If equal to 1, disable multiprocessing. Defaults to 1.
- ignore_exceptions (bool, optional) – Whether to ignore errors and skip failed conversions. This can be helpful if some of the source files are known to be corrupted. Defaults to True.
- use_converted (bool, optional) – Force disabling the on-the-fly mode and use the stored converted data.
See also
muspy.RemoteDataset – Base class for remote MusPy datasets.
muspy.FolderDataset – A class of datasets containing files in a folder.
-
class
muspy.
RemoteMusicDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, kind: str = 'json')[source]¶ A dataset containing MusPy JSON/YAML files in a folder.
This class extends
muspy.RemoteDataset
andmuspy.FolderDataset
. Please refer to their documentation for details.-
kind
¶ File format of the data. Defaults to ‘json’.
Type: {‘json’, ‘yaml’}, optional
Parameters: -
-
class
muspy.
WikifoniaDataset
(root: Union[str, pathlib.Path], download_and_extract: bool = False, cleanup: bool = False, convert: bool = False, kind: str = 'json', n_jobs: int = 1, ignore_exceptions: bool = True, use_converted: Optional[bool] = None)[source]¶ Wikifonia dataset.
-
muspy.
get_dataset
(key: str) → Type[muspy.datasets.base.Dataset][source]¶ Return a certain dataset class by key.
Parameters: key (str) – Dataset key (case-insensitive). Returns: Return type: The corresponding dataset class.
-
muspy.
list_datasets
()[source]¶ Return all supported dataset classes as a list.
Returns: Return type: A list of all supported dataset classes.
-
muspy.
get_bravura_font_dir
() → pathlib.Path[source]¶ Return path to the directory of the Bravura font.
-
muspy.
get_musescore_soundfont_dir
() → pathlib.Path[source]¶ Return path to the directory of the MuseScore General soundfont.
-
muspy.
get_musescore_soundfont_path
() → pathlib.Path[source]¶ Return path to the MuseScore General soundfont.
-
muspy.
from_event_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, max_time_shift: int = 100, velocity_bins: int = 32, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode event-based representation into a Music object.
Parameters: - array (ndarray) – Array in event-based representation to decode. Will be cast to integer if not of integer type.
- resolution (int) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
- program (int, optional) – Program number according to General MIDI specification [1]. Acceptable values are 0 to 127. Defaults to 0 (Acoustic Grand Piano).
- is_drum (bool, optional) – A boolean indicating if it is a percussion track. Defaults to False.
- use_single_note_off_event (bool) – Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music. Defaults to False.
- use_end_of_sequence_event (bool) – Whether to append an end-of-sequence event to the encoded sequence. Defaults to False.
- max_time_shift (int) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events. Defaults to 100.
- velocity_bins (int) – Number of velocity bins to use. Defaults to 32.
- default_velocity (int) – Default velocity value to use when decoding. Defaults to 64.
Returns: Decoded Music object.
Return type: muspy.Music
objectReferences
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
-
muspy.
from_mido
(midi: mido.midifiles.midifiles.MidiFile, duplicate_note_mode: str = 'fifo') → muspy.music.Music[source]¶ Return a Music object converted from a mido MidiFile object.
Parameters: - midi (
mido.MidiFile
object) – MidiFile object to convert. - duplicate_note_mode ({'fifo', 'lifo', 'close_all'}) –
Policy for dealing with duplicate notes. When a note-off message is presented while there are multiple corresponding note-on messages that have not yet been closed, we need a policy to decide which note-on messages to close. Defaults to 'fifo'.
- ’fifo’ (first in first out): close the earliest note on
- ’lifo’ (last in first out): close the latest note on
- ’close_all’: close all note on messages
Returns: Converted Music object.
Return type: muspy.Music
object- midi (
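The three policies can be illustrated with a toy sketch in plain Python (a simplified model of the bookkeeping, not MusPy's implementation; `close_note` is a hypothetical helper):

```python
def close_note(open_notes, mode):
    """Pick which open note-on(s) a note-off closes, per policy.

    open_notes is a list of onset times for one pitch, oldest first.
    Returns the onset times closed and mutates the list in place.
    """
    if mode == "fifo":       # first in first out: close the earliest note on
        return [open_notes.pop(0)]
    if mode == "lifo":       # last in first out: close the latest note on
        return [open_notes.pop()]
    if mode == "close_all":  # close every open note on
        closed, open_notes[:] = open_notes[:], []
        return closed
    raise ValueError(f"unknown mode: {mode}")
```

For example, with open note-ons at times 0, 4 and 8, 'fifo' closes the note started at 0, 'lifo' the one started at 8, and 'close_all' all three.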
-
muspy.
from_music21
(stream: music21.stream.Stream, resolution=24) → Union[muspy.music.Music, List[muspy.music.Music], muspy.classes.Track, List[muspy.classes.Track]][source]¶ Return a Music object converted from a music21 Stream object.
Parameters: - stream (music21.stream.Stream object) – Stream object to convert.
- resolution (int, optional) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
Returns: Converted Music object(s) or Track object(s).
Return type: muspy.Music
object(s) ormuspy.Track
object(s)
-
muspy.
from_music21_opus
(opus: music21.stream.Opus, resolution=24) → List[muspy.music.Music][source]¶ Return a list of Music objects converted from a music21 Opus object.
Parameters: - opus (music21.stream.Opus object) – Opus object to convert.
- resolution (int, optional) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
Returns: Converted MusPy Music objects.
Return type: muspy.Music
object
-
muspy.
from_note_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_start_end: bool = False, encode_velocity: bool = True, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode note-based representation into a Music object.
Parameters: - array (ndarray) – Array in note-based representation to decode. Will be cast to integer if not of integer type.
- resolution (int) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
- program (int, optional) – Program number according to General MIDI specification [1]. Acceptable values are 0 to 127. Defaults to 0 (Acoustic Grand Piano).
- is_drum (bool, optional) – A boolean indicating if it is a percussion track. Defaults to False.
- use_start_end (bool) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’. Defaults to False.
- encode_velocity (bool) – Whether to encode note velocities. Defaults to True.
- default_velocity (int) – Default velocity value to use when decoding if encode_velocity is False. Defaults to 64.
Returns: Decoded Music object.
Return type: muspy.Music
objectReferences
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
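The decoding logic can be sketched in plain Python (a simplified model, not MusPy's implementation; it assumes rows ordered as (time, duration, pitch[, velocity]), or (start, end, pitch[, velocity]) when use_start_end is True, and returns plain dictionaries instead of Note objects):

```python
def decode_notes(array, use_start_end=False, encode_velocity=True,
                 default_velocity=64):
    """Toy decoder for the note-based representation.

    Each row holds (time, duration, pitch[, velocity]); with
    use_start_end=True the first two columns are (start, end) instead.
    """
    notes = []
    for row in array:
        a, b, pitch = int(row[0]), int(row[1]), int(row[2])
        # Convert (start, end) to (time, duration) when needed.
        time, duration = (a, b - a) if use_start_end else (a, b)
        velocity = int(row[3]) if encode_velocity else default_velocity
        notes.append({"time": time, "duration": duration,
                      "pitch": pitch, "velocity": velocity})
    return notes
```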
-
muspy.
from_object
(obj: Union[music21.stream.Stream, mido.midifiles.midifiles.MidiFile, pretty_midi.pretty_midi.PrettyMIDI, pypianoroll.multitrack.Multitrack], **kwargs) → Union[muspy.music.Music, List[muspy.music.Music], muspy.classes.Track, List[muspy.classes.Track]][source]¶ Return a Music object converted from an outside object.
Parameters: obj – Object to convert. Supported objects are music21.Stream, mido.MidiFile
,pretty_midi.PrettyMIDI
, andpypianoroll.Multitrack
objects.Returns: music – Converted Music object. Return type: muspy.Music
object
-
muspy.
from_pianoroll_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, encode_velocity: bool = True, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode piano-roll representation into a Music object.
Parameters: - array (ndarray) – Array in piano-roll representation to decode. Will be cast to integer if not of integer type. If encode_velocity is False, will be cast to boolean if not of boolean type.
- resolution (int) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
- program (int, optional) – Program number according to General MIDI specification [1]. Acceptable values are 0 to 127. Defaults to 0 (Acoustic Grand Piano).
- is_drum (bool, optional) – A boolean indicating if it is a percussion track. Defaults to False.
- encode_velocity (bool) – Whether to encode velocities. Defaults to True.
- default_velocity (int) – Default velocity value to use when decoding. Defaults to 64.
Returns: Decoded Music object.
Return type: muspy.Music
objectReferences
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
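The core of such a decoder can be sketched with NumPy (a simplified model assuming a boolean piano roll of shape time x 128, not MusPy's implementation):

```python
import numpy as np

def pianoroll_to_notes(pianoroll, default_velocity=64):
    """Toy decoder: boolean piano roll (time x 128) -> note dicts.

    A run of consecutive active steps on one pitch becomes one note.
    """
    notes = []
    # Pad with a silent step on both ends so onsets/offsets at the
    # boundaries are detected by the difference below.
    padded = np.pad(pianoroll.astype(bool), ((1, 1), (0, 0)))
    diff = np.diff(padded.astype(np.int8), axis=0)
    for pitch in range(pianoroll.shape[1]):
        onsets = np.flatnonzero(diff[:, pitch] == 1)
        offsets = np.flatnonzero(diff[:, pitch] == -1)
        for on, off in zip(onsets, offsets):
            notes.append({"time": int(on), "duration": int(off - on),
                          "pitch": pitch, "velocity": default_velocity})
    return sorted(notes, key=lambda n: (n["time"], n["pitch"]))
```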
-
muspy.
from_pitch_representation
(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_hold_state: bool = False, default_velocity: int = 64) → muspy.music.Music[source]¶ Decode pitch-based representation into a Music object.
Parameters: - array (ndarray) – Array in pitch-based representation to decode. Will be cast to integer if not of integer type.
- resolution (int) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
- program (int, optional) – Program number according to General MIDI specification [1]. Acceptable values are 0 to 127. Defaults to 0 (Acoustic Grand Piano).
- is_drum (bool, optional) – A boolean indicating if it is a percussion track. Defaults to False.
- use_hold_state (bool) – Whether to use a special state for holds. Defaults to False.
- default_velocity (int) – Default velocity value to use when decoding. Defaults to 64.
Returns: Decoded Music object.
Return type: muspy.Music
objectReferences
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
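A minimal sketch of the decoding idea, in plain Python (not MusPy's implementation; the token vocabulary is an assumption for illustration: 0-127 for pitches, 128 for rests, 129 for holds):

```python
def decode_pitch_sequence(seq, use_hold_state=False, default_velocity=64):
    """Toy decoder for a monophonic pitch-based sequence.

    With use_hold_state, a 129 token extends the previous note; without
    it, repeated identical pitches are merged into one longer note.
    """
    notes, current = [], None  # current = [time, duration, pitch]
    for step, value in enumerate(seq):
        hold = (use_hold_state and value == 129) or (
            not use_hold_state and current and value == current[2])
        if hold:
            current[1] += 1          # extend the active note
            continue
        if current:                  # close the active note
            notes.append({"time": current[0], "duration": current[1],
                          "pitch": current[2], "velocity": default_velocity})
            current = None
        if value < 128:              # a new note onset (128 is a rest)
            current = [step, 1, value]
    if current:
        notes.append({"time": current[0], "duration": current[1],
                      "pitch": current[2], "velocity": default_velocity})
    return notes
```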
-
muspy.
from_pretty_midi
(midi: pretty_midi.pretty_midi.PrettyMIDI) → muspy.music.Music[source]¶ Return a Music object converted from a pretty_midi PrettyMIDI object.
Parameters: midi ( pretty_midi.PrettyMIDI
object) – PrettyMIDI object to convert.Returns: Converted Music object. Return type: muspy.Music
object
-
muspy.
from_pypianoroll
(multitrack: pypianoroll.multitrack.Multitrack, default_velocity: int = 64) → muspy.music.Music[source]¶ Return a Music object converted from a Pypianoroll Multitrack object.
Parameters: - multitrack (
pypianoroll.Multitrack
object) – Multitrack object to convert. - default_velocity (int) – Default velocity value to use when decoding. Defaults to 64.
Returns: music – Converted MusPy Music object.
Return type: muspy.Music
object- multitrack (
-
muspy.
from_representation
(array: numpy.ndarray, kind: str, **kwargs) → muspy.music.Music[source]¶ Decode an array in a supported representation into a Music object.
Parameters: - array (
numpy.ndarray
) – Array in a supported representation. - kind (str, {'pitch', 'pianoroll', 'event', 'note'}) – Data representation type (case-insensitive).
Returns: music – Converted Music object.
Return type: muspy.Music
object- array (
-
muspy.
load
(path: Union[str, pathlib.Path], kind: Optional[str] = None, **kwargs) → muspy.music.Music[source]¶ Return a Music object loaded from a JSON or a YAML file.
Parameters: - path (str or Path) – Path to the file to load.
- kind ({'json', 'yaml'}, optional) – Format of the file to load (case-insensitive). Defaults to inferring the format from the extension.
- **kwargs (dict) – Keyword arguments to pass to the target function. See
muspy.load_json()
ormuspy.load_yaml()
for available arguments.
Returns: Loaded Music object.
Return type: muspy.Music
objectSee also
muspy.read()
- Read from other formats such as MIDI and MusicXML.
-
muspy.
load_json
(path: Union[str, pathlib.Path]) → muspy.music.Music[source]¶ Return a Music object loaded from a JSON file.
Parameters: path (str or Path) – Path to the file to load. Returns: Loaded Music object. Return type: muspy.Music
object
-
muspy.
load_yaml
(path: Union[str, pathlib.Path]) → muspy.music.Music[source]¶ Return a Music object loaded from a YAML file.
Parameters: path (str or Path) – Path to the file to load. Returns: Loaded Music object. Return type: muspy.Music
object
-
muspy.
read
(path: Union[str, pathlib.Path], kind: Optional[str] = None, **kwargs) → Union[muspy.music.Music, List[muspy.music.Music]][source]¶ Read a MIDI, a MusicXML or an ABC file into a Music object.
Parameters: - path (str or Path) – Path to the file to read.
- kind ({'midi', 'musicxml', 'abc'}, optional) – Format of the file to read (case-insensitive). Defaults to inferring the format from the extension.
Returns: Converted Music object(s).
Return type: muspy.Music
object or list ofmuspy.Music
objectsSee also
muspy.load()
- Load from a JSON or a YAML file.
-
muspy.
read_abc
(path: Union[str, pathlib.Path], number: Optional[int] = None, resolution=24) → List[muspy.music.Music][source]¶ Read an ABC file into Music object(s) using music21 as backend.
Parameters: Returns: Converted MusPy Music object(s).
Return type: list of
muspy.Music
objects
-
muspy.
read_abc_string
(data_str: str, number: Optional[int] = None, resolution=24)[source]¶ Read ABC data into Music object(s) using music21 as backend.
Parameters: Returns: Converted MusPy Music object(s).
Return type: muspy.Music
object
-
muspy.
read_midi
(path: Union[str, pathlib.Path], backend: str = 'mido', duplicate_note_mode: str = 'fifo') → muspy.music.Music[source]¶ Read a MIDI file into a Music object.
Parameters: - path (str or Path) – Path to the MIDI file to read.
- backend ({'mido', 'pretty_midi'}) – Backend to use. Defaults to 'mido'.
- duplicate_note_mode ({'fifo', 'lifo', 'close_all'}) –
Policy for dealing with duplicate notes. When a note-off message is presented while there are multiple corresponding note-on messages that have not yet been closed, we need a policy to decide which note-on messages to close. Defaults to 'fifo'. Only used when backend='mido'.
- ’fifo’ (first in first out): close the earliest note on
- ’lifo’ (last in first out): close the latest note on
- ’close_all’: close all note on messages
Returns: Converted Music object.
Return type: muspy.Music
object
-
muspy.
read_musicxml
(path: Union[str, pathlib.Path], compressed: Optional[bool] = None) → muspy.music.Music[source]¶ Read a MusicXML file into a Music object.
Parameters: path (str or Path) – Path to the MusicXML file to read. Returns: Converted Music object. Return type: muspy.Music
objectNotes
Grace notes and unpitched notes are not supported.
-
muspy.
drum_in_pattern_rate
(music: muspy.music.Music, meter: str) → float[source]¶ Return the ratio of drum notes in a certain drum pattern.
The drum-in-pattern rate is defined as the ratio of the number of drum notes in a certain drum pattern to the total number of drum notes. Only drum tracks are considered. Return NaN if no drum note is found. This metric is used in [1].
\[drum\_in\_pattern\_rate = \frac{ \#(drum\_notes\_in\_pattern)}{\#(drum\_notes)}\]Parameters: - music (
muspy.Music
object) – Music object to evaluate. - meter (str, {'duple', 'triple'}) – Meter of the drum pattern.
Returns: Drum-in-pattern rate.
Return type: See also
muspy.drum_pattern_consistency()
- Compute the largest drum-in-pattern rate.
References
- [1] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang,
- “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
drum_pattern_consistency
(music: muspy.music.Music) → float[source]¶ Return the largest drum-in-pattern rate.
The drum pattern consistency is defined as the largest drum-in-pattern rate over duple and triple meters. Only drum tracks are considered. Return NaN if no drum note is found.
\[drum\_pattern\_consistency = \max_{meter}{ drum\_in\_pattern\_rate(meter)}\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Drum pattern consistency. Return type: float See also
muspy.drum_in_pattern_rate()
- Compute the ratio of drum notes in a certain drum pattern.
-
muspy.
empty_beat_rate
(music: muspy.music.Music) → float[source]¶ Return the ratio of empty beats.
The empty-beat rate is defined as the ratio of the number of empty beats (where no note is played) to the total number of beats. Return NaN if song length is zero. This metric is also implemented in Pypianoroll [1].
\[empty\_beat\_rate = \frac{\#(empty\_beats)}{\#(beats)}\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Empty-beat rate. Return type: float See also
muspy.empty_measure_rate()
- Compute the ratio of empty measures.
References
- [1] Hao-Wen Dong, Wen-Yi Hsiao, and Yi-Hsuan Yang, “Pypianoroll: Open
- Source Python Package for Handling Multitrack Pianorolls,” in Late-Breaking Demos of the 18th International Society for Music Information Retrieval Conference (ISMIR), 2018.
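The metric can be sketched with NumPy (a toy version over a boolean piano roll of shape time x 128, not MusPy's implementation; here a beat counts as empty when no pitch is active in it):

```python
import numpy as np

def empty_beat_rate(pianoroll, resolution=24):
    """Toy empty-beat rate: fraction of beats with no active pitch.

    A beat is `resolution` consecutive time steps.
    """
    n_beats = pianoroll.shape[0] // resolution
    if n_beats == 0:
        return float("nan")  # zero-length song
    # Group time steps into beats and flag beats with no activity.
    beats = pianoroll[: n_beats * resolution].reshape(n_beats, -1)
    return float(np.mean(~beats.any(axis=1)))
```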
-
muspy.
empty_measure_rate
(music: muspy.music.Music, measure_resolution: int) → float[source]¶ Return the ratio of empty measures.
The empty-measure rate is defined as the ratio of the number of empty measures (where no note is played) to the total number of measures. Note that this metric only works for songs with a constant time signature. Return NaN if song length is zero. This metric is used in [1].
\[empty\_measure\_rate = \frac{\#(empty\_measures)}{\#(measures)}\]Parameters: - music (
muspy.Music
object) – Music object to evaluate. - measure_resolution (int) – Time steps per measure.
Returns: Empty-measure rate.
Return type: See also
muspy.empty_beat_rate()
- Compute the ratio of empty beats.
References
- [1] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang,
- “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
groove_consistency
(music: muspy.music.Music, measure_resolution: int) → float[source]¶ Return the groove consistency.
The groove consistency is defined as one minus the mean Hamming distance between the binary onset vectors of neighboring measures.
\[groove\_consistency = 1 - \frac{1}{T - 1} \sum_{i = 1}^{T - 1}{ d(G_i, G_{i + 1})}\]Here, \(T\) is the number of measures, \(G_i\) is the binary onset vector of the \(i\)-th measure (a one at each position that has an onset, otherwise a zero), and \(d(G, G')\) is the Hamming distance between two vectors \(G\) and \(G'\). Note that this metric only works for songs with a constant time signature. Return NaN if the number of measures is less than two. This metric is used in [1].
Parameters: - music (
muspy.Music
object) – Music object to evaluate. - measure_resolution (int) – Time steps per measure.
Returns: Groove consistency.
Return type: References
- [1] Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front
- Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
- music (
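The formula above can be sketched in NumPy (a toy version over a 1-D binary onset vector, not MusPy's implementation; here the Hamming distance is normalized by the measure length so the result stays in [0, 1], which is an assumption):

```python
import numpy as np

def groove_consistency(onsets, measure_resolution):
    """Toy groove consistency from a binary onset vector (one entry
    per time step), with measures of length measure_resolution."""
    n_measures = len(onsets) // measure_resolution
    if n_measures < 2:
        return float("nan")
    g = np.asarray(onsets[: n_measures * measure_resolution]).reshape(
        n_measures, measure_resolution)
    # Normalized Hamming distance between each pair of neighbors.
    hamming = np.abs(np.diff(g, axis=0)).sum(axis=1) / measure_resolution
    return float(1.0 - hamming.mean())
```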
-
muspy.
n_pitch_classes_used
(music: muspy.music.Music) → int[source]¶ Return the number of unique pitch classes used.
Drum tracks are ignored.
Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Number of unique pitch classes used. Return type: int See also
muspy.n_pitches_used()
- Compute the number of unique pitches used.
-
muspy.
n_pitches_used
(music: muspy.music.Music) → int[source]¶ Return the number of unique pitches used.
Drum tracks are ignored.
Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Number of unique pitches used. Return type: int See also
muspy.n_pitch_classes_used()
- Compute the number of unique pitch classes used.
-
muspy.
pitch_class_entropy
(music: muspy.music.Music) → float[source]¶ Return the entropy of the normalized note pitch class histogram.
The pitch class entropy is defined as the Shannon entropy of the normalized note pitch class histogram. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_class\_entropy = -\sum_{i = 0}^{11}{ P(pitch\_class=i) \times \log_2 P(pitch\_class=i)}\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Pitch class entropy. Return type: float See also
muspy.pitch_entropy()
- Compute the entropy of the normalized pitch histogram.
References
- [1] Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front
- Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
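The computation above can be sketched in plain Python (a toy version over a list of MIDI pitch numbers, not MusPy's implementation):

```python
import math
from collections import Counter

def pitch_class_entropy(pitches):
    """Toy pitch-class entropy from a list of MIDI pitch numbers.

    Builds the normalized pitch-class histogram (pitch mod 12) and
    returns its Shannon entropy in bits.
    """
    if not pitches:
        return float("nan")  # no note found
    counts = Counter(p % 12 for p in pitches)
    total = len(pitches)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

A single repeated pitch class gives entropy 0; four equally likely pitch classes give exactly 2 bits.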
-
muspy.
pitch_entropy
(music: muspy.music.Music) → float[source]¶ Return the entropy of the normalized note pitch histogram.
The pitch entropy is defined as the Shannon entropy of the normalized note pitch histogram. Drum tracks are ignored. Return NaN if no note is found.
\[pitch\_entropy = -\sum_{i = 0}^{127}{P(pitch=i) \log_2 P(pitch=i)}\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Pitch entropy. Return type: float See also
muspy.pitch_class_entropy()
- Compute the entropy of the normalized pitch class histogram.
-
muspy.
pitch_in_scale_rate
(music: muspy.music.Music, root: int, mode: str) → float[source]¶ Return the ratio of pitches in a certain musical scale.
The pitch-in-scale rate is defined as the ratio of the number of notes in a certain scale to the total number of notes. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_in\_scale\_rate = \frac{\#(notes\_in\_scale)}{\#(notes)}\]Parameters: - music (
muspy.Music
object) – Music object to evaluate. - root (int) – Root of the scale.
- mode (str, {'major', 'minor'}) – Mode of the scale.
Returns: Pitch-in-scale rate.
Return type: See also
muspy.scale_consistency()
- Compute the largest pitch-in-scale rate.
References
- [1] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang,
- “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
pitch_range
(music: muspy.music.Music) → int[source]¶ Return the pitch range.
Drum tracks are ignored. Return zero if no note is found.
Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Pitch range. Return type: int
-
muspy.
polyphony
(music: muspy.music.Music) → float[source]¶ Return the average number of pitches being played at the same time.
The polyphony is defined as the average number of pitches being played at the same time, evaluated only at time steps where at least one pitch is on. Drum tracks are ignored. Return NaN if no note is found.
\[polyphony = \frac{ \#(pitches\_when\_at\_least\_one\_pitch\_is\_on) }{ \#(time\_steps\_where\_at\_least\_one\_pitch\_is\_on) }\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Polyphony. Return type: float See also
muspy.polyphony_rate()
- Compute the ratio of time steps where multiple pitches are on.
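The definition can be sketched with NumPy (a toy version over a boolean piano roll of shape time x 128, not MusPy's implementation):

```python
import numpy as np

def polyphony(pianoroll):
    """Toy polyphony: mean number of active pitches, averaged only
    over time steps where at least one pitch is on."""
    active = pianoroll.sum(axis=1)
    if not active.any():
        return float("nan")  # no note found
    return float(active[active > 0].mean())
```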
-
muspy.
polyphony_rate
(music: muspy.music.Music, threshold: int = 2) → float[source]¶ Return the ratio of time steps where multiple pitches are on.
The polyphony rate is defined as the ratio of the number of time steps where multiple pitches are on to the total number of time steps. Drum tracks are ignored. Return NaN if song length is zero. This metric is used in [1], where it is called polyphonicity.
\[polyphony\_rate = \frac{ \#(time\_steps\_where\_multiple\_pitches\_are\_on) }{ \#(time\_steps) }\]Parameters: - music (
muspy.Music
object) – Music object to evaluate. - threshold (int) – The threshold of number of pitches to count into the numerator.
Returns: Polyphony rate.
Return type: See also
muspy.polyphony()
- Compute the average number of pitches being played at the same time.
References
- [1] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang,
- “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- music (
-
muspy.
scale_consistency
(music: muspy.music.Music) → float[source]¶ Return the largest pitch-in-scale rate.
The scale consistency is defined as the largest pitch-in-scale rate over all major and minor scales. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[scale\_consistency = \max_{root, mode}{ pitch\_in\_scale\_rate(root, mode)}\]Parameters: music ( muspy.Music
object) – Music object to evaluate.Returns: Scale consistency. Return type: float See also
muspy.pitch_in_scale_rate()
- Compute the ratio of pitches in a certain musical scale.
References
- [1] Olof Mogren, “C-RNN-GAN: Continuous recurrent neural networks with
- adversarial training,” in NeurIPS Workshop on Constructive Machine Learning, 2016.
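The maximization over scales can be sketched in plain Python (a toy version over a list of MIDI pitch numbers, not MusPy's implementation; the natural-minor pitch-class set is an assumption for illustration):

```python
MAJOR = {0, 2, 4, 5, 7, 9, 11}
MINOR = {0, 2, 3, 5, 7, 8, 10}  # natural minor (assumed)

def pitch_in_scale_rate(pitches, root, mode):
    """Toy fraction of pitches that fall in the given scale."""
    scale = MAJOR if mode == "major" else MINOR
    if not pitches:
        return float("nan")
    return sum((p - root) % 12 in scale for p in pitches) / len(pitches)

def scale_consistency(pitches):
    """Largest pitch-in-scale rate over all 24 major/minor scales."""
    return max(pitch_in_scale_rate(pitches, root, mode)
               for root in range(12) for mode in ("major", "minor"))
```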
-
class
muspy.
Music
(metadata: Optional[muspy.classes.Metadata] = None, resolution: Optional[int] = None, tempos: Optional[List[muspy.classes.Tempo]] = None, key_signatures: Optional[List[muspy.classes.KeySignature]] = None, time_signatures: Optional[List[muspy.classes.TimeSignature]] = None, downbeats: Optional[List[int]] = None, lyrics: Optional[List[muspy.classes.Lyric]] = None, annotations: Optional[List[muspy.classes.Annotation]] = None, tracks: Optional[List[muspy.classes.Track]] = None)[source]¶ A universal container for symbolic music.
This is the core class of MusPy. A Music object can be constructed in the following ways.
muspy.Music()
: Construct by setting values for attributes.muspy.Music.from_dict()
: Construct from a dictionary that stores the attributes and their values as key-value pairs.muspy.read()
: Read from a MIDI, a MusicXML or an ABC file.muspy.load()
: Load from a JSON or a YAML file saved bymuspy.save()
.muspy.from_object()
: Convert from a music21.Stream, amido.MidiFile
, apretty_midi.PrettyMIDI
or apypianoroll.Multitrack
object.
-
metadata
¶ Metadata.
Type: muspy.Metadata
object
-
tempos
¶ Tempo changes.
Type: list of muspy.Tempo
-
key_signatures
¶ Key signature changes.
Type: list of muspy.KeySignature
object
-
time_signatures
¶ Time signature changes.
Type: list of muspy.TimeSignature
object
-
downbeats
¶ Downbeat positions.
Type: list of int
-
lyrics
¶ Lyrics.
Type: list of muspy.Lyric
-
annotations
¶ Annotations.
Type: list of muspy.Annotation
-
tracks
¶ Music tracks.
Type: list of muspy.Track
Tip
Indexing a Music object gives the track of a certain index. That is, music[idx] is equivalent to music.tracks[idx]. Length of a Music object is the number of tracks. That is, len(music) is equivalent to len(music.tracks).
-
adjust_resolution
(target: Optional[int] = None, factor: Optional[float] = None) → muspy.music.Music[source]¶ Adjust resolution and update the timing of time-stamped objects.
Parameters: Returns: Return type: Object itself.
-
clip
(lower: int = 0, upper: int = 127) → muspy.music.Music[source]¶ Clip the velocity of each note for each track.
Parameters: Returns: Return type: Object itself.
-
get_end_time
(is_sorted: bool = False) → int[source]¶ Return the end time, i.e., the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
Parameters: is_sorted (bool) – Whether all the list attributes are sorted. Defaults to False.
-
get_real_end_time
(is_sorted: bool = False) → float[source]¶ Return the end time in realtime.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. Assume 120 qpm (quarter notes per minute) if no tempo information is available.
Parameters: is_sorted (bool) – Whether all the list attributes are sorted. Defaults to False.
-
save
(path: Union[str, pathlib.Path], kind: Optional[str] = None, **kwargs)[source]¶ Save losslessly to a JSON or a YAML file.
Refer to
muspy.save()
for full documentation.
-
save_json
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Save losslessly to a JSON file.
Refer to
muspy.save_json()
for full documentation.
-
save_yaml
(path: Union[str, pathlib.Path])[source]¶ Save losslessly to a YAML file.
Refer to
muspy.save_yaml()
for full documentation.
-
show
(kind: str, **kwargs)[source]¶ Show visualization.
Refer to
muspy.show()
for full documentation.
-
show_pianoroll
(**kwargs)[source]¶ Show pianoroll visualization.
Refer to
muspy.show_pianoroll()
for full documentation.
-
show_score
(**kwargs)[source]¶ Show score visualization.
Refer to
muspy.show_score()
for full documentation.
-
synthesize
(**kwargs) → numpy.ndarray[source]¶ Synthesize a Music object to raw audio.
Refer to
muspy.synthesize()
for full documentation.
-
to_event_representation
(**kwargs) → numpy.ndarray[source]¶ Return in event-based representation.
Refer to
muspy.to_event_representation()
for full documentation.
-
to_music21
(**kwargs) → music21.stream.Stream[source]¶ Return as a Stream object.
Refer to
muspy.to_music21()
for full documentation.
-
to_note_representation
(**kwargs) → numpy.ndarray[source]¶ Return in note-based representation.
Refer to
muspy.to_note_representation()
for full documentation.
-
to_object
(target: str, **kwargs)[source]¶ Convert to a target class.
Refer to
muspy.to_object()
for full documentation.
-
to_pianoroll_representation
(**kwargs) → numpy.ndarray[source]¶ Return in piano-roll representation.
Refer to
muspy.to_pianoroll_representation()
for full documentation.
-
to_pitch_representation
(**kwargs) → numpy.ndarray[source]¶ Return in pitch-based representation.
Refer to
muspy.to_pitch_representation()
for full documentation.
-
to_pretty_midi
(**kwargs) → pretty_midi.pretty_midi.PrettyMIDI[source]¶ Return as a PrettyMIDI object.
Refer to
muspy.to_pretty_midi()
for full documentation.
-
to_pypianoroll
(**kwargs) → pypianoroll.multitrack.Multitrack[source]¶ Return as a Multitrack object.
Refer to
muspy.to_pypianoroll()
for full documentation.
-
to_representation
(kind: str, **kwargs) → numpy.ndarray[source]¶ Return in a specific representation.
Refer to
muspy.to_representation()
for full documentation.
-
transpose
(semitone: int) → muspy.music.Music[source]¶ Transpose all the notes for all tracks by a number of semitones.
Parameters: semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers the pitches. Returns: Return type: Object itself.
-
write
(path: Union[str, pathlib.Path], kind: Optional[str] = None, **kwargs)[source]¶ Write to a MIDI, a MusicXML, an ABC or an audio file.
Refer to
muspy.write()
for full documentation.
-
write_abc
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to an ABC file.
Refer to
muspy.write_abc()
for full documentation.
-
write_audio
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to an audio file.
Refer to
muspy.write_audio()
for full documentation.
-
write_midi
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to a MIDI file.
Refer to
muspy.write_midi()
for full documentation.
-
write_musicxml
(path: Union[str, pathlib.Path], **kwargs)[source]¶ Write to a MusicXML file.
Refer to
muspy.write_musicxml()
for full documentation.
-
muspy.
save
(path: Union[str, pathlib.Path], music: Music, kind: Optional[str] = None, **kwargs)[source]¶ Save a Music object losslessly to a JSON or a YAML file.
Parameters: - path (str or Path) – Path to save the file.
- music (
muspy.Music
object) – Music object to save. - kind ({'json', 'yaml'}, optional) – Format to save (case-insensitive). Defaults to inferring the format from the extension.
See also
muspy.write()
- Write to other formats such as MIDI and MusicXML.
Notes
The conversion can be lossy if any nonserializable object is used (for example, in an Annotation object, which can store data of any type).
-
muspy.
save_json
(path: Union[str, pathlib.Path], music: Music)[source]¶ Save a Music object to a JSON file.
Parameters: - path (str or Path) – Path to save the JSON file.
- music (
muspy.Music
object) – Music object to save.
-
muspy.
save_yaml
(path: Union[str, pathlib.Path], music: Music)[source]¶ Save a Music object to a YAML file.
Parameters: - path (str or Path) – Path to save the YAML file.
- music (
muspy.Music
object) – Music object to save.
-
muspy.
to_event_representation
(music: Music, use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, force_velocity_event: bool = True, max_time_shift: int = 100, velocity_bins: int = 32) → numpy.ndarray[source]¶ Encode a Music object into event-based representation.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
Parameters: - music (
muspy.Music
object) – Music object to encode. - use_single_note_off_event (bool) – Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music. Defaults to False.
- use_end_of_sequence_event (bool) – Whether to append an end-of-sequence event to the encoded sequence. Defaults to False.
- force_velocity_event (bool) – Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one). Defaults to True.
- max_time_shift (int) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events. Defaults to 100.
- velocity_bins (int) – Number of velocity bins to use. Defaults to 32.
Returns: Encoded array in event-based representation.
Return type: ndarray, dtype=uint16, shape=(?, 1)
-
muspy.
to_mido
(music: Music, use_note_on_as_note_off: bool = True)[source]¶ Return a Music object as a MidiFile object.
Parameters: - music (
muspy.Music
object) – Music object to convert. - use_note_on_as_note_off (bool) – Whether to use a note-on message with zero velocity instead of a note-off message. Defaults to True.
Returns: Converted MidiFile object.
Return type: mido.MidiFile object
-
muspy.
to_music21
(music: Music) → music21.stream.Score[source]¶ Convert a Music object to a music21 Score object.
Parameters: music ( muspy.Music
object) – Music object to convert.Returns: Converted music21 Score object. Return type: music21.stream.Score object
-
muspy.
to_note_representation
(music: Music, use_start_end: bool = False, encode_velocity: bool = True) → numpy.ndarray[source]¶ Encode a Music object into note-based representation.
The note-based representation represents music as a sequence of (time, duration, pitch, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as the tuple (0, 4, 60, 64). The output shape is N x D, where N is the number of notes and D is 4 when encode_velocity is True, otherwise 3. The values of the second dimension represent time, duration, pitch and velocity (velocity is discarded when encode_velocity is False).
Parameters: - music (
muspy.Music
object) – Music object to encode. - use_start_end (bool) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’. Defaults to False.
- encode_velocity (bool) – Whether to encode note velocities. Defaults to True.
Returns: Encoded array in note-based representation.
Return type: ndarray, dtype=uint8, shape=(?, 3 or 4)
-
muspy.
to_object
(music: Music, target: str, **kwargs) → Union[music21.stream.Stream, mido.midifiles.midifiles.MidiFile, pretty_midi.pretty_midi.PrettyMIDI, pypianoroll.multitrack.Multitrack][source]¶ Convert a Music object to a music21 Stream, mido MidiFile, pretty_midi PrettyMIDI or pypianoroll Multitrack object.
Parameters: - music (
muspy.Music
object) – Music object to convert. - target (str, {'music21', 'mido', 'pretty_midi', 'pypianoroll'}) – Target class (case-insensitive).
Returns: music21.Stream, mido.MidiFile, pretty_midi.PrettyMIDI or pypianoroll.Multitrack object – Converted object.
-
muspy.
to_pianoroll_representation
(music: Music, encode_velocity: bool = True) → numpy.ndarray[source]¶ Encode notes into piano-roll representation.
Parameters: - music (
muspy.Music
object) – Music object to encode. - encode_velocity (bool) – Whether to encode velocities. If True, an integer array of velocities will be returned. Otherwise, a binary-valued array will be returned. Defaults to True.
Returns: Encoded array in piano-roll representation.
Return type: ndarray, dtype=uint8 or bool, shape=(?, 128)
-
muspy.
to_pitch_representation
(music: Music, use_hold_state: bool = False) → numpy.ndarray[source]¶ Encode a Music object into pitch-based representation.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or (optionally) a hold (129).
Parameters: - music (
muspy.Music
object) – Music object to encode. - use_hold_state (bool) – Whether to use a special state for holds. Defaults to False.
Returns: Encoded array in pitch-based representation.
Return type: ndarray, dtype=uint8, shape=(?, 1)
-
muspy.
to_pretty_midi
(music: Music) → pretty_midi.pretty_midi.PrettyMIDI[source]¶ Return a Music object as a PrettyMIDI object.
Tempo changes are not supported yet.
Parameters: music ( muspy.Music
object) – Music object to convert.Returns: Converted PrettyMIDI object. Return type: pretty_midi.PrettyMIDI
-
muspy.
to_pypianoroll
(music: Music) → pypianoroll.multitrack.Multitrack[source]¶ Return a Music object as a Multitrack object.
Parameters: music ( muspy.Music
) – MusPy Music object to convert.Returns: multitrack – Converted Multitrack object. Return type: pypianoroll.Multitrack
object
-
muspy.
to_representation
(music: Music, kind: str, **kwargs) → numpy.ndarray[source]¶ Return a Music object in a specific representation.
Parameters: - music (
muspy.Music
object) – Music object to convert. - kind (str, {'pitch', 'piano-roll', 'event', 'note'}) – Target representation (case-insensitive).
Returns: array – Converted representation.
Return type: ndarray
-
muspy.
synthesize
(music: Music, soundfont_path: Union[str, pathlib.Path, None] = None, rate: int = 44100) → numpy.ndarray[source]¶ Synthesize a Music object to raw audio.
Parameters: - music (
muspy.Music
object) – Music object to synthesize. - soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
- rate (int) – Sample rate (in samples per sec). Defaults to 44100.
Returns: Synthesized waveform.
Return type: ndarray, dtype=int16, shape=(?, 2)
-
muspy.
write
(path: Union[str, pathlib.Path], music: Music, kind: Optional[str] = None, **kwargs)[source]¶ Write a Music object to a MIDI, a MusicXML, an ABC or an audio file.
Parameters: - path (str or Path) – Path to write the file.
- music (
muspy.Music
object) – Music object to convert. - kind ({'midi', 'musicxml', 'abc', 'audio'}, optional) – Format to save (case-insensitive). Defaults to infer the format from the extension.
See also
muspy.save()
- Losslessly save to a JSON or a YAML file.
-
muspy.
write_abc
(path: Union[str, pathlib.Path], music: Music)[source]¶ Write a Music object to an ABC file.
Parameters: - path (str or Path) – Path to write the ABC file.
- music (
muspy.Music
object) – Music object to write.
-
muspy.
write_audio
(path: Union[str, pathlib.Path], music: Music, soundfont_path: Union[str, pathlib.Path, None] = None, rate: int = 44100, audio_format: Optional[str] = None)[source]¶ Write a Music object to an audio file.
Supported formats include WAV, AIFF, FLAC and OGA.
Parameters: - path (str or Path) – Path to write the audio file.
- music (
muspy.Music
object) – Music object to write. - soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
- rate (int) – Sample rate (in samples per sec). Defaults to 44100.
- audio_format (str, {'wav', 'aiff', 'flac', 'oga'}, optional) – File format to write. If None, infer it from the extension.
-
muspy.
write_midi
(path: Union[str, pathlib.Path], music: Music, backend: str = 'mido', **kwargs)[source]¶ Write a Music object to a MIDI file.
Parameters: - path (str or Path) – Path to write the MIDI file.
- music (
muspy.Music
object) – Music object to write. - backend ({'mido', 'pretty_midi'}) – Backend to use. Defaults to ‘mido’.
-
muspy.
write_musicxml
(path: Union[str, pathlib.Path], music: Music, compressed: Optional[bool] = None)[source]¶ Write a Music object to a MusicXML file.
Parameters: - path (str or Path) – Path to write the MusicXML file.
- music (
muspy.Music
object) – Music object to write. - compressed (bool, optional) – Whether to write to a compressed MusicXML file. If None, infer from the extension of the filename (‘.xml’ and ‘.musicxml’ for an uncompressed file, ‘.mxl’ for a compressed file).
-
class
muspy.
NoteRepresentationProcessor
(use_start_end: bool = False, encode_velocity: bool = True, default_velocity: int = 64)[source]¶ Note-based representation processor.
The note-based representation represents music as a sequence of (time, duration, pitch, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as the tuple (0, 4, 60, 64). The output shape is L x D, where L is the number of notes and D is 4 when encode_velocity is True, otherwise 3. The values of the second dimension represent time, duration, pitch and velocity (velocity is discarded when encode_velocity is False).
-
use_start_end
¶ Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’. Defaults to False.
Type: bool
-
default_velocity
¶ Default velocity value to use when decoding if encode_velocity is False. Defaults to 64.
Type: int
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode note-based representation into a Music object.
Parameters: array (ndarray) – Array in note-based representation to decode. Will be cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_note_representation()
- Return a Music object converted from note-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into note-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in note-based representation. Return type: ndarray (np.uint8) See also
muspy.to_note_representation()
- Convert a Music object into note-based representation.
-
-
class
muspy.
EventRepresentationProcessor
(use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, force_velocity_event: bool = True, max_time_shift: int = 100, velocity_bins: int = 32, default_velocity: int = 64)[source]¶ Event-based representation processor.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
-
use_single_note_off_event
¶ Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music. Defaults to False.
Type: bool
-
use_end_of_sequence_event
¶ Whether to append an end-of-sequence event to the encoded sequence. Defaults to False.
Type: bool
-
force_velocity_event
¶ Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one). Defaults to True.
Type: bool
-
max_time_shift
¶ Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events. Defaults to 100.
Type: int
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode event-based representation into a Music object.
Parameters: array (ndarray) – Array in event-based representation to decode. Will be cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_event_representation()
- Return a Music object converted from event-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into event-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in event-based representation. Return type: ndarray (np.uint16) See also
muspy.to_event_representation()
- Convert a Music object into event-based representation.
-
-
class
muspy.
PianoRollRepresentationProcessor
(encode_velocity: bool = True, default_velocity: int = 64)[source]¶ Piano-roll representation processor.
The piano-roll representation represents music as a time-pitch matrix, where the columns are the time steps and the rows are the pitches. The values indicate the presence of pitches at different time steps. The output shape is T x 128, where T is the number of time steps.
-
encode_velocity
¶ Whether to encode velocities. If True, an integer array of velocities will be returned. Otherwise, a binary-valued array will be returned. Defaults to True.
Type: bool
-
default_velocity
¶ Default velocity value to use when decoding if encode_velocity is False. Defaults to 64.
Type: int
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode piano-roll representation into a Music object.
Parameters: array (ndarray) – Array in piano-roll representation to decode. Will be cast to integer if not of integer type. If encode_velocity is False, will be cast to boolean if not of boolean type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_pianoroll_representation()
- Return a Music object converted from piano-roll representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into piano-roll representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in piano-roll representation. Return type: ndarray (np.uint8) See also
muspy.to_pianoroll_representation()
- Convert a Music object into piano-roll representation.
-
-
class
muspy.
PitchRepresentationProcessor
(use_hold_state: bool = False, default_velocity: int = 64)[source]¶ Pitch-based representation processor.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or (optionally) a hold (129).
-
decode
(array: numpy.ndarray) → muspy.music.Music[source]¶ Decode pitch-based representation into a Music object.
Parameters: array (ndarray) – Array in pitch-based representation to decode. Will be cast to integer if not of integer type. Returns: Decoded Music object. Return type: muspy.Music
objectSee also
muspy.from_pitch_representation()
- Return a Music object converted from pitch-based representation.
-
encode
(music: muspy.music.Music) → numpy.ndarray[source]¶ Encode a Music object into pitch-based representation.
Parameters: music ( muspy.Music
object) – Music object to encode.Returns: Encoded array in pitch-based representation. Return type: ndarray (np.uint8) See also
muspy.to_pitch_representation()
- Convert a Music object into pitch-based representation.
-
-
muspy.
validate_json
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the JSON schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
validate_musicxml
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the MusicXML schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
validate_yaml
(path: Union[str, pathlib.Path])[source]¶ Validate a file against the YAML schema.
Parameters: path (str or Path) – Path to the file to validate.
-
muspy.
show
(music: Music, kind: str, **kwargs)[source]¶ Show visualization.
Parameters: - music (
muspy.Music
object) – Music object to show. - kind (str, {'piano-roll', 'score'}) – Target visualization.
Returns: The return value of the underlying visualization function.
-
muspy.
show_score
(music: Music, figsize: Optional[Tuple[float, float]] = None, clef: str = 'treble', clef_octave: Optional[int] = 0, note_spacing: Optional[int] = None, font_path: Union[str, pathlib.Path, None] = None, font_scale: Optional[float] = None) → muspy.visualization.score.ScorePlotter[source]¶ Show score visualization.
Parameters: - music (
muspy.Music
object) – Music object to show. - figsize ((float, float), optional) – Width and height in inches. Defaults to Matplotlib configuration.
- clef (str, {'treble', 'alto', 'bass'}) – Clef type. Defaults to a treble clef.
- clef_octave (int) – Clef octave. Defaults to zero.
- note_spacing (int, optional) – Spacing of notes. Defaults to 4.
- font_path (str or Path, optional) – Path to the music font. Defaults to the path to the built-in Bravura font.
- font_scale (float, optional) – Font scaling factor for fine-tuning. Defaults to 140, optimized for the Bravura font.
Returns: A ScorePlotter object that handles the score.
Return type: muspy.ScorePlotter
object
-
class
muspy.
ScorePlotter
(fig: matplotlib.figure.Figure, ax: matplotlib.axes._axes.Axes, resolution: int, note_spacing: Optional[int] = None, font_path: Union[str, pathlib.Path, None] = None, font_scale: Optional[float] = None)[source]¶ A plotter that handles the score visualization.
-
fig
¶ Figure object to plot the score on.
Type: matplotlib.figure.Figure
object
-
axes
¶ Axes object to plot the score on.
Type: matplotlib.axes.Axes
object
-
font_path
¶ Path to the music font. Defaults to the path to the downloaded Bravura font.
Type: str or Path, optional
-
font_scale
¶ Font scaling factor for fine-tuning. Defaults to 140, optimized for the Bravura font.
Type: float, optional
-
plot_key_signature
(root: int, mode: str)[source]¶ Plot a key signature. Only major and minor keys are supported.
-
plot_note
(time, duration, pitch) → Optional[Tuple[List[matplotlib.text.Text], List[matplotlib.patches.Arc]]][source]¶ Plot a note.
-
plot_staffs
(start: Optional[float] = None, end: Optional[float] = None) → List[matplotlib.lines.Line2D][source]¶ Plot the staffs.
-