muspy.outputs

Output interfaces.

This module provides output interfaces for common symbolic music formats, MusPy’s native JSON and YAML formats, other symbolic music libraries and commonly-used representations in music generation.

Functions

  • save
  • save_json
  • save_yaml
  • synthesize
  • to_default_event_representation
  • to_event_representation
  • to_mido
  • to_music21
  • to_note_representation
  • to_object
  • to_performance_event_representation
  • to_pianoroll_representation
  • to_pitch_representation
  • to_pretty_midi
  • to_pypianoroll
  • to_remi_event_representation
  • to_representation
  • write
  • write_audio
  • write_midi
  • write_musicxml
muspy.outputs.save(path: Union[str, pathlib.Path, TextIO], music: Music, kind: str = None, **kwargs)[source]

Save a Music object losslessly to a JSON or a YAML file.

This is a wrapper function for muspy.save_json() and muspy.save_yaml().

Parameters:
  • path (str, Path or TextIO) – Path or file to save the data.
  • music (muspy.Music) – Music object to save.
  • kind ({'json', 'yaml'}, optional) – Format to save. Defaults to infer from the extension.
  • **kwargs – Keyword arguments to pass to muspy.save_json() or muspy.save_yaml().

See also

muspy.save_json()
Save a Music object to a JSON file.
muspy.save_yaml()
Save a Music object to a YAML file.
muspy.write()
Write a Music object to a MIDI/MusicXML/ABC/audio file.

Notes

The conversion can be lossy if any nonserializable object is used (for example, an Annotation object, which can store data of any type).
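
Examples

A minimal usage sketch, assuming music is an existing muspy.Music object:

>>> import muspy
>>> muspy.save("song.json", music)                # format inferred from the extension
>>> muspy.save("song.yaml", music, kind="yaml")   # or given explicitly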

muspy.outputs.save_json(path: Union[str, pathlib.Path, TextIO], music: Music, skip_missing: bool = True, ensure_ascii: bool = False, compressed: bool = None, **kwargs)[source]

Save a Music object to a JSON file.

Parameters:
  • path (str, Path or TextIO) – Path or file to save the JSON data.
  • music (muspy.Music) – Music object to save.
  • skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
  • ensure_ascii (bool, default: False) – Whether to escape non-ASCII characters. Will be passed to json.dumps().
  • compressed (bool, optional) – Whether to save as a compressed JSON file (.json.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
  • **kwargs – Keyword arguments to pass to json.dumps().

Notes

When a path is given, the file is written with UTF-8 encoding, and gzip compression is applied if compressed=True.
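
Examples

A brief sketch, assuming muspy is imported and music is a muspy.Music object:

>>> muspy.save_json("song.json", music)
>>> muspy.save_json("song.json.gz", music)   # gzip compression inferred from the .gz extension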

muspy.outputs.save_yaml(path: Union[str, pathlib.Path, TextIO], music: Music, skip_missing: bool = True, allow_unicode: bool = True, compressed: bool = None, **kwargs)[source]

Save a Music object to a YAML file.

Parameters:
  • path (str, Path or TextIO) – Path or file to save the YAML data.
  • music (muspy.Music) – Music object to save.
  • skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
  • allow_unicode (bool, default: True) – Whether to write non-ASCII characters as-is rather than escaping them. Will be passed to yaml.dump.
  • compressed (bool, optional) – Whether to save as a compressed YAML file (.yaml.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
  • **kwargs – Keyword arguments to pass to yaml.dump.

Notes

When a path is given, the file is written with UTF-8 encoding, and gzip compression is applied if compressed=True.

muspy.outputs.synthesize(music: Music, soundfont_path: Union[str, pathlib.Path] = None, rate: int = 44100, gain: float = None) → numpy.ndarray[source]

Synthesize a Music object to raw audio.

Parameters:
  • music (muspy.Music) – Music object to synthesize.
  • soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
  • rate (int, default: 44100) – Sample rate (in samples per sec).
  • gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
Returns:

Synthesized waveform.

Return type:

ndarray, dtype=int16, shape=(?, 2)
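
Examples

A minimal sketch, assuming muspy is imported, music is a muspy.Music object, FluidSynth is installed and the default soundfont has been downloaded:

>>> waveform = muspy.synthesize(music, rate=44100, gain=1.0)   # int16 array of shape (samples, 2)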

muspy.outputs.to_default_event_representation(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]

Encode a Music object into the default event representation.

muspy.outputs.to_event_representation(music: Music, use_single_note_off_event: bool = False, use_end_of_sequence_event: bool = False, encode_velocity: bool = False, force_velocity_event: bool = True, max_time_shift: int = 100, velocity_bins: int = 32, dtype=<class 'int'>) → numpy.ndarray[source]

Encode a Music object into event-based representation.

The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.

Parameters:
  • music (muspy.Music) – Music object to encode.
  • use_single_note_off_event (bool, default: False) – Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
  • use_end_of_sequence_event (bool, default: False) – Whether to append an end-of-sequence event to the encoded sequence.
  • encode_velocity (bool, default: False) – Whether to encode velocities.
  • force_velocity_event (bool, default: True) – Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one).
  • max_time_shift (int, default: 100) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
  • velocity_bins (int, default: 32) – Number of velocity bins to use.
  • dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns:

Encoded array in event-based representation.

Return type:

ndarray, shape=(?, 1)
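
Examples

A short sketch, assuming muspy is imported and music is a muspy.Music object:

>>> events = muspy.to_event_representation(music, encode_velocity=True)
>>> tokens = events.ravel()   # flatten the (M, 1) array into a 1-D event sequence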

muspy.outputs.to_mido(music: Music, use_note_off_message: bool = False)[source]

Return a Music object as a MidiFile object.

Parameters:
  • music (muspy.Music object) – Music object to convert.
  • use_note_off_message (bool, default: False) – Whether to use note-off messages. If False, note-on messages with zero velocity are used instead. The advantage of using note-on messages with zero velocity is that it can avoid sending additional status bytes when running status is employed.
Returns:

Converted MidiFile object.

Return type:

mido.MidiFile
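
Examples

A short sketch, assuming muspy and mido are installed and music is a muspy.Music object:

>>> midi = muspy.to_mido(music, use_note_off_message=True)
>>> midi.save("song.mid")   # mido.MidiFile objects can be saved directly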

muspy.outputs.to_music21(music: Music) → music21.stream.base.Score[source]

Return a Music object as a music21 Score object.

Parameters:
  • music (muspy.Music) – Music object to convert.
Returns:

Converted music21 Score object.

Return type:

music21.stream.Score

muspy.outputs.to_note_representation(music: Music, use_start_end: bool = False, encode_velocity: bool = True, dtype: Union[numpy.dtype, type, str] = <class 'int'>) → numpy.ndarray[source]

Encode a Music object into note-based representation.

The note-based representation represents music as a sequence of (time, pitch, duration, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as a tuple (0, 60, 4, 64). The output shape is N x D, where N is the number of notes and D is 4 when encode_velocity is True, otherwise D is 3. The values of the second dimension represent time, pitch, duration and velocity (discarded when encode_velocity is False).

Parameters:
  • music (muspy.Music) – Music object to encode.
  • use_start_end (bool, default: False) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
  • encode_velocity (bool, default: True) – Whether to encode note velocities.
  • dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns:

Encoded array in note-based representation.

Return type:

ndarray, shape=(?, 3 or 4)
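
Examples

A short sketch, assuming an imported muspy and an existing muspy.Music object music:

>>> notes = muspy.to_note_representation(music)   # columns: time, pitch, duration, velocity
>>> onsets = muspy.to_note_representation(music, use_start_end=True, encode_velocity=False)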

muspy.outputs.to_object(music: Music, kind: str, **kwargs) → Union[music21.stream.base.Stream, mido.midifiles.midifiles.MidiFile, pretty_midi.pretty_midi.PrettyMIDI, pypianoroll.multitrack.Multitrack][source]

Return a Music object as an object in other libraries.

Supported classes are music21.Stream, mido.MidiFile, pretty_midi.PrettyMIDI and pypianoroll.Multitrack.

Parameters:
  • music (muspy.Music) – Music object to convert.
  • kind (str, {'music21', 'mido', 'pretty_midi', 'pypianoroll'}) – Target class.
Returns:

Converted object.

Return type:

music21.Stream, mido.MidiFile, pretty_midi.PrettyMIDI or pypianoroll.Multitrack
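
Examples

A short sketch, assuming muspy is imported, music is a muspy.Music object and the target libraries are installed:

>>> pm = muspy.to_object(music, "pretty_midi")   # same result as muspy.to_pretty_midi(music)
>>> score = muspy.to_object(music, "music21")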

muspy.outputs.to_performance_event_representation(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]

Encode a Music object into the performance event representation.

muspy.outputs.to_pianoroll_representation(music: Music, encode_velocity: bool = True, dtype: Union[numpy.dtype, type, str] = None) → numpy.ndarray[source]

Encode notes into piano-roll representation.

Parameters:
  • music (muspy.Music) – Music object to encode.
  • encode_velocity (bool, default: True) – Whether to encode velocities. If True, an integer array storing the note velocities will be returned. Otherwise, a binary-valued array indicating only whether a note is active will be returned.
  • dtype (np.dtype, type or str, optional) – Data type of the return array. Defaults to uint8 if encode_velocity is True, otherwise bool.
Returns:

Encoded array in piano-roll representation.

Return type:

ndarray, shape=(?, 128)
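
Examples

A short sketch, assuming muspy is imported and music is a muspy.Music object:

>>> pianoroll = muspy.to_pianoroll_representation(music)                       # uint8 velocities, shape (T, 128)
>>> binary = muspy.to_pianoroll_representation(music, encode_velocity=False)   # boolean on/off, shape (T, 128)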

muspy.outputs.to_pitch_representation(music: Music, use_hold_state: bool = False, dtype: Union[numpy.dtype, type, str] = <class 'int'>) → numpy.ndarray[source]

Encode a Music object into pitch-based representation.

The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or, optionally, a hold (129).

Parameters:
  • music (muspy.Music) – Music object to encode.
  • use_hold_state (bool, default: False) – Whether to use a special state for holds.
  • dtype (np.dtype, type or str, default: int) – Data type of the return array.
Returns:

Encoded array in pitch-based representation.

Return type:

ndarray, shape=(?, 1)
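
Examples

A short sketch, assuming muspy is imported and music is a monophonic muspy.Music object:

>>> pitches = muspy.to_pitch_representation(music, use_hold_state=True)
>>> tokens = pitches.ravel()   # values: 0-127 pitch, 128 rest, 129 hold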

muspy.outputs.to_pretty_midi(music: Music) → pretty_midi.pretty_midi.PrettyMIDI[source]

Return a Music object as a PrettyMIDI object.

Tempo changes are not supported yet.

Parameters:
  • music (muspy.Music) – Music object to convert.
Returns:

Converted PrettyMIDI object.

Return type:

pretty_midi.PrettyMIDI

Notes

Tempo information will not be included in the output.
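
Examples

A short sketch, assuming muspy and pretty_midi are installed and music is a muspy.Music object:

>>> pm = muspy.to_pretty_midi(music)
>>> pm.write("song.mid")   # PrettyMIDI objects provide their own write method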

muspy.outputs.to_pypianoroll(music: Music) → pypianoroll.multitrack.Multitrack[source]

Return a Music object as a Multitrack object.

Parameters:
  • music (muspy.Music) – Music object to convert.
Returns:

multitrack – Converted Multitrack object.

Return type:

pypianoroll.Multitrack
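
Examples

A short sketch, assuming muspy and pypianoroll are installed and music is a muspy.Music object with at least one track:

>>> multitrack = muspy.to_pypianoroll(music)
>>> n_tracks = len(multitrack.tracks)   # per-track piano rolls are stored in the Multitrack object
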
muspy.outputs.to_remi_event_representation(music: Music, dtype=<class 'int'>) → numpy.ndarray[source]

Encode a Music object into the REMI event representation.

muspy.outputs.to_representation(music: Music, kind: str, **kwargs) → numpy.ndarray[source]

Return a Music object in a specific representation.

Parameters:
  • music (muspy.Music) – Music object to convert.
  • kind (str, {'pitch', 'piano-roll', 'event', 'note'}) – Target representation.
Returns:

array – Converted representation.

Return type:

ndarray
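
Examples

A short sketch, assuming muspy is imported and music is a muspy.Music object:

>>> pianoroll = muspy.to_representation(music, "piano-roll")
>>> events = muspy.to_representation(music, "event", encode_velocity=True)   # extra keyword arguments are forwarded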

muspy.outputs.write(path: Union[str, pathlib.Path], music: Music, kind: str = None, **kwargs)[source]

Write a Music object to a MIDI/MusicXML/ABC/audio file.

Parameters:
  • path (str or Path) – Path to write the file.
  • music (muspy.Music) – Music object to convert.
  • kind ({'midi', 'musicxml', 'abc', 'audio'}, optional) – Format to save. Defaults to infer from the extension.

See also

muspy.save()
Save a Music object losslessly to a JSON or a YAML file.
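
Examples

A minimal sketch, assuming muspy is imported and music is a muspy.Music object:

>>> muspy.write("song.mid", music)                   # format inferred from the extension
>>> muspy.write("song.xml", music, kind="musicxml")  # or given explicitly
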
muspy.outputs.write_audio(path: Union[str, pathlib.Path], music: Music, audio_format: str = None, soundfont_path: Union[str, pathlib.Path] = None, rate: int = 44100, gain: float = None)[source]

Write a Music object to an audio file.

Supported formats include WAV, AIFF, FLAC and OGA.

Parameters:
  • path (str or Path) – Path to write the audio file.
  • music (muspy.Music) – Music object to write.
  • audio_format (str, {'wav', 'aiff', 'flac', 'oga'}, optional) – File format to write. Defaults to infer from the extension.
  • soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
  • rate (int, default: 44100) – Sample rate (in samples per sec).
  • gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
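
Examples

A brief sketch, assuming muspy is imported, music is a muspy.Music object and FluidSynth with a soundfont is available:

>>> muspy.write_audio("song.wav", music)                                  # format inferred from the extension
>>> muspy.write_audio("song.flac", music, audio_format="flac", gain=1.0)
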
muspy.outputs.write_midi(path: Union[str, pathlib.Path], music: Music, backend: str = 'mido', **kwargs)[source]

Write a Music object to a MIDI file.

Parameters:
  • path (str or Path) – Path to write the MIDI file.
  • music (muspy.Music) – Music object to write.
  • backend ({'mido', 'pretty_midi'}, default: 'mido') – Backend to use.

See also

write_midi_mido()
Write a Music object to a MIDI file using mido as backend.
write_midi_pretty_midi()
Write a Music object to a MIDI file using pretty_midi as backend.
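
Examples

A short sketch, assuming muspy is imported and music is a muspy.Music object:

>>> muspy.write_midi("song.mid", music)                         # default mido backend
>>> muspy.write_midi("song.mid", music, backend="pretty_midi")
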
muspy.outputs.write_musicxml(path: Union[str, pathlib.Path], music: Music, compressed: bool = None)[source]

Write a Music object to a MusicXML file.

Parameters:
  • path (str or Path) – Path to write the MusicXML file.
  • music (muspy.Music) – Music object to write.
  • compressed (bool, optional) – Whether to write to a compressed MusicXML file. If None, infer from the extension of the filename (‘.xml’ and ‘.musicxml’ for an uncompressed file, ‘.mxl’ for a compressed file).
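
Examples

A short sketch, assuming muspy is imported and music is a muspy.Music object:

>>> muspy.write_musicxml("song.musicxml", music)   # uncompressed MusicXML
>>> muspy.write_musicxml("song.mxl", music)        # compressed, inferred from the .mxl extension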