Pitch-based Representation
muspy.to_pitch_representation(music: Music, use_hold_state: bool = False) → numpy.ndarray

Encode a Music object into pitch-based representation.

The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or (optionally) a hold (129).
Parameters:
- music (muspy.Music object) – Music object to encode.
- use_hold_state (bool) – Whether to use a special state for holds. Defaults to False.

Returns: Encoded array in pitch-based representation.
Return type: ndarray, dtype=uint8, shape=(?, 1)
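For intuition about the output format (hand-built arrays for illustration, not the library's implementation), a C4 note held for two time steps followed by a rest could be encoded as:

```python
import numpy as np

# Token values used by the pitch-based representation:
# 0-127 -> MIDI pitch, 128 -> rest, 129 -> hold (only if use_hold_state=True)
REST, HOLD = 128, 129

# Without hold state: a sustained note repeats its pitch at each time step.
no_hold = np.array([[60], [60], [REST]], dtype=np.uint8)  # C4, C4, rest

# With hold state: only the onset carries the pitch; later steps are holds.
with_hold = np.array([[60], [HOLD], [REST]], dtype=np.uint8)

print(no_hold.shape)  # (3, 1) -> T x 1
```

Note that without the hold state, two consecutive quarter notes on the same pitch become indistinguishable from one half note, which is why the optional hold token exists.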
muspy.from_pitch_representation(array: numpy.ndarray, resolution: int = 24, program: int = 0, is_drum: bool = False, use_hold_state: bool = False, default_velocity: int = 64) → muspy.Music

Decode pitch-based representation into a Music object.
Parameters:
- array (ndarray) – Array in pitch-based representation to decode. Will be cast to integer if not of integer type.
- resolution (int) – Time steps per quarter note. Defaults to muspy.DEFAULT_RESOLUTION.
- program (int, optional) – Program number according to General MIDI specification [1]. Acceptable values are 0 to 127. Defaults to 0 (Acoustic Grand Piano).
- is_drum (bool, optional) – A boolean indicating if it is a percussion track. Defaults to False.
- use_hold_state (bool) – Whether to use a special state for holds. Defaults to False.
- default_velocity (int) – Default velocity value to use when decoding. Defaults to 64.
Returns: Decoded Music object.
Return type: muspy.Music object

References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
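The core of the decoding logic can be sketched as follows. This is an illustration of how consecutive identical pitch tokens merge into single notes when use_hold_state is False, not the function's actual implementation; the helper name decode_notes is hypothetical.

```python
import numpy as np

REST = 128  # assumed rest token, per the representation described above

def decode_notes(array):
    """Convert a T x 1 pitch-based array into (onset, pitch, duration) tuples.

    Runs of identical pitch tokens are merged into one note, mirroring the
    behaviour when use_hold_state is False; rest runs produce no notes.
    """
    tokens = array.astype(int).flatten()
    notes, start = [], None
    for t, tok in enumerate(tokens):
        if start is None:
            start = t
        elif tok != tokens[start]:
            if tokens[start] != REST:
                notes.append((start, int(tokens[start]), t - start))
            start = t
    if start is not None and tokens[start] != REST:
        notes.append((start, int(tokens[start]), len(tokens) - start))
    return notes

array = np.array([[60], [60], [128], [64]], dtype=np.uint8)
print(decode_notes(array))  # [(0, 60, 2), (3, 64, 1)]
```

In the real function, each recovered note additionally receives the given program, is_drum flag and default_velocity, and onsets/durations are interpreted in time steps at the given resolution.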
class muspy.PitchRepresentationProcessor(use_hold_state: bool = False, default_velocity: int = 64)

Pitch-based representation processor.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or (optionally) a hold (129).
decode(array: numpy.ndarray) → muspy.Music

Decode pitch-based representation into a Music object.

Parameters: array (ndarray) – Array in pitch-based representation to decode. Will be cast to integer if not of integer type.
Returns: Decoded Music object.
Return type: muspy.Music object

See also
muspy.from_pitch_representation() – Return a Music object converted from pitch-based representation.
encode(music: muspy.Music) → numpy.ndarray

Encode a Music object into pitch-based representation.

Parameters: music (muspy.Music object) – Music object to encode.
Returns: Encoded array in pitch-based representation.
Return type: ndarray (np.uint8)

See also
muspy.to_pitch_representation() – Convert a Music object into pitch-based representation.
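The encoding direction with the hold state can likewise be sketched in plain NumPy. This is an illustration under the token values described above, not the processor's actual code; the helper name encode_notes is hypothetical.

```python
import numpy as np

REST, HOLD = 128, 129  # assumed rest and hold tokens

def encode_notes(notes, length):
    """Encode (onset, pitch, duration) notes into a T x 1 array (use_hold_state=True).

    Time steps not covered by any note default to the rest token.
    """
    array = np.full((length, 1), REST, dtype=np.uint8)
    for onset, pitch, duration in notes:
        array[onset, 0] = pitch                       # onset step carries the pitch
        array[onset + 1:onset + duration, 0] = HOLD   # remaining steps are holds
    return array

out = encode_notes([(0, 60, 2), (3, 64, 1)], length=4)
print(out.flatten().tolist())  # [60, 129, 128, 64]
```

With use_hold_state=False, the second line of the loop would write pitch instead of HOLD, repeating the pitch for the note's full duration.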