Workspace

A workspace contains the current state: the active config, the active score structure, a playback engine, etc. Many actions, like note playback or notation rendering, use the active workspace to determine tempo, score structure, default playback instrument, etc.

[22]:
from maelzel.core import *
from IPython.display import display

The active workspace

To customize a Workspace for a specific task, there are three slightly different methods:

1. Modify the active Workspace

We modify the active workspace by setting a new configuration:

[23]:
w = getWorkspace()
config = w.config.clone({
    'play.numChannels': 4,
    'show.pngResolution': 300,
    'quant.complexity': 'high'
})
w.config = config

2. Create a new Workspace with the needed customizations

[15]:
w = Workspace(scorestruct=ScoreStruct(timesig=(3, 4), tempo=72),
              updates={'play.numChannels': 4,
                       'show.pngResolution': 300,
                       'quant.complexity': 'high'})
w.activate()
[15]:
Workspace(scorestruct=ScoreStruct(tempo=72, timesig=(3, 4)), config={'show.pngResolution': 300, 'play.numChannels': 4}, dynamicCurve=DynamicCurve(shape=expon(0.3), mindb=-60.0, maxdb=0.0))

3. Temporary Workspace (as context manager)

[18]:
with Workspace(scorestruct=ScoreStruct(timesig=(3, 4), tempo=72)):
    scale = Chain(Note(m, dur=0.5) for m in range(60, 72))
    display(scale)
    display(scale.rec(instr='.piano', nchnls=1))
Chain([4C:0.5♩, 4C#:0.5♩, 4D:0.5♩, 4D#:0.5♩, 4E:0.5♩, 4F:0.5♩, 4F#:0.5♩, 4G:0.5♩, 4G#:0.5♩, 4A:0.5♩, …], dur=6)
OfflineRenderer(outfile="/home/em/.local/share/maelzel/recordings/rec-2023-03-27T20:46:25.930.wav", 1 channels, 5.02 secs, 44100 Hz)


Parts of a Workspace

The workspace bundles the different elements which determine playback, notation and the general behaviour of maelzel.core (see the sketch after this list):

  • .config: holds the active configuration

  • .scorestruct: the active score structure

  • .dynamicCurve: determines the mapping between amplitude and musical dynamic. This is used for playback and transcription

  • .a4: the reference frequency for A4

  • .renderer: used internally when rendering offline. When an object (note, chord, voice, …) is played, it uses this attribute to determine how to route the generated playback events
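
A minimal sketch of how these parts can be accessed; it assumes, as above, that they are plain attributes of the Workspace returned by getWorkspace():

w = getWorkspace()
print(w.a4)                      # reference frequency for A4
print(w.scorestruct)             # the active score structure
print(w.config['play.instr'])    # the config behaves like a dict (see "The active Config" below)

# .config and .scorestruct can be reassigned, as shown elsewhere in this notebook;
# it is assumed here that .a4 is writable in the same way
w.scorestruct = ScoreStruct(timesig=(3, 4), tempo=72)
w.a4 = 443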

The active Config

The .config attribute of the active Workspace holds the active configuration. This is a subclass of dict and holds defaults and customizations regarding playback, notation, etc.

[24]:
config = w.config
assert config is getConfig()

config
[24]:

CoreConfig: maelzel:core


Key | Value | Type | Description
A4 | 442 | between 10 - 10000 | Frequency of the Kammerton A4. Normal values are 440, 442, 443 or 432 for old tuning, but any 'fantasy' value can be used
splitAcceptableDeviation | 4 | int | When splitting notes between staves, notes within this range of the split point will be grouped together if they all fit
chordAdjustGain | True | bool | Adjust the gain of a chord according to the number of notes, to prevent clipping
reprShowFreq | False | bool | Show frequency when printing a Note in the console
semitoneDivisions | 4 | {1, 2, 4} | The number of divisions per semitone (2=quarter-tones, 4=eighth-tones)
musescorepath |  | str | The command to use when calling MuseScore. For macOS users: it must be an absolute path pointing to the actual binary inside the .app bundle
reprShowFractionsAsFloat | True | bool | All time offsets and durations are kept as rational numbers to avoid rounding errors. If this option is True, these fractions are printed as floats to make them more readable
jupyterHtmlRepr | True | bool | If True, output html inside jupyter as part of the _repr_html_ hook. Under certain circumstances (for example, when generating documentation from a notebook) this html might result in style conflicts. Setting it to False will output plain text
fixStringNotenames | False | bool | If True, pitches given as string notenames are fixed at the spelling given at creation. Otherwise pitches might be respelled to match their context for better readability. Pitches given as midi notes or frequencies are always respelled
openImagesInExternalApp | False | bool | Force opening images with an external tool, even when inside a Jupyter notebook
enharmonic.horizontalWeight | 1 | int | The weight of the horizontal dimension (note sequences) when evaluating an enharmonic variant
enharmonic.verticalWeight | 0.01 | float | The weight of the vertical dimension (chords within a voice) when evaluating an enharmonic variant
enharmonic.debug | False | bool | If True, print debug information while calculating automatic enharmonic spelling
enharmonic.threeQuarterMicrotonePenalty | 20 | int |
show.arpeggiateChord | auto | {auto, False, True} | Arpeggiate notes of a chord when showing. In auto mode, only arpeggiate when needed
show.lastBreakpointDur | 0.125 | between 0.015625 - 1 | Duration of a note representing the end of a line/gliss, which has no duration per se
show.centsDeviationAsTextAnnotation | True | bool | Show cents deviation as text when rendering notation
show.centsAnnotationFontSize | 8 | int | Font size used for cents annotations
show.centSep | , | str | Separator used when displaying multiple cents deviations (in a chord)
show.scaleFactor | 1.0 | float | Affects the size of the generated image when using png format
show.staffSize | 12.0 | float | The size of a staff, in points
show.backend | lilypond | {lilypond, music21} | Method/backend used when rendering notation
show.format | png | {pdf, png, repr} | Used when no explicit format is passed to .show
show.cacheImages | True | bool | If True, cache rendered images. Set it to False for debugging; call `resetImageCache()` to reset manually
show.arpeggioDuration | 0.5 | float | Duration used for individual notes when rendering a chord as arpeggio
show.labelFontSize | 10.0 | float | Font size to use for labels
show.pageOrientation | portrait | {landscape, portrait} | Page orientation when rendering to pdf
show.pageSize | a4 | {a2, a3, a4} | The page size when rendering to pdf
show.pageMarginMillimeters | 4 | between 0 - 1000 | The page margin in mm
show.glissEndStemless | False | bool | When the end pitch of a gliss. is shown as a gracenote, make it stemless
show.glissHideTiedNotes | True | bool | Hide tied notes which are part of a glissando
show.glissLineThickness | 2 | {1, 2, 3, 4} | Line thickness when rendering glissandi. The value is abstract and it is up to the renderer to interpret it
show.lilypondPngStaffsizeScale | 1.5 | float | A factor applied to the staff size when rendering to png via lilypond. Useful if rendered images appear too small in a jupyter notebook
show.lilypondGlissandoMinimumLength | 5 | int | The minimum length of a glissando in points. Increase this value if glissando lines are not shown or are too short (this might be the case within the context of dotted notes or accidentals)
show.pngResolution | 300 | {100, 200, 300, 600, 1200} (default: 200) | DPI used when rendering to png
show.measureAnnotationStyle | box=square; fontsize=12 | str |
show.respellPitches | True | bool | If True, try to find a suitable enharmonic representation of pitches which have not been fixed already by the user. Otherwise the canonical form of each pitch is used, independent of the context
show.horizontalSpacing | medium | {default, large, medium, small, xlarge} | Hint for the renderer to adjust horizontal spacing. The actual result depends on the backend and the format used
show.fillDynamicFromAmplitude | False | bool | If True, when rendering notation, if an object has an amplitude and does not have an explicit dynamic, add a dynamic according to the amplitude
show.jupyterMaxImageWidth | 1000 | int | A max. width in pixels for images displayed in a jupyter notebook
show.hideRedundantDynamics | True | bool | Hide redundant dynamics within a voice
show.asoluteOffsetForDetachedObjects | False | bool | When showing an object which has a parent but is shown detached from it, should the absolute offset be used?
show.voiceMaxStaves | 1 | between 1 - 4 | The maximum number of staves per voice when showing a Voice as notation. A voice is a sequence of non-simultaneous events (notes, chords, etc.) but these can be exploded over multiple staves (for example, a chord might expand across a wide range and would need multiple extra lines in any clef)
show.clipNoteheadShape | square | {, cluster, cross, diamond, harmonic, normal, rectangle, rhombus, slash, square, triangle, xcircle} | Notehead shape to use for clips
play.gain | 1.0 | between 0 - 1 | Default gain used when playing/recording
play.engineName | maelzel.core | str | Name of the play engine used
play.instr | sin | str | Default instrument used for playback. A list of available instruments can be queried via `availableInstrs`. New instrument presets can be defined via `defPreset`
play.fade | 0.02 | float | Default fade time
play.fadeShape | cos | {cos, linear, scurve} | Curve-shape used for fading in/out
play.pitchInterpolation | linear | {cos, linear} | Curve shape for interpolating between pitches
play.numChannels | 4 | between 1 - 128 (default: 2) | Default number of channels (channels can be set explicitly when calling startPlayEngine)
play.unschedFadeout | 0.05 | float | Fade out when stopping a note
play.backend | default | {alsa, auhal, default, jack, pa_cb, portaudio, pulse} | Backend used for playback
play.defaultAmplitude | 1.0 | between 0 - 1 | The amplitude of a Note/Chord when an amplitude is needed and the object has an undefined amplitude. This is only used if play.useDynamics is False
play.defaultDynamic | f | {f, ff, fff, ffff, mf, mp, p, pp, ppp, pppp} | The dynamic of a Note/Chord when a dynamic is needed. This is only used if play.useDynamics is True. Any event with an amplitude will use that amplitude instead
play.generalMidiSoundfont |  | str | Path to a soundfont (sf2 file) with a general midi mapping
play.soundfontAmpDiv | 16384 | int | A divisor used to scale the amplitude of soundfonts to a range 0-1
play.soundfontInterpolation | linear | {cubic, linear} | Interpolation used when reading sample data from a soundfont
play.schedLatency | 0.05 | float | Added latency when scheduling events to ensure time precision
play.verbose | False | bool | If True, outputs extra debugging information regarding playback
play.useDynamics | True | bool | If True, any note/chord with a set dynamic will use that to modify its playback amplitude if no explicit amplitude is set
play.waitAfterStart | 0.5 | float | How much to wait for the sound engine to be operational after starting it
play.gracenoteDuration | 1/14 | (int, float, str) | Duration assigned to a gracenote for playback (in quarternotes)
rec.blocking | True | bool | Should recording be blocking or async?
rec.sr | 44100 | {44100, 48000, 88200, 96000, 144000, 176400, 192000, 352800, 384000} | Sample rate used when rendering offline
rec.ksmps | 64 | {1, 16, 32, 64, 128, 256} | Samples per cycle when rendering offline (passed as ksmps to csound)
rec.numChannels | 2 | between 1 - 128 | The default number of channels when rendering to disk
rec.path |  | str | Path used to save output files when rendering offline. If not given, the default can be queried via `recordPath`
rec.quiet | True | bool | Suppress debug output when calling csound as a subprocess
rec.compressionBitrate | 224 | int | Default bitrate to use when encoding to ogg or mp3
htmlTheme | light | {dark, light} | Theme used when displaying html inside jupyter
quant.minBeatFractionAcrossBeats | 0.5 | float | When merging durations across beats, a merged duration cannot be smaller than this duration. This is to prevent joining durations across beats which might result in high rhythmic complexity
quant.nestedTuplets | None | {False, None, True} | Are nested tuplets allowed when quantizing? Not all display backends support nested tuplets (MuseScore, used to render musicxml, has no support for nested tuplets). If None, this flag is determined based on the complexity preset (quant.complexity)
quant.breakSyncopationsLevel | weak | {all, none, strong, weak} | Level at which to break syncopations, one of "all" (break all syncopations), "weak" (break only syncopations over secondary beats), "strong" (break syncopations at strong beats) or "none" (do not break any syncopations)
quant.complexity | high | {high, highest, low, lowest, medium} | Controls the allowed complexity in the notation. The higher the complexity, the more accurate the quantization, at the cost of a more complex notation
quant.divisionErrorWeight | None | NoneType | A weight (between 0 and 1) applied to the penalty of complex quantization of the beat. The higher this value is, the simpler the subdivision chosen. If set to None, this value is derived from the complexity preset (quant.complexity)
quant.gridErrorWeight | None | NoneType | A weight (between 0 and 1) applied to the deviation of a quantization to the actual attack times and durations during quantization. The higher this value, the more accurate the quantization (possibly resulting in more complex subdivisions of the beat). If None, the value is derived from the complexity preset (quant.complexity)
quant.rhythmComplexityWeight | None | NoneType | A weight (between 0 and 1) applied to the penalty calculated from the complexity of the rhythm during quantization. A higher value results in more complex rhythms being considered for quantization. If None, the value is derived from the complexity preset (quant.complexity)
quant.gridErrorExp | None | NoneType | An exponent applied to the grid error. The grid error is a value between 0-1 which indicates how accurate the grid representation is for a given quantization (a value of 0 indicates perfect timing). An exponent between 0 < exp <= 1 will make grid errors weigh more dramatically as they diverge from the most accurate solution. If None, the value is derived from the complexity setting (quant.complexity)
quant.debug | False | bool | Turns on debugging for the quantization process. This will show how different divisions of the beat are being evaluated by the quantizer in terms of what is contributing more to the ranking. With this information it is possible to adjust the weights (quant.rhythmComplexityWeight, quant.divisionErrorWeight, etc.)
quant.debugShowNumRows | 50 | int | When quantization debugging is turned on, this setting limits the number of different quantization possibilities shown
dynamicCurveShape | expon(0.3) | str | The shape used to create the default dynamics curve. The most convenient shape is some variation of an exponential, given as expon(exp), where exp is the exponent used. exp < 1 will result in more resolution for soft dynamics
dynamicCurveMindb | -60 | between -160 - 0 | The amplitude (in dB) corresponding to the softest dynamic
dynamicCurveMaxdb | 0 | between -160 - 0 | The amplitude (in dB) corresponding to the loudest dynamic
dynamicCurveDynamics | ppp pp p mp mf f ff fff | str | Possible dynamic steps. A string with all dynamic steps, sorted from softest to loudest
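
Since the config is a dict subclass, individual keys (named as in the table above) can be read with normal dict access; to derive a modified copy without touching the active config, the clone method shown earlier can be used. A short sketch:

config = getConfig()
print(config['play.numChannels'])     # read a single key
print(config['quant.complexity'])

# A modified copy; the active config itself is left untouched
quietConfig = config.clone({'play.gain': 0.5, 'show.pngResolution': 200})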

Environment

Some aspects of the environment can be queried through the Workspace

  • recordPath(): returns the path where recordings are placed whenever the user does not give an absolute path

  • presetsPath(): presets created via defPreset are saved in this path and loaded in future sessions.

[20]:
w.recordPath()
[20]:
'/home/em/.local/share/maelzel/recordings'
[21]:
w.presetsPath()
[21]:
'/home/em/.local/share/maelzel/core/presets'

Dynamics

Mapping dynamic expressions to amplitudes

The dynamic curve within the active Workspace is used to map dynamics to amplitudes for playback, or to transcribe amplitudes as dynamics.

[11]:
w.dynamicCurve.plot()
[image: ../_images/notebooks_maelzel-core-workspace_14_0.png]
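
The curve can also be used to convert between dynamics and amplitudes directly. A small sketch, assuming the DynamicCurve object exposes dyn2amp / amp2dyn methods for these conversions:

dyncurve = getWorkspace().dynamicCurve
amp = dyncurve.dyn2amp('ff')        # dynamic -> amplitude in the range 0-1 (assumed method)
print(amp)
print(dyncurve.amp2dyn(amp))        # amplitude -> nearest dynamic (assumed method)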

Testing dynamics

Luciano Berio, “O King”

[image: score excerpt]

[6]:
# Reset any active scorestruct to the default
setScoreStruct()

events = [
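    # Each event is written as "pitch:duration[:dynamic]", durations in quarternotes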
    "4F:4:ff",
    "4A:2.5:pp",
    "4F:0",    # dur=0 indicates a grace note
    "4A:1:pp",
    "4B:3",
    "5C#:3",
    "4F:3",
    "4A:2:ff",
    "4F:0:pp",
    "4A:1.5:pp",
    "4Ab:1.5",
    "4Bb:1",
    "5D:.5",
    "5C#:2",
    "4B:1.5:ff",
    "4F:2.5:pp"
]
voice = Chain(events)
voice

[6]:
Chain([4F:4♩, 4A:2.5♩, 4F, 4A:1♩, 4B:3♩, 5C#:3♩, 4F:3♩, 4A:2♩, 4F, 4A:1.5♩, …], dur=29)

Set the score structure to match the original. Either the .scorestruct attribute can be modified directly or the function setScoreStruct can be used.

[7]:
w = getWorkspace()
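# Each line of the score structure below defines a measure as "timesig, tempo, label"
# (unchanged fields can be left empty); a line with only "." repeats the previous measure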
w.scorestruct = ScoreStruct('''
  4/4, 60
  2/4
  3/8
  3/4
  .
  .
  2/4
  3/8
  3/4,,A
  2/4
  .
  3/4
  4/4
  3/4
  4/4
  3/4
  2/4
  4/4
  .
  2/4,,B
''')
voice.show()
[image: ../_images/notebooks_maelzel-core-workspace_19_0.png]

Play/record with the piano instr (with sustain pedal)

[9]:
# voice.play(instr='piano', sustain=8, gain=2)

r = voice.rec("tmp/oking.ogg", instr='piano', sustain=8, nchnls=1, gain=2)
r
[9]:
OfflineRenderer(outfile="/home/em/dev/python/maelzel/docs/notebooks/tmp/oking.ogg", 1 channels, 34.50 secs, 44100 Hz)

A dynamic curve with less contrast

[10]:
dyncurve = workspace.DynamicCurve.fromdescr(shape='expon(0.25)', mindb=-40)
dyncurve.plot()
[image: ../_images/notebooks_maelzel-core-workspace_23_0.png]
[13]:
with Workspace(dynamicCurve=dyncurve):
    voice.rec(instr='piano', sustain=8, gain=2, nchnls=1).show()
OfflineRenderer(outfile="/home/em/.local/share/maelzel/recordings/rec-2023-03-27T20:45:16.814.wav", 1 channels, 37.00 secs, 44100 Hz)