Understanding Morphing
This table shows the dimensionality of mesh vertices in the different editing perspectives:
| Curve | Time | Key | Mod* | Amp | Phase | Curve |
|---|---|---|---|---|---|---|
| Time Surface | x | x | x | x | x | x |
| Spectrum Filter | x | x | x | x | x | x |
| Envelope | | x | x | x | x | x |
| Waveshaper | | | | x | x | x |
| Impulse | | | | x | x | x |
| Deformer | | | | x | x | x |

Time, key, and mod are the polar dimensions (morphing); amp, phase, and curve are the spatial dimensions.
* The modulation dimension can be mapped to different control inputs,
e.g. mod wheel, key velocity, or aftertouch.
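To make the table concrete, here is a minimal Python sketch of what a mesh vertex might carry in each perspective. The names, and the use of None for absent components, are illustrative assumptions, not the actual implementation's data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vertex:
    """Hypothetical mesh vertex. The polar components position the
    vertex in morph space; the spatial components position it on the
    2D curve itself. Components a perspective lacks are left None."""
    # Polar (morphing) dimensions
    time: Optional[float]
    key: Optional[float]
    mod: Optional[float]
    # Spatial dimensions
    amp: float    # y position
    phase: float  # x position
    curve: float  # curvature/sharpness of the segment at this vertex

# A Time Surface vertex uses all six components; a Waveshaper vertex
# uses only the three spatial ones:
ts_vertex = Vertex(time=0.2, key=0.5, mod=0.0, amp=0.7, phase=0.25, curve=0.1)
ws_vertex = Vertex(time=None, key=None, mod=None, amp=0.7, phase=0.25, curve=0.1)
```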
In the Time Surface and Spectrum Filter, meshes have three polar dimensions: time, key, and mod. The structure that defines an intercept path has a vertex at each corner of the cube created by these three dimensions. We call this a vertex cube.
The two time-line intercepts that form the ends of an intercept path are each a weighted average, based on the current key and mod position, of the 4 cube vertices at the corresponding time pole.
The intercept that contributes to a waveshape or spectrum curve is a weighted average, based on the current time, of these two line intercepts. So, in total, intercepts are a weighted average of 8 vertices.
When we talk about 'time' in this context, we mean the time elapsed since the start of a note. As each cycle of that voice is rendered, the time advances slightly, and so the waveshape modulates.
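Concretely, this two-stage weighting is trilinear interpolation: a bilinear average over key and mod at each time pole, then a linear blend over time. Here is a minimal Python sketch; the names (lerp, bilerp, intercept) are hypothetical and the code handles one vertex component at a time, so it is an illustration rather than the actual implementation:

```python
def lerp(a, b, t):
    """Linear blend of a and b; t in [0, 1]."""
    return a + (b - a) * t

def bilerp(v00, v01, v10, v11, key, mod):
    """Weighted average of the 4 cube vertices at one time pole.
    First index is the key pole, second is the mod pole."""
    return lerp(lerp(v00, v01, mod), lerp(v10, v11, mod), key)

def intercept(cube, time, key, mod):
    """cube[t][k][m]: one component (e.g. amp) of the 8 corners of
    the vertex cube; time, key, and mod are normalized to [0, 1]."""
    start = bilerp(cube[0][0][0], cube[0][0][1],
                   cube[0][1][0], cube[0][1][1], key, mod)  # time pole 0
    end   = bilerp(cube[1][0][0], cube[1][0][1],
                   cube[1][1][0], cube[1][1][1], key, mod)  # time pole 1
    return lerp(start, end, time)  # blend the two line intercepts

# As each cycle of a voice is rendered, time advances slightly,
# so the evaluated intercept (and hence the waveshape) drifts:
cube = [[[0.0, 0.1], [0.2, 0.3]], [[0.5, 0.6], [0.7, 0.8]]]
for cycle in range(4):
    print(intercept(cube, time=cycle * 0.01, key=0.5, mod=0.25))
```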
In the Envelope editor, meshes have two polar dimensions: key and mod, so intercepts are a weighted average of 4 vertices.
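The envelope case is the same reduction with one fewer dimension. A sketch under the same assumptions (hypothetical names, one vertex component at a time):

```python
def envelope_intercept(square, key, mod):
    """square[k][m]: one component of the 4 envelope-mesh vertices.
    With only key and mod as polar inputs, the intercept is a single
    bilinear weighted average of those 4 vertices."""
    def lerp(a, b, t):
        return a + (b - a) * t
    low = lerp(square[0][0], square[0][1], mod)   # key pole 0
    high = lerp(square[1][0], square[1][1], mod)  # key pole 1
    return lerp(low, high, key)
```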
To understand how a higher-dimensional mesh is reduced, consider this process:
For each polar input, we chop the higher-dimensional form along that polar dimension and take a cross-section. Successive chops reduce the form by one dimension per iteration until we are left with just the 2D waveshape, envelope, or spectrum curve.
Or, working backwards, we can think of a waveshape, for example, as a cross-section at time t of a cross-section at key k of a cross-section at mod value m of the 5-dimensional waveform surface.
The mesh itself does not morph at the time of playback; where we cut these cross-sections is what varies.
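Read as code, the chopping is a fold: each polar input slices away one dimension of a nested form. A generic Python sketch (hypothetical names again; the actual implementation's data structures are surely richer):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def blend(lo, hi, p):
    """Elementwise linear blend of two equally-shaped nested forms."""
    if isinstance(lo, list):
        return [blend(a, b, p) for a, b in zip(lo, hi)]
    return lerp(lo, hi, p)

def cross_sections(form, polar_inputs):
    """Take successive cross-sections of a nested form, one per polar
    input; each slice removes one dimension."""
    for p in polar_inputs:
        form = blend(form[0], form[1], p)
    return form

# A 2x2x2 vertex cube indexed [mod][key][time] reduces to one value:
cube = [[[0.0, 0.1], [0.2, 0.3]], [[0.5, 0.6], [0.7, 0.8]]]
print(cross_sections(cube, (0.25, 0.5, 0.01)))  # slice at m, then k, then t
```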
Discussion
Perhaps you're thinking, um... five dimensions? Sounds complicated.
It's a valid point; unnecessary complexity is never good. So we have to consider how sounds usually vary.
Let's think about a trumpet: its tone has a certain quality in lower notes and a different quality in higher notes, irrespective of the note intensity. So we have 1 dimension in which the sound varies: key scale.
Note intensity changes the quality of the sound too; there's another.
The timbre also varies with time, especially at the attack and release of the voice.
That makes three varying dimensions, and typically three are enough to capture the tonal variation of an instrument.
These variables (key scale, intensity, and time) can vary independently. There's no single continuum onto which we can map all of the timbral variations we want to replicate; this is why separate dimensions are required.