Composing in Blue
under construction
For years I’ve made music using tape recorders, in particular using tape loops. A tape loop is a piece of recording tape of about 20 cm, glued to itself so it becomes an “endless” loop. Short sounds could be recorded on this loop, at different tape speeds. We played this tape for about 10 minutes – not longer, because of the wear and the sound degradation – and then planned our musical strategy.
Concentrating and hallucinating on this short collection of sounds, we often found that, in fact, the total composition was already “hidden” in those repeating few seconds. Now it was a matter of extracting the complete composition from this dense structure and creating a piece of, let’s say, 3 or 10 minutes. We found that there were two main possibilities: either it would become a Dance (rhythmic piece) or an Atmosphere (allowing for longer, evolving structures).
Later, using the Atari ST and MIDI, we missed the adventure of exploring the sound, as all MIDI sounds were ready-mades. And the MIDI controller data could not mold the sound in a satisfactory way: MIDI has its limits and is made for musicians who want to make songs! We, on the other hand, were looking for a sound structure with a Start, a Middle section and an End; the composition that takes you by the hand and takes you places!
Later still, the computer allowed treatment of samples and of our own recorded material. If you were lucky, and after defragmenting the hard drive several times, we could actually run four recorded sounds in parallel!! Running realtime effects on them was too much for the CPU.
Also, in Cubase, we found ourselves copying and pasting the same samples of sound, just like we did on the tape recorder! It was difficult NOT to put in a drum line and get it over with... how to get back to the creative process? So, here again, we found new limits... it felt like we ourselves were the limits.
Looking for freedom and a way to surpass my own limits, I found Csound. The flexibility of just a few words of text; no more fiddling around with cumbersome MIDI SysEx messages trying to change the attack of a sound while playing. Instrument, effect and score are all influencing each other all the time. When I thought I had built a decent Csound instrument, it was the effect that made it necessary to re-model that instrument all over again. After the rework on the instrument, I found I had to alter the effect a bit. The same goes for the note events from the Score: a higher-sounding note can have an entirely different impact than the same sound in the mid or low register. A second instrument that is interesting in conjunction with the first requires re-modelling this first instrument and effect once again. One sound is great, but as in human relations, the composition (= sound structures in time) needs a partner or a rival sound in order to come alive and stay evolving and interesting.
This is where Blue comes in, as it is an integrated composition program; unlike with VSTs, I can go back and forth between Instrument, Effects and Score and make adjustments. I use the JMask Object to quickly find out what kind of timbres are hidden in the instrument/effects. It is still possible to quickly tweak or rewrite instruments along the way! The PianoRoll Object is mostly set to a scale division of 48 notes per octave, so small timbral changes can be made by just moving a note up or down by a 1/48-octave step. This is how I mold the clay, how I fill the empty canvas.
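As a side note on that step size: an octave is 1200 cents, so one 1/48-octave step is 1200/48 = 25 cents, a frequency ratio of 2^(1/48) ≈ 1.0145. A shift that small tends to read as a change of color or beating rather than as a new melodic pitch.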
In Blue, several options are available for transforming the note events in the Objects of the Score: Note Processors can transform the parameters (Pfields) of an Object, and JMask and other score generators are at your fingertips.
The option to create and use Automation Lines (ALs) is a unique feature of Blue in that respect; it is not easy to replicate this behavior using the Csound program alone. The ALs remind me of the MIDI controller lanes of a sequencer like Cubase or Pro Tools.
The Blue Synth Builder (BSB) gives the Csound-coded instrument a VST-like look, and every knob or slider can produce an Automation Line (AL); an AL can be drawn, and breakpoints can be set, copied/pasted, moved and deleted, a process just like the one in the DAW I was used to. But there is a difference: the ALs in Blue cannot be controlled by MIDI or OSC.
In my earlier compositional work I found it nice to be able to control the MIDI controller lanes in realtime using a hardware MIDI controller: a big box with a lot of knobs. The DAW records the changes of the knobs. It was an easy and intuitive way to “play around” with the timbre of the sound; a nice way to get to know the sound and make it your friend. But always, after having gotten acquainted with the sound, I went back to editing controller values by hand. Exploring the sounds by manipulating them turned out to be “just” the first phase of building a composition.
Back to Blue. And what it adds to Csound.
With Csound it is possible to run the same instrument as fractional instruments: instrument 1 can have a lot of brothers and sisters that exist as fractional numbers, like 1.1, 1.2, 1.3 and so on. This allows the note events and parameters of the score to, in a way, lead their own life. This feature is often used to run an instrument with tied notes, as you can read in an article on the subject by Steven Yi.
But applying and changing an AL of this instrument 1 modifies that parameter for all the fractional instruments 1.1, 1.2 etc. alike. Fractional instruments are not that independent from one another. In fact, other than the tied-notes option, there is no real advantage in using fractional instruments; even in terms of saving CPU cycles, they have no advantage over multiple copied instruments.
The only way to have two instances of the same instrument that are totally independent from each other is to copy the instrument and give the copy a new instrument number.
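For readers less familiar with fractional instruments, here is a minimal score sketch (not from the composition discussed here; the pfield layout p4 = frequency, p5 = amplitude is only an assumption) showing two independent voices of one instrument, with voice 1.1 using a tied note:
;inst  start  dur  freq  amp
i 1.1  0      -2   440   0.3   ; negative p3: start a tied (held) note in voice 1.1
i 1.1  2       2   550   0.3   ; same voice: this event continues and ends the tie
i 1.2  0       4   330   0.2   ; voice 1.2 runs independently alongside voice 1.1
An AL drawn for instrument 1, however, would steer 1.1 and 1.2 alike, which is exactly the lack of independence described above.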
The Sound SoundObject is an Object where the idea of a “one-shot”, a single note event, is well represented. For example, a Sound SoundObject that ATS-analyzes a sample with certain fixed parameters (the ATS Creator) only needs to be a “one-shot” action. Another Sound SoundObject, the ATS Player, can play back this ATS-analyzed sound, but only in one way, as one event. The manual says:
The Sound SoundObject allows one to develop solitary sounds by using Csound Orchestra code and graphical user interfaces.
So if you want to hear more note events simultaneously, you have to build a “normal” ATS Player instrument and go from there.
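As an illustration, a minimal sketch of such a “normal” ATS player instrument could look like this (the file name "sample.ats", the instrument number and the choice of 50 partials are placeholders, not taken from the text):
giSine  ftgen  0, 0, 16384, 10, 1                     ; sine table used for the resynthesis
        instr 20                                      ; hypothetical polyphonic ATS player
ktime   line   0, p3, p3                              ; time pointer through the analysis file
kamp    =      p4                                     ; per-note amplitude from the score
asig    ATSadd ktime, 1, "sample.ats", giSine, 50     ; resynthesize the first 50 partials
        outs   asig*kamp, asig*kamp
        endin
Because this is an ordinary instrument rather than a Sound SoundObject, several score events can overlap, each reading the analysis file independently.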
You may want to make use of the JMask Object or the nGen score generator, perhaps in combination with some Note Processors, to allow for variations on a theme; it is wise to give these score generators some pfield material to work on. ALs are a nice way to lay down a visual representation of what will happen to some of the parameters of the instrument, but an AL also limits the Score to one value per parameter: all score events will follow the values of the AL. This means that you can put in only one note, or, if you like to have more notes in the Object, the AL will do its thing for all of these notes alike.
Working in the BSB of an instrument, it happens that I try something out but am puzzled why I do not hear any change in the sound. I have activated or deactivated something in the BSB, but this switch does NOT seem to do anything... why?? No matter how many times I flip the switch: no audible difference! Five minutes later I see in the Score that for this particular switch I have, at some point, created the corresponding AL and drawn some points on it. This AL represents a value or values, and it overrides the corresponding setting in the BSB.
The ALs drawn on the TimeLine take precedence over any settings made in the BSB GUI of the instrument. But the Pfield in the Score Object is boss. The order of priority is:
- the pfield
- the Automation Line
- the setting made in the GUI of the BSB
I was looking for yet another way to get some diversity between the note events inside an Object. An AL can be used to change a parameter of the instrument, but once the AL is drawn it will influence that parameter for all note events at that point in time. With some rework of the instrument, however, it is possible to apply the AL to only a selection of these note events: a note event in the Object gets -1 for a Pfield, and that value acts as a switch between the AL (Pfield = -1) and the literal Pfield value in the Object.
If you want to be able to switch between ALs and the pfields from the score in Blue, here is a UDO (k_Pfield.udo) that should do the trick.
opcode k_Pfield, k, ik
  ipfield, kvalue xin          ; pfield number to inspect & the value from the AL/widget
  if (p(ipfield) == -1) then   ; -1 is the switch number
    kfield = kvalue            ; if -1, follow the values of the Automation Line
  else
    kfield = p(ipfield)        ; any other value: take the pfield value from the score
  endif
  xout kfield
endop
What it does:
- the first input (i-rate) selects which Pfield number from the score the UDO inspects.
- the second input is the k-rate value coming from the BSB widget and its Automation Line. When the selected Pfield is set to -1 in the score, this value is passed through, so the AL is active.
- any other Pfield value bypasses the AL, and the Pfield value from the score is used instead.
Here you can find an example blue file. The example shows how it switches between the AL and the score on a per-note-event basis. I have chosen -1 as the switch value because I rarely use it as a Pfield value, so it is pretty safe; but if you prefer another number, please change the UDO accordingly.
As the AL works at k-rate, the Csound code should reflect this: ipitch = p4 becomes kpitch = p4, and so on. The top of the Csound code in my instrument looks like this:
kpitch     = p4
kamplitude = p5
ienvelope  = p6
;============ frequency input P4 ====================
kpitch     k_Pfield <Pfield_a>, <pitch>      ; p4 = frequency
;============ volume input P5 =======================
kamplitude k_Pfield <Pfield_b>, <amplitude>  ; p5 = amplitude setting
;=====================================================
And the Generic Score Object for this instrument:
;inst start dur   pitch  amp  env
i3    0     2      .6     .6   2
i3    0     1.5    2.5   -1    3
i3    0     1.5   -1      .5   3
i3    .1    1      1.4    .6   4
An Automation Line that replaces a Pfield (switched to -1) shapes that sound parameter over time, while a literal Pfield stays a constant value per note. In the score above, for example, the second note takes its pitch (2.5) from the score but hands its amplitude over to the AL, and the third note does the opposite; this is how the AL allows for variations inside the Object.
Optional Score Module
A blue file that explains it all can be found here.
Where do you start from once you have created an instrument? What are the start settings in the BSB? In other words, what is the zero measurement?
In Blue there is a way to save presets of the instrument. However, it is not possible to switch between these presets like the program change of a MIDI instrument, nor is it likely that such a feature will be implemented any time soon.
My solution, once the instrument is more or less ready to be used in the composition, is to set the values in the BSB so that I like what I hear when running just one note. This is my starting point. This instrument state can be saved as a preset and given a name and/or number, like start1.
Then, working from the Score, with a few ALs it is possible to mold those parameters of the note, but not with more than 4 or 5 ALs per instrument. With more ALs I lose control! Too many options going on!! Also, with so many ALs the SoundLayer becomes too colorful to distinguish one AL from another; the Object loses information and overview.
- sounding examples of Partikkulary – how to play several notes at different pitches to create a full texture?