CAACs in Blue
Author: Menno Knevel
Blue is my main composition program. Blue is very good at incorporating third-party algorithmic score programs. I like (and sometimes need!) the feedback I get from Computer-Aided Algorithmic Composition programs (CAAC). It triggers ideas like a sparring partner does. So, when using a CAAC, I have to trust my ears and taste to evaluate the auditory feedback of the generated score list. Empirically, I gently manipulate the code of the CAAC in order to understand its effects on the score list. This is the way I like to choose the right generated score as basis material. Not very scientific, but intuitive and clearly driven by my passion for sound.
I have made a serious attempt to master a programming language like Python, but somehow I lost the connection with sound and became unhappy and obstinate. It was too big a detour from the real goal: score production, creating musical events and ideas. Conclusion: I'd better stick to some CAAC programs that already exist and invest my time mastering these. These programs were developed with Csound and/or MIDI data generation in mind, so the musical structuring is already there.
One of the biggest advantages of Blue is the possibility to integrate text-based CAACs. Before, when I was working with programs like AthenaCL, I had the program running in a Python interpreter console; I had to copy the data it produced and paste it back into a Generic Score Object in Blue. This was not a very intuitive way of working. Today I invoke AthenaCL from within Blue (in an External SoundObject). If I am not entirely happy with its sounding result, I change a bit of the AthenaCL code and try again – and again if necessary – until I am satisfied with the result. At a later stage in the composition work I can easily adjust this code again to match the rest of the composition.
There are several CAACs and they all seem to have their own strengths. AthenaCL, for example, can produce weird and interesting structures, while CMask – even more so in the form of JMask in Blue – is pretty transparent and can easily produce material for granulation and create movements of events in time. nGen is yet another CAAC that I have always wanted to explore; for the sake of this article I have done so. It seems to me to be able to produce great rhythmic patterns in a musically sensible way.
Common Music 3 was on my list too, but I could not get it to work on a 64-bit Linux machine, so I had to let it go. Maybe sometime in the future I will investigate CM2, which is text-based. Perhaps the problem for me will be that CM is in fact Lisp – like Python, yet another language to learn, which is not very appealing to me...
Last but not least, there are some Note Processors already available in Blue. Some of them are Python- or Clojure-based, which means that they are not accessible to me as a non-programmer. But there are other Note Processors in Blue; they are post-processors that can manipulate the generated score even further, so it is easy to adapt the final result at a later stage in the composition work. It is also possible to generate several instances of the same basic material (Object), create small variations on them, then rescale them, and so on.
In order to get an accurate translation of the events generated by AthenaCL, CMask or nGen as standalone programs to those programs used inside Blue as External SoundObjects/JMask, you should set "scaling" in the SoundObject Properties to "None". This provides a 1:1 copy of the events.
In a recent piece of mine I was using AthenaCL in an External SoundObject. AthenaCL is a program that prints to screen (stdout), so the command to get the score back into Blue is:
python /to/path/blue/examples/soundObjects/athenaPipe.py $infile
In the piece, the name of a fish species – a “sprot” in Dutch – is transformed and manipulated. I liked that the word is very "staccato", so I used it several times in this composition.
The way to implement AthenaCL in Blue:
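A command file for the External SoundObject could look something like this sketch. Only "tmo if" is taken from the piece discussed here; the other command names are my assumptions about typical AthenaCL usage (emo selects the EventMode, tin creates a Texture instance, eln generates the event list), so treat this as an illustration, not a recipe:

```
emo cn      # EventMode: csoundNative (assumption)
tmo if      # TextureModule: InterpolationFill
tin a 6     # a new Texture instance "a" on instrument 6 (hypothetical)
eln         # generate the event list
```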
One possible sounding result of this ExternalObject can be AthenaCL_IF01.ogg.
and on a second run: AthenaCL_IF02.ogg
You can clearly hear that the results differ but share the same musical mood. One characteristic of AthenaCL is that it is not possible to set a global seed, so every time you run it, the result will be slightly different. The Texture (as it is called in AthenaCL), however, is the same. The only way to keep a sequence is to freeze the External SoundObject (right-click the SoundObject and choose the Freeze/UnFreeze SoundObject option) – and hope you like it when auditioning. If not, unfreeze and try again, until you are satisfied and keep just that one.
A quite different musical Texture, or mood, will emerge when you change the second line, "tmo if", to "tmo lg":
and the result: AthenaCL_LG.ogg
In AthenaCL there are a dozen TextureModules (like InterpolationFill and LineGroove) and they all have a big impact on the result.
AthenaCL is a master at generating crazy and unexpected structures – it is nearly impossible to generate them otherwise (unless your name is Frank Zappa) – and it can be a good partner for inspiration.
AthenaCL has other options, like making clones or viewing the result as an image. These options cannot be used inside Blue.
Pro:
- the unexpected factor; you never can predict exactly what kind of structure will be produced
- can be used in Blue (with some effort)
- using athenaPipe.py in Blue allows you to set comments (everything after # is ignored)
Con:
- there is no seed, so on every run you get different results. The composition will be different every time – is the piece never finished?!
- Python-based, so it takes some time to compute the results, as Python is not fast
- lots of options; it's not a program that I could master in a day. It took me a week.
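The comment handling mentioned above is easy to picture: before the commands are handed over to AthenaCL, everything after a # is stripped. A minimal sketch in Python (the function is mine, not the actual code of athenaPipe.py):

```python
def strip_comments(text):
    """Drop everything after '#' and skip lines that end up empty."""
    commands = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].rstrip()
        if line:
            commands.append(line)
    return commands

print(strip_comments("tmo if  # InterpolationFill\n# a full-line comment\neln"))
# → ['tmo if', 'eln']
```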
From the Blue manual: “JMask is GUI score generating soundObject based on André Bartetzki's CMask. JMask currently supports all features of CMask except field precision.”
The JMask Object is pretty straightforward, mostly because CMask itself is pretty basic. It is logical to have it as a GUI, whereas certainly AthenaCL would lose its transparency in the form of a GUI. This might also be the case for nGen, the CAAC I will discuss later.
In a GUI like JMask's you choose from a lot of options, but after having chosen one, new options for that option appear... In my opinion such a CAAC program has much better readability as text than as a GUI. In JMask, the GUI works very intuitively for quickly modeling events, but I find that the text-based CMask forces you to think more clearly about how exactly you want to coordinate your event streams. Of course, this would mean that you have to install CMask on your machine as well. It's great to have both JMask and CMask available inside Blue.
and the result on a sample that consists of several beeps: JMask.ogg
It is easy and fun tweaking the lines and trying out the different parameters of CMask in the JMask GUI. What You See Is What You Get, but you have to understand that the window presentation of JMask covers the whole length of the JMask Object, whether you have rescaled it or not. Break points you enter are relative in timing to that rescaled Object. If you need real precision in the timing of your break points, you'd better use the text-based environment of CMask and address it as an External SoundObject, as it is not possible to give exact values in JMask.
Regarding the CMask grammar, I sometimes forget the difference between the results of [ ] and ( ). Here are examples explaining this difference: "xxx mask [.2 5] [.4 5.5] prec 1 xxx" and "xxx mask (0 10 10 0 15 8) 10 xxx" – where the mask function takes (lower boundary) (higher boundary) pairs.
The first example, using "xxx mask [.2 5] [.4 5.5] xxx" in p3,
produces these values:
Translation to JMask for this first example is:
and the second CMask file, using "xxx mask (0 10 10 0 15 8) 10 xxx" in p3:
produces this:
Translation to JMask for the second example would be:
This shows the difference in use of [ ] and ( ) and their translation to JMask. With ( ), break points can be set at explicit times in seconds, while [ ] just sets the lower and higher boundary values.
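To make the two notations concrete, here is a small Python sketch of how the two kinds of boundaries behave, under the assumption (suggested by the translations above) that [a b] ramps linearly from a to b over the length of the field, while (t0 v0 t1 v1 ...) is a piecewise-linear function with break points at explicit times in seconds:

```python
def bracket_bound(a, b, t, dur):
    """[a b]: linear ramp from a at t=0 to b at t=dur."""
    return a + (b - a) * t / dur

def paren_bound(points, t):
    """(t0 v0 t1 v1 ...): break points at explicit times, in seconds."""
    pts = list(zip(points[::2], points[1::2]))
    if t <= pts[0][0]:
        return pts[0][1]
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return pts[-1][1]

# lower boundary [.2 5] of the first example, halfway through a 10-second field:
print(bracket_bound(0.2, 5, 5, 10))            # → 2.6
# boundary (0 10 10 0 15 8) of the second example, at t = 5 seconds:
print(paren_bound([0, 10, 10, 0, 15, 8], 5))   # → 5.0
```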
Some remarks on JMask:
- you can double-click after a p-field and enter a description.
- from version 2.6.0 on, there is an option to give the randomly produced values a seed – CMask does not provide this fantastic feature.
- something else to keep in mind when working with the Tendency Masks in JMask: the upper and lower limits of the Mask are represented by two different Table GUIs, one for the High Value and one for the Low Value. In Blue it does not matter if your upper limit is (accidentally) lower than your lower limit; it will still produce correct values.
- by pushing a p-field up or down, you change the p-field number without losing the information.
- there is no precision factor in JMask (in CMask there is), but you can get a precision factor by using the "round" opcode in the instrument itself OR by using the Quantizer option. Imagine a frequency (440.6396386) produced by JMask. Now in the instrument:
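A sketch of what this could look like in the instrument (the instrument number and signal chain are hypothetical; Csound's round() rounds to the nearest integer, so multiplying and dividing by 10 keeps one decimal):

```csound
instr 1
  ; p4 = 440.6396386, as generated by JMask
  ifreq = round(p4 * 10) / 10    ; one decimal of precision: 440.6
  asig  oscili 0.3, ifreq
        out asig
endin
```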
OR you could use the quantization option and force a precision factor, for example as shown here for p2:
On the left you see the settings of the quantization, on the right the result.
Pro:
- it is easy to generate clouds
- available as a GUI in Blue (JMask)
- is pretty fast
- you can give a seed value in JMask, so you can keep the values you like
- took me a rainy afternoon to understand the program
Con:
- no seed available in CMask, so no exact repetitions possible
- perhaps too simplistic / limited
According to the creator of the program, Mikel Kuehn: "nGen is similar to Alexander Brinkman's Score11 (available from the Eastman Computer Music Center) and Andre Bartetzki's CMask." Score11 is not freely available and will not be discussed here (I just don't have it).
nGen is copyrighted by Mikel Kuehn and is available for Intel machines running 32-bit and 64-bit Linux, OSX, and Windows 7+ (untested on XP), but it can be used for free. Since nGen is not open source, it is not possible to create an nGen Object in Blue like the JMask Object and have it distributed with Blue.
The $outfile will be the name of a temporary file that nGen writes its score data into. After the program finishes, Blue opens that temp file, reads in the score data and then removes the temp file. When nGen runs as a standalone program, ftables are read back in, but in Blue it does not work that way: you have to manually copy the ftables to the Tables section and remove the '>' there. The External SoundObject in Blue is designed to parse only i-statements; all other statements are ignored.
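For example (the ftable itself is hypothetical), nGen might write a line like

```
>f1 0 8192 10 1
```

into its output; in Blue you would paste it into the Tables section by hand as

```
f1 0 8192 10 1
```

with the '>' removed.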
Please keep in mind when working with Blue and nGen that it does not matter whether you choose 'beats' or 'events' – the quantity of generated lines is the same, as the SoundObject forces the starting times to be scaled.
A big advantage of nGen is the seed – in the example, rs(113). You can give a seed between 0 and 65536, and on every run you get the same random value sequence. This gives you full control over your sequences. It is even possible to re-seed for every p-field. Here is the result of the example that uses seed 113: nGen_seed113.ogg – and here is another example using the same material but now with seed 10: nGen_seed10.ogg
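The principle is the same as seeding any pseudo-random generator: the same seed always reproduces the same sequence, while a different seed gives different material. In Python terms:

```python
import random

a = random.Random(113)   # same seed...
b = random.Random(113)   # ...twice
c = random.Random(10)    # a different seed

seq_a = [a.uniform(0, 1) for _ in range(4)]
seq_b = [b.uniform(0, 1) for _ in range(4)]
seq_c = [c.uniform(0, 1) for _ in range(4)]

print(seq_a == seq_b)  # True: identical sequences on every run
print(seq_a == seq_c)  # False: different material
```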
It is easy to specify a progression in time: p2 in the example shows a rhythmic pattern that goes from a whole beat to a random choice between the 8th and 16th beat. The possibility to use note notation in p5 allows you to create chords: it creates values with the same start and end time when you write ":" between the notes.
In nGen, p-fields are calculated one at a time, in the order they are read, from top to bottom. This makes it possible to pass information from one p-field to another, as you can read in the example above regarding p5 and p3. The text from Mikel Kuehn explains why p5 is evaluated first.
Pro:
- can produce clear rhythmic patterns
- is pretty fast
- learning it can be done in a day
- has a seed, so controllable in produced values
Con:
- typos happen easily / the code is strict
- code is not open source (yet)
For sure, I do like its speed, and speed is an important thing if you are working intuitively when composing.
Objects in Blue can be seen as basic material. It is great to be able to make variations on an Object using the Note Processors. Again, a quote from the Blue manual: "NoteProcessors are applied after the notes of a SoundObject, Layer, LayerGroup, or Score are generated and before time behavior is applied. Processing starts with the first NoteProcessor in the chain and the results of that are passed down the chain." The TEST button in the SoundObject Editor will reveal the result after it has been modified by a Note Processor.
Here is an example of the Note Processor, applied to the JMask example above (see JMask/CMask section).
p4 is multiplied by 0.2 – p4 stands for the grain pointer in the instrument. This produces a nice variation on the JMask.ogg from above: it follows the movements of all the p-fields, but the grain pointer time is 5 times shorter: JMask_Note_Processor.ogg
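The effect of such a multiply Note Processor is easy to approximate in code: every i-statement gets one p-field scaled by a constant factor. A Python sketch of the idea (this is not Blue's actual implementation):

```python
def multiply_pfield(score, pfield, factor):
    """Multiply one p-field of every i-statement in a score string."""
    out = []
    for line in score.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("i"):
            parts[pfield - 1] = str(float(parts[pfield - 1]) * factor)
            line = " ".join(parts)
        out.append(line)
    return "\n".join(out)

score = "i1 0 2 1.0\ni1 2 2 2.5"
print(multiply_pfield(score, 4, 0.2))
# i1 0 2 0.2
# i1 2 2 0.5
```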
Pro:
- they are a fundamental part of Blue, so easy to apply
- well documented
- the random sections of the Note Processors have a seed option
Con:
- none that I can think of, so this is in fact a pro and not a con...
Date: December 2014, updated October 2021
Downloads:
CMask (Linux, Windows and old Mac) - http://www.bartetzki.de/en/software.html Author: André Bartetzki
CMask (OSX) - http://www.anthonykozar.net/ports/cmask/ ported to OSX by Anthony Kozar
nGen - http://mikelkuehn.com/index.php/ng Author: Mikel Kuehn
AthenaCL - https://github.com/ales-tsurko/athenaCL Author: Christopher Ariza, ported to Python3 by Ales Tsurko