Csound FLOSS Manual 8.0.0
WELCOME TO CSOUND!
Csound is one of the best known and longest established programs in the field of audio programming. It was first released in 1986 at the Massachusetts Institute of Technology (MIT) by Barry Vercoe. But Csound’s history lies even deeper within the roots of computer music, as it is a direct descendant of the oldest computer program for sound synthesis, MusicN, by Max Mathews. Csound is
free and open source, distributed under the LGPL licence, and it is maintained and expanded by a
core of developers with support from a wider global community.
In the past decade, thanks to the work of Victor Lazzarini, Steven Yi, John ffitch, Hlöðver Sigurðsson, Rory Walsh, and many others, Csound has moved from a somewhat archaic audio programming language to a modern audio library. It can be used not only from the command line and the classic frontends, but also as a VST plugin, inside the Unity game engine, on Android, or on microcomputers like the Raspberry Pi or the Bela board. It can be used via its Application Programming Interface (API) in any other programming language, like Python, C++ or Java. And it can now also be used inside any browser as a JavaScript library, just by loading it as a WebAssembly module (WASM Csound). Try it here, if you already know some Csound:
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr TryMe
//some code here ...
endin
schedule("TryMe",0,-1)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
This textbook cannot cover all these use cases. Its main goal is teaching the language itself: how to think and to program in Csound.
This book is called the Csound FLOSS Manual because it was first released in 2011 at flossmanuals.net. It should not be confused with the Csound Reference Manual which can be found here.
Enjoy reading and coding, and please help to improve this textbook with feedback and suggestions on the GitHub discussions page or elsewhere.
ON THIS RELEASE
After two and a half years, it is again time for a new major release.
Since the first release of this textbook in 2011 many things have changed. With csound.com we now have a good community website. Links and information about different ways to use Csound can go there. We do not need them any more in the Csound FLOSS Manual, and it is better to collect links and community information in only one place.
It is also no longer necessary to provide information about the frontends here. All of them have their own websites and introductions. All we do here is link to them.
But what is really missing, in my opinion, is an up-to-date Getting Started. So I decided to write it, and to let the book begin with it.
It is followed by another new chapter. In this How to … I start to collect brief descriptions which provide instructions and advice for beginners and also for more experienced Csound users. From How to install Csound for Windows to How to use my MIDI keyboard with Csound or How to perform time stretch.
This re-ordering is a big job, and will probably never be finished. This release is something of a mix between old and new:
- The new GETTING STARTED is now chapter 01, but only 10 of the planned 24 tutorials are there.
- The new HOW TO … is now chapter 02, but only a few of the desired descriptions are there yet.
- The BASICS chapter has been moved from chapter 01 to chapter 16.
- The old GET STARTED (formerly chapter 01) has been removed. Its content is covered in the new GETTING STARTED for the introduction and the HOW TO … for the installation guide.
- The old chapters about the frontends are not yet removed, although they are not up to date. Once the HOW TO … descriptions are more extended, they will be removed.
- The same applies to some old content which might be outdated but still useful for some.
- To end with something new in the old: the OSC chapter has been extended by some practical examples of communication between Csound and Processing.
Big thanks go to Hlöðver Sigurðsson, who made this new interactive FLOSS manual possible with all his great and important work on JavaScript-based Csound and much more.
01 Hello Csound
To produce a sine wave, Csound uses an oscillator. An oscillator needs certain inputs in order to run:
1. The amplitude of the oscillation. This results in louder or softer sounds.
2. The number of periods (cycles) per second to create. This results in higher or lower pitches. The unit is Hertz (Hz). 1000 Hz would mean that a sine has 1000 periods in each second.
A Sine Oscillator in Csound: Opcode and Arguments
The inputs of an opcode are called arguments and are written in parentheses immediately after the
opcode name. So poscil(0.2,400) means: The opcode poscil gets two input arguments.
The meaning of the input arguments depends on how the opcode is implemented. For poscil, the first input is the amplitude, and the second input is the frequency. The Csound Reference Manual contains this information. In Tutorial 08 we will give some help on how to use it.
This way of writing code is very common in programming languages, like range(13) in Python, or
printf("no no") in C, or Date.now() in JavaScript (in the latter case with empty parentheses
which means: no input arguments).
Note: There is also another way of writing Csound code. See below if you want to learn more about
it.
We will call this signal aSine because it is an audio signal. The character a at the beginning of the
variable name signifies exactly this.
An audio signal is a signal which produces a new value every sample. (Learn more about samples and the sample rate here.)
aSine = poscil:a(0.2,400)
This means: The signal aSine is created by the opcode poscil at audio rate (:a), and the input for poscil is 0.2 for the amplitude and 400 for the frequency.
To output the signal (so that we can hear it), we put it in the outall opcode. This opcode sends
an audio signal to all available output channels.
outall(aSine)
Note that the signal aSine at first was the output of the oscillator, and then became the input of the outall opcode. This is a typical chain, well known from modular synthesizers: a cable connects the output of one module to the input of the next.
Figure 3.2: Signal flow and Csound code for sine oscillator and output
In the middle you see the signal flow, with symbols for the oscillator and the output. You can
imagine them as modules of a synthesizer, connected by a cable called aSine.
On the left hand side you see the chain between input, opcode and output. Note that the output of
the first chain, contained in the aSine variable, becomes the input of the second chain.
On the right hand side you see the Csound code. Each line of code represents one input -> opcode
-> output chain, in the form output = opcode(input). The line outall(aSine) does not have an
output in Csound, because it sends audio to the hardware (similar to the “dac~” object in PD or
Max).
Your first Csound instrument
After the keyword instr, separated by a space, we assign a number (1, 2, 3, …) or a name to the instrument. Let us call our instrument “Hello”, and include the code which we discussed:
instr Hello
aSine = poscil:a(0.2,400)
outall(aSine)
endin
Example
We are now ready to run the code. All we have to do is put the instrument in a complete Csound
file.
Look at the code in the example. Can you find the instrument code?
Push the “Play” button. You should hear two seconds of a 400 Hz sine tone.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
aSine = poscil:a(0.2,400)
outall(aSine)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
The Csound Score
This is the Score section of the .csd file. It starts with the tag <CsScore> and ends with </CsScore>. Between these two tags is this score line:
i "Hello" 0 2
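The values in this score line follow a fixed order: i marks an instrument event, then come the instrument’s name (or number), its start time in seconds, and its duration in seconds. As a sketch of our own (the second note is not part of the original example), a score could hold several such events:

```csound
i "Hello" 0 2    ; play instrument "Hello" at time 0, for 2 seconds
i "Hello" 3 1    ; a possible second note: start at 3 seconds, play for 1 second
```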
Try it yourself
(You can edit the example just by typing in it.)
Opcodes, Keywords and Terms you have learned in this tutorial
Opcodes
- poscil:a(Amplitude,Frequency) oscillator at audio rate, with amplitude and frequency input.
- outall(aSignal) outputs aSignal to all output channels.
Keywords
- instr ... endin are the keywords to begin and end an instrument definition.
Terms
- An audio rate or a-rate signal is a signal which is updated sample by sample.
Go on now …
with the next tutorial: 02 Hello Frequency.
… or read some more explanations here
From a mathematical point of view it is quite fascinating that we can understand and construct a sine as the constant movement of a point on a circle. This is called simple harmonic motion and is fundamental for many phenomena in the physical world, including sound.
A sine tone is the only sound which represents just one pitch. All other sounds contain two or more pitches.
This means: All other sounds can be understood as the addition of simple sine tones. The sines which are inside a periodic sound, like a sung tone or other natural pitched sounds, are called partials or harmonics.
Although the sounding reality is a bit more complex, this shows that sine waves can somehow be understood as the most elementary sounds, at least in the world of pitches.
You can find more about this subject in the Additive Synthesis chapter and in the Spectral Resynthesis chapter of this book.
You are welcome to continue writing code in this way. The reasons why I use the “functional” way of writing Csound code in these Tutorials are:
1. We are all familiar with this way of declaring a left hand side variable y to be the sum of another variable x plus two: y = x + 2. Or, to yield a y as a function of x: y = f(x). I think it is good to build upon this familiarity. It is hard enough for a musician to learn a programming language without ever having heard about variables, signal flow, input arguments, or parameters. Whatever makes you feel more familiar in this new world is helpful and should be used.
2. As mentioned above, most other programming languages use a similar syntax, in the form output = function(arguments). So for people who already know another programming language, it makes learning Csound easier.
3. The functional style of writing Csound code has always been there in expressions like ampdb(-10) or ftlen(giTable). So it is not completely new, but an extension.
4. Whenever we want to use an expression as an argument (you will learn more about this in Tutorial 06), we need this way of writing code. So it is good to use it as consistently as possible.
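As a small preview of such an expression as argument (a sketch of our own; Tutorial 06 will explain this properly), the output of one opcode can be passed directly into another, without a variable in between:

```csound
instr Hello
  ;; the linseg expression is used directly as the frequency argument of poscil
  outall(poscil:a(0.2, linseg:k(500,0.5,400)))
endin
```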
University of Music, from Yarava Music Group Tehran, and elsewhere, who gave me feedback and contributed in one way or another to this and other texts and contents. Amin and Parham, Marijana and Betty, Tom and Farhad, Ehsan and Vincent, Julio and Arsalan, to name some of them.
Thanks go also to Wolfgang Fohl, Tarmo Johannes, Rory Walsh and others for their comments
and help.
And of course to the Core Developers, whose great work made it possible that we all can use Csound in the way we do. I hope this Tutorial can show some more musicians how admirable and successful the big effort was and is to turn the oldest audio programming language into a modern one, without losing any composition written in the Csound language decades ago.
Each Tutorial has a first part as a must read (well, which must is that?), followed by an optional part (in which, of course, the most interesting things reside). To make at least one thing in the world reliable, each must read consists of five headings, and each can read of three. Actually I planned it to be 4+3, but then I asked Csound, and received this answer:
if 4+3 == 7 then
"Write this Tutorial!\n"
else
8!
endif
So I took this as an oracle and decided on 5+3 headings, to also fulfill the 8! requirement. It is always better to satisfy both gods of a conditional branch, in my experience.
Included in the must read part is a must do. At first a central example, yes, very central, substantial, expedient, enjoyable, and of course very instructive. And then a Try it yourself which is kind of the dark side of the example: As easy as it is to just push the “Run” button, as hard will it perhaps be to solve these damned exercises. But, to quote John ffitch: “We have all been there …”
I guess, no, I have run a long series of tests, no, it has been proven by serious studies held by the most notable and prestigious universities in the most important parts of the world, that the average time needed to go through one Tutorial is one hour. So once I have finished the planned number of 24 Tutorials, the goal of learning Csound in one day is finally reached.
Nevertheless, I must admit that this Getting Started can only be one amongst many. Its focus is on learning the language: how to think and to program in Csound. I believe that for those who understand this, and enjoy Csound’s simplicity and clarity, all doors are open to go anywhere in the endless world of this audio programming language. Be it live coding, Bela board or Raspberry Pi, be it noise music or the most soft and subtle sounds, be it fixed media or the fastest possible real-time application, be it using Csound standalone or as a plugin in your preferred DAW. As for sounds, please have a look, and in particular an ear, at Iain McCurdy’s examples, either on his website or inside Cabbage. They are an inexhaustible source of inspiration and a yardstick for sound quality.
02 Hello Frequency
In natural sounds, the frequency is rarely fixed. Usually, for instance when we speak, the pitch of our voice varies all the time, in a certain range.
The simplest case of such a movement is a line. We need three values to construct a line:
1. The value at which the line starts.
2. The duration of the movement.
3. The value at which the line ends.
This is a line which moves from 500 to 400 in 0.5 seconds, and then stays at 400 for 1.5 seconds:
kFreq = linseg:k(500,0.5,400)
A Line Drawn with the ‘linseg’ Opcode
Note: Acoustically, this way of applying a glissando is questionable. We will discuss this in Tutorial 05.
Note: Should we not say: “This is a line which moves from 500 Hz to 400 Hz in 0.5 seconds”,
rather than: “This is a line which moves from 500 to 400 in 0.5 seconds”? No. The line outputs
numbers. These numbers can be used for frequencies, but in another context they can have a
different meaning.
You will recognize the structure opcode(arguments) which we already used in the first tutorial. Here, the opcode is linseg, and the arguments are 500 as the first value, 0.5 as the duration of the movement to the next value, and 400 as the target value.
But why is the variable on the left side called kFreq, and why is linseg written as linseg:k?
k-rate Signals
A signal is something whose values change in time.
The fundamental time resolution is given by the sample rate. It determines how many audio samples we have in one second. We saw in the first tutorial the signal aSine, whose values change at audio rate: a new value is calculated for every sample.
The second possible time resolution in Csound is less fine grained. It does not calculate a new value for every sample, but only one for each group of samples. This time resolution is called the control rate.
Variables in Csound which have k as their first character use the control rate. Their values are updated at the control rate. This is the reason we write kFreq.
After the opcode, before the parenthesis, we put :k to make clear that this opcode uses the control rate as its time resolution.
Example
Push the “Play” button. You will hear a tone with a falling pitch.
Can you see how the moving line is fed into the oscillator?
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
kFreq = linseg:k(500,0.5,400)
aSine = poscil:a(0.2,kFreq)
outall(aSine)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
Signal flow
When you look at our instrument code, you see that there is a common scheme:
The first line produces a signal and stores it in the variable kFreq. This variable is then used as input in the second line.
The second line produces a signal and stores it in the variable aSine. This variable is then used as input in the third line.
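Using the variable names of our example, this scheme can be sketched like this:

```csound
kFreq = linseg:k(500,0.5,400)  ; line 1: produces the signal kFreq
aSine = poscil:a(0.2,kFreq)    ; line 2: uses kFreq as input, produces aSine
outall(aSine)                  ; line 3: uses aSine as input
```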
Let us now look at the different tags of the .csd file:
1. The <CsOptions> tag. You see here the statement -o dac. This means: The output (-o) shall be written to the digital-to-analog converter (dac); in other words, to the sound card. Because of this, we listen to the result in real time. Otherwise Csound would write a file as the final result of its rendering.
2. The <CsInstruments> tag. Here all instruments are collected. This tag is the place for the
Csound code. It is also called the Csound “Orchestra”.
3. The <CsScore> tag. This we discussed in the previous section.
The <CsoundSynthesizer> tag defines the boundaries for the Csound program you write. In other words: What you write outside of these boundaries will be ignored by Csound.
Try it yourself
- Let the frequency line move upwards instead of downwards.
- Use linseg to create a constant frequency of 400 or 500 Hz.
- Make the time of the glissando ramp longer or shorter.
- Add another segment by appending one more duration value and one more target value.
- Restore the original arguments of linseg. Then replace linseg by line and hear what is different.
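For the fourth exercise, one possible solution (our own sketch) looks like this:

```csound
;; move from 500 to 400 in 0.5 seconds, then from 400 to 600 in 1 second
kFreq = linseg:k(500, 0.5, 400, 1, 600)
```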
Opcodes
- linseg:k(Value1,Duration1,Value2,...) generates segments of straight lines.
Tags
- <CsoundSynthesizer> … </CsoundSynthesizer> starts and ends a Csound file.
- <CsOptions> … </CsOptions> starts and ends the Csound options.
- <CsInstruments> … </CsInstruments> starts and ends the space for defining Csound instruments.
- <CsScore> … </CsScore> starts and ends the Csound score.
Terms
- A control rate or k-rate signal is a signal which is not updated every sample, but once for each group or block of samples.
Go on now …
with the next tutorial: 03 Hello Amplitude.
… or read some more explanations here
When you run this code, you will hear that line has one important difference to linseg: it will not stop at the target value, but continue its movement in the same way as before.
Usually we do not want this, so I would recommend always using linseg, except in some special cases.
Coding conventions
When you press the “Run” button, Csound “reads” the code you have written. Perhaps you already
experienced that you wrote something which results in an error, because it is “illegal”.
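The offending line, reconstructed here from the description that follows, might have looked like this:

```csound
kFreq = linseg:k(500 0.5 400)  ; illegal: arguments separated by spaces instead of commas
```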
What is illegal here? We have separated the three input arguments for linseg not by commas, but by spaces. Csound expects commas, and if there is no comma, it returns a syntax error, and the code cannot be compiled:
error: syntax error, unexpected NUMBER_TOKEN,
expecting ',' or ')' (token "0.5")
This is not a convention, it is syntax we have to accept if we want our code to be compiled and executed by Csound.
But inside this syntax we have many ways to write code in one way or another.
But inside this syntax we have many ways to write code in one way or another.
(1) This is the way I write code here in these tutorials. I put a space left and right of the =, but no space after the comma.
(2) This is possible but perhaps you will agree that it is less clear because the = sign is somehow
hidden.
(3) This is as widely used as (1), or more. I remember when I first read Guido van Rossum’s Python tutorial, in which he recommends writing as in (1), I did not like it at all. It took me twenty years to agree …
(4) This is a common abbreviation which is possible in Csound and some other programming languages: rather than 0.5 you can just write .5. I use it privately, but will not use it here in these tutorials because it is less clear.
(5) You are allowed to use tabs instead of spaces, and any combination of them, with as many tabs or spaces as you like. But usually we do not want a line to be longer than necessary.
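The five variants described above are reconstructions here, since the original listing is not shown; they might look like this:

```csound
kFreq = linseg:k(500,0.5,400)    ; (1) spaces around '=', no space after commas
kFreq=linseg:k(500,0.5,400)      ; (2) no spaces at all: the '=' is somewhat hidden
kFreq = linseg:k(500, 0.5, 400)  ; (3) spaces after the commas as well
kFreq = linseg:k(500,.5,400)     ; (4) '.5' as an abbreviation of '0.5'
kFreq  =  linseg:k(500,0.5,400)  ; (5) any mixture of tabs and spaces is allowed
```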
Another convention is to write the keywords instr and endin at the beginning of the line, and then the code indented by two spaces:
instr Hello
kFreq = linseg:k(500,0.5,400)
aSine = poscil:a(0.2,kFreq)
outall(aSine)
endin
The reason for this convention is again to format the code in favor of maximum clarity. In the Csound Book we used one space, but I think two spaces are even better.
To summarize: You have a lot of different options when writing Csound code. You can do what you like, but it is wise to accept some conventions which serve maximum comprehension. The goal is readable and transparent code.
As a simple piece of advice:
03 Hello Amplitude
kAmp = linseg:k(0.3,0.5,0.1)
Note: Acoustically, this way of changing volume is questionable. We will discuss this in Tutorial 06.
Back to the Head
We see four constants which should be written explicitly in every Csound file.
They are called constants because you cannot change them during one run of Csound. Once you start Csound (by pushing the “Play” button) and Csound compiles, these values remain as they are for this run.
sr
sr means sample rate. This is the number of audio samples in one second. Common values are
44100 (the CD standard), 48000 (video standard), or higher values like 96000 or 192000.
As the sample rate is measured per second, it is often expressed in Hertz (Hz) or kilohertz (kHz).
You should choose the sample rate which fits your needs. When you produce audio for an MPEG video which needs 48 kHz, you should set:
sr = 48000
When your sound card runs at 44100 Hz, you should set your sr to the same value; or vice versa, set your sound card to the value you have as sr in Csound.
ksmps
ksmps means: Number of samples in one control period.
As you learned in Tutorial 02, the k-rate is based on the sample rate. A group or block of samples is collected in a package. Imagine yourself standing beside a conveyor belt. Little statues arrive on the belt at a regular time interval, say once a second. Rather than taking each statue and throwing it to a colleague, you collect 64 of the statues in one package. You throw the package once the 64 statues are complete. So you will throw one package every 64 seconds.
This “time interval to throw a package” (as packages per second) is the control rate or k-rate. The
time interval in which the little statues arrive one by one is the audio-rate or a-rate. In our example,
the k-rate is 64 times slower than the a-rate.
The ksmps constant defines how many audio samples are collected for one k-rate package. What
we called “package” here, is in technical terms called “block” or “vector”. So ksmps is exactly the
same as “block size” in PureData or “vector size” in Max.
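In numbers: the control rate (called kr in Csound) follows from the two constants as kr = sr / ksmps. With the header values used in these tutorials this gives:

```csound
sr = 44100   ; 44100 audio samples per second
ksmps = 64   ; 64 samples per control block
;; resulting control rate: kr = sr / ksmps = 44100 / 64, about 689 control values per second
```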
nchnls
nchnls means number of channels.
When you have a stereo sound card, you will set nchnls = 2.
When you use eight speakers, you must set nchnls = 8. You will need a sound card with at least eight channels for the real-time output.
Note: Csound assumes that you have the same number of input channels as you have output
channels. If this is not the case, you must use the nchnls_i constant to set the number of input
channels. For instance, in case you have 8 output channels but only 4 input channels on your audio
interface, set:
nchnls = 8
nchnls_i = 4
0dbfs
0dbfs means: What is the amplitude which represents 0 dB full scale.
We should always set this number to 1, as this is what all audio applications reasonably do. 0 is
minimum amplitude (meaning “silence”), and 1 is maximum amplitude.
Always set
0dbfs = 1
(We will explain more in Tutorial 05 about 0 dB which is part of this constant.)
The default value for sr is 44100. This is acceptable, but it is better to set it explicitly (and perhaps change it later if necessary).
The default value for ksmps is 10. This is not good, because it is not a power of two. (See below why powers of two are preferred.)
The default value for nchnls is 1. This is not good, because usually we want at least stereo.
The default value for 0dbfs is 32767. This is not good at all. Please set it to 1.
Example
Look at how both the kAmp and the kFreq line are created by the linseg opcode.
These two lines are fed via their variable names into the poscil oscillator.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
kAmp = linseg:k(0.3,0.5,0.1)
kFreq = linseg:k(500,0.5,400)
aSine = poscil:a(kAmp,kFreq)
outall(aSine)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
Try it yourself
- Change the duration in the kAmp signal from 0.5 to 1 or 2. The frequency and the amplitude line now move independently from each other.
- Change the values of the kAmp signal so that you get an increasing rather than a decreasing amplitude.
- Change 0dbfs to 2. You should hear that it sounds softer now because the “full scale” level is twice as high.
- Change 0dbfs to 0.5. You should hear that it sounds louder now because the “full scale” level is set to 0.5 now instead of 1.
- Move the line aSine = poscil:a(kAmp,kFreq) from the third position in the instrument to the first position. Push the “Play” button and look at the error message. Can you understand why Csound is complaining?
Signal flow and Order of Execution
Please compare this version to the signal flow diagram in Tutorial 01. The two inputs for the poscil oscillator are no longer two numbers, but two signals, the outputs of two linseg opcodes.
Csound reads our program code line by line. Whenever something is used as input for an opcode, it must already exist at that point.
The poscil oscillator needs kAmp and kFreq as its input. But at this point of the program code, there is neither kAmp nor kFreq. So Csound will return a “used before defined” error message:
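The re-ordered instrument from the exercise would look like this; the comment shows the kind of message Csound prints (compare the kFreq message quoted below):

```csound
instr Hello
  aSine = poscil:a(kAmp,kFreq)   ; -> error: Variable 'kAmp' used before defined
  kAmp = linseg:k(0.3,0.5,0.1)
  kFreq = linseg:k(500,0.5,400)
  outall(aSine)
endin
```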
As you see, Csound stops reading at the first error message. Once you put the line kAmp = linseg:k(0.3,0.5,0.1) at the top of the instrument, Csound will not complain any more about kAmp, but move to the next undefined variable:
error: Variable 'kFreq' used before defined
The error message has changed now. Csound does not complain any more about kAmp but about kFreq. Once you put the line kFreq = linseg:k(500,0.5,400) in first or second position, this error message will also disappear.
To summarize:
1. Learning a programming language is not possible without producing errors. This is an essential part of learning. But:
2. You MUST READ the error messages. Figure out where the “Csound console” is shown in your environment. (It can be hidden or in the background.) And then read until you meet the first error message.
3. The first one! Then try to understand this message and follow the hints to solve it. Don’t worry if the next one shows up. You will solve them one by one.
Constants and Terms you have learned in this tutorial
Constants
- sr number of samples per second
- ksmps number of samples in one control block
- nchnls number of channels
- 0dbfs number (amplitude) for zero dB full scale
Terms
- Block Size or Vector Size is what Csound sets as ksmps: the number of samples in one control cycle.
Go on now …
with the next tutorial: 04 Hello Fade-out.
… or read some more explanations here
Note 2: The advantage of a smaller ksmps is the better time resolution for the control rate. If the sample rate is 44100 Hz, we have a time resolution of 1/44100 seconds for each sample. This is about 0.000023 seconds, or 0.023 milliseconds, between two samples. When we set ksmps = 64 for this sample rate, we get 64/44100 seconds as the time resolution for the control values. This is about 0.00145 seconds, or 1.45 milliseconds, between two blocks or two control values. When we set ksmps = 32 for the same sample rate, we get 0.725 milliseconds as the time resolution for each new control value.
Note 3: The advantage of a larger ksmps is the better performance in terms of speed. If you have
a complex and CPU consuming Csound file, and get dropouts, you can try to increase ksmps.
Note 4: Although ksmps is a constant, we can set a local ksmps in one instrument. The opcode for this operation is setksmps. Sometimes we wish to run a k-rate opcode in one instrument sample by sample. In this case, we can use setksmps(1). We can only split the global ksmps into smaller parts, not vice versa.
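A local ksmps might be set like this (a sketch of our own; it assumes a global ksmps larger than 1, as in our examples):

```csound
instr SampleBySample
  setksmps(1)                          ; this instrument now runs its k-rate sample by sample
  kEnv = linseg:k(0, p3/2, 1, p3/2, 0) ; up-down envelope over the note duration
  outall(poscil:a(kEnv, 400))
endin
```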
(It is possible, by the way, to embed Csound in PD. Read more here in this book if you are interested.)
It might be good to have a look at how the important constants like sample rate and ksmps are set in PD, and what they are called in PD. This comparison may help to understand what is happening in any audio application and audio programming language.
All settings can be found in the Audio Settings … which look like this for my PD version:
At the top left the sample rate is set, as we did in Csound via
sr = 44100
Beneath it, the block size can be selected. This is exactly the same as ksmps in Csound. The selected block size of 64 is what we set via:
ksmps = 64
Below we can set input and output devices, and the number of input and output channels. The
latter is what we did via
nchnls = 2
(We did not select any audio device because we use Csound in a browser here, so it uses Web
Audio. But of course you can select your input and output device in Csound, too.)
And what about 0dbfs? This is always set to 1 in PD, so no field for it.
You can read more about it, for instance, in the Wikipedia article about Free and open-source software. What does it mean for Csound to be “free and open source”?
At first, all Csound code is visible to anyone. Currently the source code is on GitHub. Anyone can not only read it, but also “fork” it. This means, simply put: to use it, to change it according to one’s own needs, and to distribute it under the same conditions as you forked it.
Most forks are used to send “pull requests” to the main Csound developers. This means: “I improved something; please merge it into the Csound code base.”
No one gets paid for developing Csound source code, for contributing to the Csound website, for maintaining frontends, mailing lists, forums and APIs, or for writing documentation, examples and tutorials. The motivations are certainly different, but I guess that one is common: to enjoy collaborating on and contributing to a software which is our common property and can be used by anyone, to materialize
creative musical ideas. If you would like to read more about some of my own experiences and views, have a look here.
It is great to see that this has worked for nearly four decades, with all highlights and shadows, as in all real things. If you read this and feel that you might contribute something: Welcome! Have a look at the Community page of the Csound website and join us. There is always work to do, and fun to have doing it …
04 Hello Fade-out
Imagine a sine wave which is cut anywhere. Perhaps this will happen:
But even without this click, we will usually prefer a soft ending, similar to how natural tones end.
The Csound opcode linen is very useful for applying simple fades to an audio signal.
We need three numbers to adjust the fades in linen: (1) the time to fade in, (2) the overall duration, (3) the time to fade out.
For instance, if the note’s duration is 2 seconds, we will want 2 seconds for linen’s overall duration. If the note’s duration is 3 seconds, we will want 3 seconds for linen’s overall duration.
The Score Parameter Fields
In Tutorial 01, you have learned how the instrument duration is set in Csound.
After the starting i, which indicates an “instrument event”, we have three values: the number or name of the instrument, its start time, and its duration.
‘p3’ in an Instrument
A Csound instrument is instantiated via a score line.
So each instrument “knows” its p1 (its number or name), its p2 (its start time), and its p3 (its duration).
And more:
Each instrument can refer to its score parameter fields by just writing p1 or p2 or p3 in the code.
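A small sketch of our own: an instrument can, for example, print its own parameter fields at initialization with the prints opcode:

```csound
instr 1
  ;; p1 = instrument number, p2 = start time, p3 = duration
  prints("instr %d: start = %f, duration = %f\n", p1, p2, p3)
endin
```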
Example
In addition to the three input arguments for linen which we have discussed above, we have, as the first input argument, the audio signal which we want to modify.
This is the order of the four input arguments for the linen opcode:
linen:a(aIn, Fade-in, Duration, Fade-out)
Look at the code. Can you see how the code uses p3 to adapt the duration of linen to the duration of the instrument?
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
kAmp = linseg:k(0.3,0.5,0.1)
kFreq = linseg:k(500,0.5,400)
aSine = poscil:a(kAmp,kFreq)
aOut = linen:a(aSine,0,p3,1)
outall(aOut)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
Volume Change as Multiplication

We multiply the audio signal with the envelope signal which linen generates.
aEnv = linen:a(1,0.5,p3,0.5)
aSine = poscil:a(0.2,400)
aOut = aEnv * aSine
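The multiplication itself can also be sketched numerically (Python; a deliberately tiny buffer, just to show that scaling every sample by the current envelope value is all that happens):

```python
import math

n = 100  # number of samples in this tiny sketch

# a sine tone at amplitude 0.2, five cycles over the buffer
sine = [0.2 * math.sin(2 * math.pi * 5 * i / n) for i in range(n)]

# a triangular envelope: linear fade-in to the middle, then linear fade-out
env = [1.0 - abs(2 * i / (n - 1) - 1.0) for i in range(n)]

# sample-wise multiplication: the enveloped output
out = [s * e for s, e in zip(sine, env)]
```

The first and last output samples are exactly zero, because the envelope is zero there — this is what removes the click.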
Try it yourself
Change the code so that
- You insert p3 in the kAmp line so that the amplitude changes over the whole duration of the instrument.
- You insert p3 in the kFreq line so that the frequency changes over the whole duration of the instrument.
Opcodes
- linen:a(aIn,Fade-in,Duration,Fade-out) linear fade-in and fade-out
Symbols
- p1 the first parameter of the score line which calls the instrument: number or name of the instrument
- p2 the second parameter of the score line which calls the instrument: the start time of the instrument
- p3 the third parameter of the score line which calls the instrument: the duration of the instrument
Note: p3 refers to the score, but inside the score it has no meaning, and Csound will throw an error
if you write p3 as symbol in the score. It only has a meaning in the instrument code.
Go on now …
with the next tutorial: 05 Hello MIDI Keys.
… or read some more explanations here

In the most simple case this amplitude is a fixed number. If it is 1, then we have the basic shape
which we saw at the beginning of this chapter:
linen:a(1,0.2,p3,0.5) will perform a fade-in from 0 to 1 in 0.2 seconds, and a fade-out from
1 to 0 in 0.5 seconds, over the whole duration p3 of the instrument event.
In the example, we have used the linen opcode by directly inserting an audio signal as its first input.
linen:a(aSine,0,p3,1)
As the first input of linen is an amplitude, it can be a constant amplitude or a signal. This is the case here, with aSine as amplitude input.
As you already know, in a signal the values change over time, either at k-rate or at a-rate.
Indeed this might be slightly more efficient. But for modern computers, this is a negligible performance gain.
On the other hand, a-rate envelopes are preferable because they are really smooth. They have no “staircases” as k-rate envelopes have.
You can read more here about this subject. I personally recommend always using a-rate envelopes.
Acoustically, linear fades are not the best. We will discuss the reasons in Tutorial 06.
Practically, the linear fades which linen generates are sufficient for many cases. But keep in mind that there are other possible shapes for fades, and move to them when you are not satisfied with how it sounds. The transeg opcode will be your friend then.
05 Hello MIDI Keys
1. When we ask a musician to play a certain pitch, we say “Can you please play D”, or “Can you
please play D4”. But what pitch is 500 Hz?
2. When we perform a glissando from 500 Hz to 400 Hz in the way we did, we actually do not perform it in the linearity we thought we would.
We will discuss the first issue in the next paragraphs. For the second issue, you find an explanation
below, in the optional part of this tutorial. You may also have a look at the Pitches section in the
Digital Audio Basics chapter of this book.
I recommend using MIDI note numbers, because they are easy to learn and they are also used by other applications and programming languages.
All you must know about MIDI keys or note numbers: C4 is set to note number 60. And then
each semitone, or each next key on a MIDI keyboard, is plus one for upwards, and minus one for
downwards.
If you want to transform any MIDI key to its related frequency, use the mtof (midi to frequency)
opcode.
When we want Csound to calculate the frequency which is related to D4, and store the result in a
variable, we write:
iFreq = mtof:i(62)
i-rate Variables in Csound

An i-rate variable in Csound is only calculated once: when the instrument in which it occurs is initialized. That is where the name comes from.
Remember that k-rate and a-rate variables are signals. A signal varies in time. This means that
its values are re-calculated all the time. To summarize:
For a programming language this means to print. Via printing the program shows values in the
console. In the console we see the messages from the program.
The print opcode is what we are looking for. Its syntax is simple:
print(iVariable)
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Print
iFreq = mtof:i(62)
print(iFreq)
endin
</CsInstruments>
<CsScore>
i "Print" 0 0
</CsScore>
</CsoundSynthesizer>
You should see this message near the end of the console output:
instr 1: iFreq = 293.665
So we know that the frequency which is related to the MIDI key number 62, or D4, is 293.665 Hz.
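If you want to double-check this number outside Csound: the math behind mtof is the standard equal-temperament formula, with A4 (MIDI note 69) at 440 Hz. A Python sketch (the function name is just borrowed from the opcode):

```python
def mtof(m):
    # equal temperament: each semitone multiplies the frequency by 2^(1/12)
    return 440.0 * 2 ** ((m - 69) / 12)

print(round(mtof(62), 3))  # prints: 293.665
```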
Note: The print opcode is for i-rate variables only. You cannot use this opcode for printing k-rate or a-rate variables. We will show opcodes for printing control- or audio-variables later.
Example
We will use now MIDI notes for the glissando, instead of raw frequencies.
We create a line which moves in half a second from MIDI note 72 (C5) to MIDI note 68 (Ab4). We store this MIDI note line in the variable kMidi:
kMidi = linseg:k(72,0.5,68)
Note that we use mtof:k here because we apply the midi-to-frequency converter to the k-rate
signal kMidi. The result is a k-rate signal, too.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
kAmp = linseg:k(0.3,0.5,0.1)
kMidi = linseg:k(72,0.5,68)
kFreq = mtof:k(kMidi)
aSine = poscil:a(kAmp,kFreq)
aOut = linen:a(aSine,0,p3,1)
outall(aOut)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
The ‘prints’ Opcode and Strings

We have discussed that numbers can either be calculated only once, at i-rate, or again and again for a block of samples, so being k-rate, or even again and again for each sample, so being a-rate.
But even in an audio application we need something to write text in, for instance when we point to
a sound file as “myfile.wav”.
This data type, which starts and ends with double quotes, is called a string. (The term is from
“character string”, so a chain of characters.)
The prints opcode is similar to the print opcode, but it prints a string, and not a number. Try
this:
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Prints
prints("Hello String!\n")
endin
</CsInstruments>
<CsScore>
i "Prints" 0 0
</CsScore>
</CsoundSynthesizer>
Please compare: Remove the two characters \n and run the code again. You will see that the
console printout is now immediately followed by the next Csound message.
Or add another \n to the existing one, and you will see an empty line after “Hello String!”.
In fact, there are many more format specifiers. We will get back to them and to prints in Tutorial 09.
Try it yourself
Change the kMidi signal so that
5. Create two variables iFreqStart and iFreqEnd for the two MIDI notes. (You need to convert the MIDI notes at i-rate for it.) Then insert these i-variables in the kFreq = linseg(...) line. Compare the result to the one in the example.
6. Code a “chromatic scale” (= using always the next MIDI note) which falls from D5 to A4. Each
MIDI key stays for one second, and then immediately moves to the next step. You can get
this with linseg by using zero as duration between two notes. This is the start: kMidi =
linseg:k(74,1,74,0,73,...). Don’t forget to adjust the overall duration in the score;
otherwise you will not hear the series of pitches although you created it …
Opcodes
- mtof:i(MIDI_note) MIDI-to-frequency converter for one MIDI note
- mtof:k(kMIDI_notes) MIDI-to-frequency converter for a k-rate signal
Terms
- i-rate is the “time” (as point, not as duration) in which an instrument is initialized
- i-rate variable or i-variable is a variable which gets a value only once, at the initialization of an instrument.
- A string is a sequence of characters, enclosed by double quotes. I’d recommend using only ASCII characters in Csound to avoid problems.
Go on now …
with the next tutorial: 06 Hello Decibel.
… or read some more explanations here

1. We first create the line between the two MIDI notes. Afterwards we convert this line to the frequencies. This is what we did in the example code:
kMidi = linseg:k(72,0.5,68)
kFreq = mtof:k(kMidi)
2. We first convert the two MIDI notes to frequencies. Afterwards we create a line. This would
be the code:
iFreqStart = mtof:i(72)
iFreqEnd = mtof:i(68)
kFreq = linseg:k(iFreqStart,0.5,iFreqEnd)
We choose A5 (= 880 Hz or MIDI note 81) and A3 (= 220 Hz or MIDI note 57) as start and end. And
we create one variable for each way.
kMidiLine_1 = linseg:k(81,12,57)
kFreqLine_1 = mtof:k(kMidiLine_1)
iFreqStart = mtof:i(81)
iFreqEnd = mtof:i(57)
kFreqLine_2 = linseg:k(iFreqStart,12,iFreqEnd)
The kFreqLine_1 variable contains the frequency signal which derives from a linear transition in the
MIDI note domain. The kFreqLine_2 variable contains the frequency signal which derives from a
linear transition in the frequency domain.
When we use one oscillator for each frequency line, we can listen to both versions at the same
time. We will output the kFreqLine_1 in the left channel, and the kFreqLine_2 in the right channel.
For more comparison, we also want to look at the MIDI note number version of both signals. For
kFreqLine_1, this is the kMidiLine_1 signal which we created. But which MIDI pitches comply with
the frequencies of the kFreqLine_2?
We can get these pitches via the ftom (frequency to midi) opcode. This opcode is the reverse of
the mtof opcode. For ftom we have frequency as input, and get MIDI note numbers as output.
So to get the corresponding MIDI pitches to kFreqLine_2, we write:
kMidiLine_2 = ftom:k(kFreqLine_2)
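Since ftom is the inverse of mtof, a round trip gives back the original note number. A quick check of the underlying math (Python sketch, using the standard equal-temperament formulas; the function names are borrowed from the opcodes):

```python
import math

def mtof(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def ftom(f):
    # inverse of mtof: 69 + 12 * log2(f / 440)
    return 69 + 12 * math.log2(f / 440.0)

print(round(ftom(mtof(62)), 6))  # prints: 62.0
```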
Here comes the code which plays both lines, and prints out MIDI and frequency values of both
lines, once a second. Don’t worry about the opcodes which you do not know yet. It is mostly about
“pretty printing”.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1
instr Compare
kMidi = linseg:k(81,12,57)
kFreqLine_1 = mtof:k(kMidi)
iFreqStart = mtof:i(81)
iFreqEnd = mtof:i(57)
kFreqLine_2 = linseg:k(iFreqStart,12,iFreqEnd)
kMidiLine_2 = ftom:k(kFreqLine_2)
aOut_1 = poscil:a(0.2,kFreqLine_1)
aOut_2 = poscil:a(0.2,kFreqLine_2)
aFadeOut = linen:a(1,0,p3,1)
out(aOut_1*aFadeOut,aOut_2*aFadeOut)
endin
</CsInstruments>
<CsScore>
i "Compare" 0 13
</CsScore>
</CsoundSynthesizer>
When we plot the two frequency lines, we see that the first one looks like a concave curve, whilst
the second one is a straight line:
We see that Freqs_1 reaches 440 Hz after half of the duration, whilst Freqs_2 reaches 550 Hz after
half of the duration.
The Freqs_1 line on the other hand has the same ratio between the frequency values of two subsequent seconds:
880.000 / 783.991 = 1.12246...
783.991 / 698.456 = 1.12246...
...
246.942 / 220.000 = 1.12246...
Our perception follows ratios. We will get back to this in the next tutorial.
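This constant ratio can be verified with a few lines of Python (a sketch of the same 12-second line from MIDI 81 down to 57, i.e. two semitones per second; mtof here is the standard equal-temperament formula, not the Csound opcode itself):

```python
def mtof(m):
    # equal temperament, A4 (MIDI note 69) = 440 Hz
    return 440.0 * 2 ** ((m - 69) / 12)

# frequency at each full second of the line from MIDI 81 to 57
freqs = [mtof(81 - 2 * t) for t in range(13)]
ratios = [freqs[t] / freqs[t + 1] for t in range(12)]

print(round(ratios[0], 5))  # prints: 1.12246 (= 2 to the power of 2/12)
```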
We hear that the first line is constantly falling, whilst the second one is too slow at the beginning
and too fast at the end.
The dotted lines point to MIDI note 69 which is one octave lower than the note at start. This point is
reached after 6 seconds for the Pitches_1 line. This is correct for a constant pitch decay. We have
two octaves which are crossed in 12 seconds, so each octave needs 6 seconds. The Pitches_2
line, however, reaches MIDI note 69 after 8 seconds, therefore needing 2/3 of the overall time to
cross the first octave, and only 1/3 to cross the second octave.
Written in musical notation, this is how the “too slow” and “too fast” look:
To adapt the MIDI notes to traditional notation, I have written the cent deviation from the semitones
above. The MIDI note number 79.88 for the second note is expressed as A flat minus 12 Cent. If
the deviations are larger than 14 Cent, I have added an arrow to the accidentals.
It can be seen clearly how the interval of the first step is only slightly bigger than a semitone, whereas the last step is nearly a major third.
If you enjoy this way of glissando, no problem. Just be conscious that it is not as linear as its
frequencies suggest.
This was not always the case in European music tradition, and as far as I know also in the music traditions of other cultures. There was mostly a certain range in which the standard pitch could vary. Even in the 19th century, in which scientific standardization prevailed more and more, this process continued. The first international fixation of a standard pitch took place at a conference in Vienna in 1885.
This standard pitch was 435 Hz. But orchestras have a tendency to raise the standard pitch because the sound then is more brilliant. So finally, in 1939, at the conference of the International Federation of the National Standardizing Associations (ISA) in London, 440 Hz was fixed. This is valid until today, although most orchestras play a bit higher, at 443 Hz.
Csound offers a nice possibility to change the standard pitch for MIDI. In the orchestra header, you
can, for instance, set the standard pitch to 443 Hz via this statement:
A4 = 443
Once the standard pitch is set, all other pitches are calculated in relation to it. The tuning system which MIDI uses is “equal temperament”. This means that from one semitone to the next the frequency ratio is always the same: the twelfth root of 2, which can also be written as 2^(1/12).
So if A4, which is MIDI note number 69, has 440 Hz, note number 70 will have 440 · 2^(1/12) Hz. We can use Csound to calculate this:
iFreq = 440 * 2^(1/12)
print(iFreq)
This prints:
iFreq = 466.164
And indeed this complies with the result of the mtof opcode:
iFreq = mtof:i(70)
print(iFreq)
More in general: The score is not Csound code. The score is basically a list of instrument calls,
with some simple conventions. No programming language at all.
This is sometimes confusing for beginners. But in modern Csound the score often remains empty.
We will show in Tutorial 07 how this works.
06 Hello Decibel
There is a similar issue in working with raw amplitude values. Human perception of both pitch and volume follows ratios. We hear the frequencies in the left column as octaves because they all have the ratio 2 over 1:
Frequency Ratio
8000 Hz
> 8000 Hz / 4000 Hz = 2:1
4000 Hz
> 4000 Hz / 2000 Hz = 2:1
2000 Hz
> 2000 Hz / 1000 Hz = 2:1
1000 Hz
> 1000 Hz / 500 Hz = 2:1
500 Hz
> 500 Hz / 250 Hz = 2:1
250 Hz
> 250 Hz / 125 Hz = 2:1
125 Hz
In the same way, we perceive these amplitudes as having equal loss in volume, because they all
follow the same ratio:
Amplitude Ratio
1
> 1 / 0.5 = 2
0.5
> 0.5 / 0.25 = 2
0.25
> 0.25 / 0.125 = 2
0.125
> 0.125 / 0.0625 = 2
0.0625
As you see, already after four “intensity octaves” we get to an amplitude smaller than 0.1. But
human hearing is capable of about fifteen of these intensity octaves!
Decibel
It is the Decibel (dB) scale which reflects this. As you already know from Tutorial 02, we set the amplitude 1 as the reference value of zero dB by this statement in the orchestra header:
0dbfs = 1
Zero dB means here: The highest possible amplitude. Each amplitude ratio of one over two is then
a loss of about 6 dB. This yields the following relations between amplitudes and decibels:
Amplitude dB
1 0
0.5 -6
0.25 -12
0.125 -18
0.063 -24
0.0316 -30
0.01585 -36
0.00794 -42
0.00398 -48
0.001995 -54
0.001 -60
Note 1: To be precise, for an amplitude ratio of 1/2 the difference is -6.0206 dB rather than -6 dB.
So the amplitude column is not following precisely the ratio 1/2.
Note 2: You can find more about sound intensities here in this book.
Note 3: For the general context you may have a look at the Weber-Fechner law.
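The table follows from the relation amplitude = 10^(dB/20), or, the other way round, dB = 20 · log10(amplitude). Here is a quick check of some table rows (Python, mimicking what Csound’s ampdb opcode computes; the function name is borrowed from the opcode):

```python
import math

def ampdb(db):
    """Convert a decibel value to an amplitude, with 0 dB -> amplitude 1."""
    return 10 ** (db / 20)

for db in (0, -6, -12, -60):
    print(db, round(ampdb(db), 5))  # e.g. -6 dB -> 0.50119, -60 dB -> 0.001
```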
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Convert
iAmp = ampdb:i(-6)
print(iAmp)
endin
</CsInstruments>
<CsScore>
i "Convert" 0 0
</CsScore>
</CsoundSynthesizer>
Similar to mtof, the ampdb opcode can run at i-rate or at k-rate. Here we use i-rate, so ampdb:i
because we have a number as input, and not a signal.
We will use ampdb:k when we have time varying decibel values as input. In the most simple case,
it is a linear rise or decay. We can create this input signal as usual with the linseg opcode. Here
is a signal which moves from -10 dB to -20 dB in half a second:
kDb = linseg:k(-10,0.5,-20)
Inserting an Expression as Input Argument

So far we have always stored the output of an opcode in a variable: the output of an opcode gets a name, and this name is then used as input for the next opcode in the chain. We currently have four chain links. These chain links are written as numbers on the right-hand side of the next figure.
It is possible to omit the variable names and to directly pass one expression as input argument
into the next chain link. This is the code to skip the variable names for chain link 2:
aSine = poscil:a(ampdb:k(kDb),mtof:k(kMidi))
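The same trade-off exists in any programming language. Here is a small Python analogy (the function names are made up for illustration): nesting expressions and naming intermediate results produce exactly the same value.

```python
def halve(x):
    return x / 2

def offset(x):
    return x + 1

# chain link by chain link, with named intermediate results
step1 = halve(10)
step2 = offset(step1)

# the same chain, with one expression inserted directly into the other
direct = offset(halve(10))

print(step2, direct)  # prints: 6.0 6.0
```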
Example
This version is used in the example code.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
kDb = linseg:k(-10,0.5,-20)
kMidi = linseg:k(72,0.5,68)
aSine = poscil:a(ampdb:k(kDb),mtof:k(kMidi))
aOut = linen:a(aSine,0,p3,1)
outall(aOut)
endin
</CsInstruments>
<CsScore>
i "Hello" 0 2
</CsScore>
</CsoundSynthesizer>
Short or readable?
The possibility of directly inserting the output of one opcode into another is potentially endless. It leads to shorter code.
On the other hand, if many of these expressions are put inside each other, the code can become a stony desert of symbols like (, : and ).
Variables are actually not a necessary evil. They can be a big help in understanding what happens in the code, if they carry a meaningful name. This is what I tried with names like kMidi, aSine, … Perhaps you will find better names; that is what we should strive for.
In case we insert all four chain links into each other, the code would look like this:
outall(linen:a(poscil:a(ampdb:k(linseg:k(-10,0.5,-20)),mtof:k(linseg:k(72,0.5,68))),0,p3,1))
I don’t think this makes sense; in particular not for beginners. In the Csound FLOSS Manual we
have a hard limit because of layout: No code line can be longer than 78 characters. I think this is
a good orientation.
The most valuable qualities of code are clarity and readability. If direct insertion of an expression helps with this, then use it. But it is better to avoid code abbreviations which derive more from laziness than from decision.
You yourself are the best judge. Read your code again after one week, and then make up your mind
…
Try it yourself
- Create a crescendo (a rise in volume) rather than a diminuendo (a decay in volume) as we did.
- Change the values so that the crescendo becomes more extreme.
- Again change to a diminuendo, but also more extreme than in the example.
- Change the code so that you create the variable names kAmp and kFreq first, as shown in the first figure.
- Play with omitting the variable names also in chain links 1, 3 or 4. Which version do you like most?
Go on now …
with the next tutorial: 07 Hello p-Fields.
… or read some more explanations here

What is 0 dB?
We have stressed some similarities between working with pitch and with volume. The similarities derive from the fact that our senses perceive in a proportional way. Both the MIDI scale for pitch and the Decibel scale for volume reflect this.
But there is one big difference between the two. The MIDI scale is an absolute scale. MIDI note number 69 is 440 Hz. (Or slightly more cautious: MIDI note number 69 is set to the standard pitch, which is usually 440 Hz.)
But the Decibel scale is a relative scale. It does not mean anything to say “this is -6 dB” unless we
have set something as 0 dB.
In acoustics, 0 dB is set to a very small value. To put it in a non-scientific way: The softest sound
which we can hear.
This means that all common Decibel values are then positive, because they are (much) more than
this minimum. For instance around 60 dB for a normal conversation.
But as explained above, in digital audio it is the other way round. Here our 0 dB setting points to
the maximum, to the highest possible amplitude.
In digital audio, we have a certain number of bits for each sample: 16 bit, 24 bit, 32 bit. Whatever
it be, there is a maximum. Imagine a 16 bit digital number in which each bit can either be 0 or 1.
The maximum possible amplitude is when all bits are 1.
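As a sketch in numbers (Python; assuming signed 16-bit samples, and using the rule of thumb that each bit contributes about 6 dB of dynamic range):

```python
import math

bits = 16
max_amp = 2 ** (bits - 1) - 1            # largest positive sample value

# ratio between the largest value and the smallest nonzero step, in dB
dyn_range_db = 20 * math.log10(2 ** bits)

print(max_amp, round(dyn_range_db, 2))   # prints: 32767 96.33
```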
And so for the other resolutions. (They do not add something on top. They offer a finer resolution
between the maximum and the minimum.)
So it makes perfect sense to set this maximum possible amplitude as 0 dB. But this means that
in digital audio we only have negative dB values.
As we saw in Tutorial 04, to amplify a signal means to multiply it by a value larger than 1. It makes
perfect sense to express this in decibel rather than as a multiplier.
We can say: “I amplify this signal by 6 dB”, rather than: “I amplify it by factor 2.” And: “I amplify
this signal by 12 dB” should be better than “I amplify it by factor 4”.
In this context we will use a positive number as input for the ampdb opcode. Here is a simple
example:
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Amplify
//get dB value from fourth score parameter
iDb = p4
//create a very soft pink noise
aNoise = pinkish(0.01)
//amplify
aOut = aNoise * ampdb(iDb)
outall(aOut)
endin
</CsInstruments>
<CsScore>
i "Amplify" 0 2 0 //soft sound as it is
i "Amplify" 2 2 10 //amplification by 10 dB
i "Amplify" 4 2 20 //amplification by 20 dB
</CsScore>
</CsoundSynthesizer>
What we discussed in this and the previous tutorial about linear versus proportional transitions in both frequency and amplitude has been described by ancient Greek mathematicians as arithmetic versus geometric series.
If we have two numbers, or lengths, and look for the one “in between”, the arithmetic mean searches for the equal distance between the smaller and the larger one. Or in the words of Archytas of Tarentum (early 4th century B.C.):
The arithmetic mean is when there are three terms showing successively the same excess: the second exceeds the third by the same amount as the first exceeds the second. In this proportion, the ratio of the larger numbers is less, that of the smaller numbers greater.¹
If the first number is 8, and the third number is 2, we look for the second number as the arithmetic mean A and the “excess” x like this:

A = x + 2
8 = x + A
2 + x + x = 8
x + x = 6
x = 3

So the arithmetic mean is A = 5.
But as Archytas states, “the ratio of the larger numbers is less, that of the smaller numbers greater”. Here: 8/5 = 1.6 is less than 5/2 = 2.5.
This is what we described as “at first too slow then too fast” in the previous tutorial.
The geometric mean is when the second is to the third as the first is to the second; in
this, the greater numbers have the same ratio as the smaller numbers.
1 Ancilla to the Pre-Socratic Philosophers. A complete translation of the Fragments in Diels, Fragmente der
Vorsokratiker by Kathleen Freeman. Cambridge, Massachusetts: Harvard University Press [1948], quoted after
http://demonax.info/doku.php?id=text:archytas_fragments
The geometric mean of 8 and 2 is 4 because the ratio of the larger number to the mean, and the
ratio of the mean to the smaller number is the same: 8/4 = 2 and 4/2 = 2.
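Both means, and Archytas’ observation about the ratios, are easy to confirm numerically (a small Python check; not Csound code):

```python
import math

a, c = 8, 2

arithmetic_mean = (a + c) / 2        # equal differences: 8 - 5 = 5 - 2 = 3
geometric_mean = math.sqrt(a * c)    # equal ratios: 8 / 4 = 4 / 2 = 2

print(arithmetic_mean, geometric_mean)  # prints: 5.0 4.0
```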
It is quite interesting to look at the “geometric” way to construct this mean which is shown in
Euclid’s Elements (VI.8):
Euclid describes how the two triangles which are left and right of this perpendicular have the same angles, and that this is also the case when we look at the large triangle. This establishes the perfect similarity.
The length of this perpendicular is the geometric mean of the two parts on the base. According to
the right triangle altitude theorem, the square of this altitude equals the product of the base parts:
b² = a · c, and therefore b = √(a · c)
There is also a close relationship to the golden ratio which is famous for its usage in art and nature.
In terms of the triangle which Euclid describes it means: Find a triangle so that the smaller base
part plus the height equal the larger base part:
Bad luck: the golden ratio can easily be constructed geometrically, but it is an irrational number. We come close, though, when we choose higher Fibonacci numbers. For instance, for the Fibonacci numbers b = 89 and c = 55 it yields: a = b · b / c = 89 · 89 / 55 ≈ 144.02 rather than the desired 144. Good enough for music perhaps, which always needs some dirt for its life …
07 Hello p-Fields
But this all happens inside the instrument. Whenever we call our Hello instrument, it will play the same pitches in the same volume. All we can currently decide from outside the instrument is when the instrument shall start, and how long it will play. As you know from Tutorial 04, this information is submitted to the instrument via parameters in the score:
- The first parameter, abbreviated p1, is the number or name of the instrument which is called.
- The second parameter, abbreviated p2, is the start time of this instrument.
- The third parameter, abbreviated p3, is the duration of this instrument.
What we actually do is to instantiate a certain instrument. Each instance is a running object of the
instrument model; the concrete “thing” which is there as realization of the model.
In this case, we create and call three instances of the “Hello” instrument which follow each other
with some pauses between them:
We can create as many instances of an instrument as we like. They can follow in time, as above,
or can overlap, as for these score lines:
i "Hello" 0 7
i "Hello" 3 6
i "Hello" 5 1
To achieve this, we add more p-fields to our score line. We write the first MIDI note number as
fourth score parameter p4, and the second MIDI number as fifth score parameter p5:
i "Hello" 0 2 72 68
instr Hello
iMidiStart = p4
iMidiEnd = p5
kMidi = linseg:k(iMidiStart,p3,iMidiEnd)
...
endin
The instrument interprets these values in the same way as it was for p3, which we already used in our code. For p4 and p5, the instrument instance will look at the score line, and take the fourth parameter as value for p4, and the fifth parameter as value for p5.
I say “from Csound’s side” because I think for the readability of code it is better to state at the top of the instrument code what p4 and p5 mean, and to connect them with a variable name.
This variable will be an i-rate variable because the score can only pass fixed values to the instrument. The variable name should be as meaningful as possible, without becoming too long. Most important, again, is the readability of the code.
- What is between /* and */ will also be ignored by Csound. It can comprise more than one line.
I suggest that you comment extensively; in particular when you start learning Csound. It will help
you to understand what is happening in the code, and it will help you to understand your own code
later.
I often even start with comments when I code. Then the comments are there to make clear what
I want to do. For instance:
instr Dontknowyet
//generate two random numbers
Example
We will now insert comments in the code. At first extensively; in later tutorials we will reduce it
and focus on the new parts of the code.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1

/* INSTRUMENT CODE */
instr Hello ;Hello is written here without double quotes!
//receive MIDI note at start and at end from the score
iMidiStart = p4 ;this is a MIDI note number
iMidiEnd = p5
//create a glissando over the whole duration (p3)
kMidi = linseg:k(iMidiStart,p3,iMidiEnd)
//create a decay from -10 dB to -20 dB in half of the duration (p3 / 2)
kDb = linseg:k(-10,p3/2,-20)
//sine tone with ampdb and mtof to convert the input to amp and freq
aSine = poscil:a(ampdb:k(kDb),mtof:k(kMidi))
//apply one second of fade out
aOut = linen:a(aSine,0,p3,1)
//output to all channels
outall(aOut)
endin
</CsInstruments>
<CsScore>
/* SCORE LINES*/
//score parameter fields
//p1 p2 p3 p4 p5
i "Hello" 0 2 72 68 ;here we need "Hello" with double quotes!
i "Hello" 4 3 67 73
i "Hello" 9 5 74 66
i "Hello" 11 .5 72 73
i "Hello" 12.5 .5 73 73.5
</CsScore>
</CsoundSynthesizer>
Here the P0 field contains the information about either “note” or “pause”. P1 is the instrument
number. As there is no polyphony here, P2 is the duration of a chain of events. P3 is an amplitude
here in the range of 0 through 1000, and P4 is the frequency.
It was Mathews’ MUSIC V which Jean-Claude Risset used to write his epochal “Catalogue of Computer Synthesized Sounds” in 1969. With only slight modifications, his code can be transformed into Csound code. Here is a snippet in which you can see p-fields again:²
1 Gravesaner Blätter (Ed. Hermann Scherchen) VI, Heft 23-24 (1962), p. 115, online:
https://soundandscience.de/text/gravesaner-blatter-jahrgang-vi-heft-23-24
2 Jean-Claude Risset, An Introductory Catalogue of Computer Synthesized Sounds, Bell Telephone Laboratories, Murray
Hill, New Jersey, 1969, p. 56; online: https://ia801707.us.archive.org/13/items/an-introductory-catalogue-of-computer-
synthesized-sounds/An-Introductory-Catalogue-of-Computer-Synthesized-Sounds.pdf
The early versions of Max Mathews’ MUSIC program could only run on one particular computer. The “C” in Csound points to the C programming language, which was first released in 1972. It made it possible to separate the source code, which is written and can be read by humans, from the specific machine on which it runs. C is still a very successful language, used for everything which must be fast, like operating systems or audio applications.
P-fields are on one hand simple and give a lot of possibilities. On the other hand, there are restrictions. Basically, a p-field carries a number. It took a lot of work from the Csound developers to make it possible to write a string in a p-field, too. But it is still not possible to pass a signal via a p-field to an instrument.
Fortunately, p-fields are only one possibility for an instrument to communicate with the “outer
world”. We will discuss other ways later in these Tutorials.
Try it yourself
ä Change the values in the score in a way that all directions of the pitch slides are reversed
(upwards instead of downwards and vice versa).
ä Change the values in the score so that you no longer have a sliding pitch but a constant
one.
ä Add two p-fields in the score to specify the first and the last volume in dB. Refer to these
p-fields as p6 and p7 in the instrument code. Introduce two new variables and call them
iDbStart and iDbEnd.
ä Change the code so that the volume change uses the whole duration of the instrument in-
stance whilst the pitch change only uses half of the instrument’s duration.
ä Go back to the first code by reloading the page. Now remove the fifth p-field from the score
and change the code in the instrument so that the iMidiEnd variable is always 6 MIDI notes
lower than iMidiStart.
ä Introduce p5 again, but now with a different meaning: 1 in this p-field means that the iMidiEnd
note will be six MIDI keys higher than iMidiStart; -1 will mean that iMidiEnd will be six MIDI
keys lower than iMidiStart.
ä Add a p-field which establishes the fade-out duration as a ratio of the whole duration. (1 would
mean: the fade-out equals the overall instrument duration; 0.5 would mean: the fade-out time is
half of the instrument duration.)
Terms and symbols you have learned in this tutorial
Terms
ä instance is the manifestation or realization of an instrument when it is called by a score line
Symbols
ä p4, p5 … are used in a Csound instrument code to refer to the fourth, fifth … parameter in the
score line which called the instrument instance
Go on now …
with the next tutorial: 08 Hello schedule.
… or read some more explanations here
But what is MIDI note number 73.5? There is obviously no key with this number. There is only key
number 73 (C#5) and 74 (D5).
This is true, but the conversion from MIDI note numbers to frequencies works not only for integer
MIDI note numbers. It is possible to specify any fraction of the semitone between two adjacent
integer MIDI note numbers. We can split a semitone into two quarter tones. This is what we did
by referring to MIDI note number 73.5: a quarter tone higher than C#5, or a quarter tone lower than
D5.
In the same way we can express any other fraction. Most common is to divide one semitone into
one hundred cents. So MIDI note number 60.14 would be C4 plus 14 cents, and 68.67 would be A4
minus 33 cents.
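The conversion behind this is the standard MIDI-to-frequency formula f = 440 * 2^((m-69)/12), which accepts fractional values of m just as well. A minimal sketch (instrument name and values chosen here only for illustration):

```csound
instr Quartertone
 //MIDI note 73.5: the quarter tone between C#5 (73) and D5 (74)
 iFreq = mtof:i(73.5) //roughly 570.6 Hz
 aSine = poscil:a(.2,iFreq)
 outall(linen:a(aSine,0,p3,p3/2))
endin
schedule("Quartertone",0,2)
```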
In this case, we have called instrument 1.1 as first instance of instrument 1, and 1.2 as its second
instance, and 1.3 as its third instance.
The instrument instance gets this information, as we see in this simple example:
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
print(p1)
endin
</CsInstruments>
<CsScore>
i 1.1 0 3
i 1.2 2 2
i 1.3 5 1
</CsScore>
</CsoundSynthesizer>
So the instance which we called as 1.1 “knows” that it is instance 1.1, and so do the other instances.
Imagine we want to send a specific message which is broadcast on a certain software channel
called “radio” only to instance 1.2, then we would write something like: “If my instance is 1.2 then I
will receive the message from the radio channel.”
This is the Csound code for it, without explaining the opcodes you do not know yet. But
perhaps you can follow the logic; please also have a look at the console printout.
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iMyInstance = p1 ;get instance as 1.1, 1.2 or 1.3
print(iMyInstance) ;print it
//receive the message only if instance is 1.2
if iMyInstance == 1.2 then
Smessage = chnget:S("radio")
prints("%s\n",Smessage)
endif
endin
</CsInstruments>
<CsScore>
i 1.1 0 3
i 1.2 2 2
i 1.3 5 1
</CsScore>
</CsoundSynthesizer>
But we cannot call an instrument with a fractional number if we call the instrument by its name.
In the score line …
i "LiveInput" 0 1000
… the first p-field "LiveInput" is a string, not a number. And we cannot extend a string with .1
as we can for the numbers. This will not work:
i "LiveInput".1 0 1000
We have two options here. The first option uses the fact that Csound internally converts each
instrument name to a number. The way Csound assigns numbers to the named instruments is
simple: The instrument which is on top gets number 1, the next one gets number 2, and so on.
When we only have one instrument, we can be sure that this is instrument 1 for Csound. And
therefore this code works without problems:
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr TestInstanceSelection
iMyInstance = p1 ;get instance as 1.1, 1.2 or 1.3
print(iMyInstance) ;print it
//receive the message only if instance is 1.2
if iMyInstance == 1.2 then
Smessage = chnget:S("radio")
prints("%s\n",Smessage)
endif
endin
</CsInstruments>
<CsScore>
i 1.1 0 3
i 1.2 2 2
i 1.3 5 1
</CsScore>
</CsoundSynthesizer>
We have an instrument with the name “TestInstanceSelection” here, but we call it in the score lines
as 1.1, 1.2 and 1.3. No problem.
The other way to work with named instruments and fractional parts is to move the instrument call
from the score to the actual Csound code. This is the subject of the next Tutorial in which we will
introduce the schedule opcode.
08 Hello Schedule
ä The CsOptions section sets some general options which are used once you press the
“Play” button — for instance, whether you want to render Csound in real time to the sound
card or write a sound file, which MIDI device to use, or how many messages to display
in the console.
ä The CsInstruments section contains the actual Csound code. It is also called the “Orches-
tra” code because it comprises the Csound instruments.
ä The CsScore section collects score lines. Each score line which begins with i initiates an
instrument instance, at a certain time, for a certain duration, and possibly with additional
parameters.
On one hand, this is a reasonable division of jobs. In the CsInstruments we code the instru-
ments, in the CsScore we call them, and in the CsOptions we determine some general settings
for the performance. In the early days of Csound, the instrument definitions and the score resided
in two separate text files. The instrument definitions, or orchestra, were collected in an .orc file,
and the score lines were collected in a .sco file. The options were put in so-called “flags” which
you still see in the CsOptions. The -o flag, for example, assigns the audio output.
It is still possible to use Csound in this way from the “command line”, or “Terminal”. A Csound run
would be started by a command line like this one:
csound -o dac -m 128 my_instruments.orc my_score_lines.sco
The call to the Csound executable is done by typing csound in the first position. In the second
position we can add as many options as we like. In this case: -o dac -m 128 for real-time output and
reduced message level. At third and fourth position then follow the .orc and .sco file.
This command-line way of running Csound is still a very versatile option. If you enjoy learning
Csound and continue using it for your own audio projects, you will probably use it at some point,
because it is fast and stable, and you can integrate Csound in any scripting environment.
The difference is not only a matter of roles. It can be said that an instrument knows a bit about the score. As
you learned in Tutorial 04 and Tutorial 07, each instrument instance knows about the p-fields which
created this instance. It knows about its duration, it even knows its start time in the overall Csound
performance, and if called by a fractional number, it knows its instance as a unique number.
But the score knows nothing about the instruments. Not even what the sample rate is, or how
many channels we set in the orchestra header via nchnls = 2. We cannot use any opcode in the
score, nor any variable name. The score does not understand the orchestra, and to put it in this
way: The score does not understand Csound language.
There are several situations, however, in which we need one language to do it all. We may want to
start instruments from inside another instrument. We may want to pass a variable to an instrument
which we calculate during the performance. We may want to trigger instrument instances from
live input, like touchscreens, MIDI keyboards or messages from other applications.
Csound offers this flexibility. The most used opcodes for calling instruments from inside the
CsInstruments section are schedule and schedulek. We will introduce schedule first,
and then come to schedulek in Tutorial 11.
So from now on you will see the score section mostly empty. But of course there are still many
situations in which a traditional score can be used. You will find some hints for score usage in the
optional section of this tutorial.
This code calls instrument “Hello” at start time zero for two seconds:
schedule("Hello",0,2)
As schedule is an opcode, its input arguments are separated by commas. This is the main
difference from the score, in which the parameter fields are separated by spaces:
i "Hello" 0 2
Note 1: As you see, the i which starts a score line is omitted. This is possible because schedule only
instantiates instrument events, whilst a score line can also carry other statements. Have a look
below in the optional part of this tutorial if you want to know more about it.
Note 2: It is up to you whether you add spaces after the commas in the schedule argument list or
not. For Csound, the commas are the separators. Once you accept this, you can add spaces or tabs,
or not. So these two lines are both valid:
//arguments are separated by commas only
schedule("Hello",0,2)
//arguments are separated by commas followed by spaces
schedule("Hello", 0, 2)
Example
The following example simply transfers the score lines from Tutorial 07 to schedule statements.
So it will sound exactly the same as the example in Tutorial 07.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
 iMidiStart = p4
 iMidiEnd = p5
 kDb = linseg:k(-10,p3/2,-20)
 kMidi = linseg:k(iMidiStart,p3/3,iMidiEnd)
 aSine = poscil:a(ampdb(kDb),mtof(kMidi))
 aOut = linen:a(aSine,0,p3,1)
 outall(aOut)
endin

//five calls to "Hello" (p4 = start pitch, p5 = end pitch)
schedule("Hello",0,2,72,68)
schedule("Hello",3,3,67,73)
schedule("Hello",7,4,74,66)
schedule("Hello",12,2,72,73)
schedule("Hello",15,2,73,73.5)
</CsInstruments>
<CsScore>
//the score is empty here!
</CsScore>
</CsoundSynthesizer>
Try it yourself
Make sure that you always stop and restart Csound when you move to the next exercise item.
ä You can put the schedule lines anywhere in the CsInstruments section. Just make sure it
is outside the “Hello” instrument. Try putting the lines anywhere by copy-and-paste. You can
also scatter them anywhere, and in any order. This will probably not improve the readability
of your code, but for Csound it will make no difference.
ä Put one of the schedule lines in the score. Csound will report an error, because the score
does not understand Csound orchestra code.
ä Put all schedule lines in the “Hello” instrument. This is not an error, but nothing will hap-
pen when you run Csound: There is no statement any more which invokes any instance of
instrument “Hello”.
ä Internally, Csound converts all instrument names to positive integer numbers. You
can get this number via the nstrnum opcode. Put the code iWhatIsYourNumber =
nstrnum("Hello") anywhere in the CsInstruments section, for instance below the
schedule lines. Print this number to the console.
ä Once you know this number, replace the string “Hello” in the schedule lines by this number.
The Csound performance should be identical.
ä When we call instrument “Hello” five times, as we do in the example, we call five instances
of this instrument. We can assign numbers to these instances by calling the instrument with
fractional numbers. Rather than calling instrument 1, we will call instrument 1.1, 1.2, 1.3, 1.4,
and 1.5.
Replace the first argument of schedule by these numbers and insert print(p1) into the
instrument code to prove that the instrument received this information.
ä Keep this code, but change the first argument of schedule so that you still work with
the instrument name here. Convert the instrument name “Hello” to a number via nstrnum
and then add 0.1, 0.2, … 0.5 for the five lines. Again the console output will be the same.
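The last two exercises can be sketched like this (one possible approach; the times and fractions here are only for illustration):

```csound
instr Hello
 print(p1) //shows the fractional instance number, e.g. 1.1
endin
//global space: get the internal number of "Hello", then add the fraction
iNum = nstrnum("Hello")
schedule(iNum+0.1,0,1)
schedule(iNum+0.2,1,1)
```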
But when we leave the score empty, Csound will not terminate. To put it in an anthropomorphic
way: Csound waits. Imagine we have established a network connection and can communicate
with this Csound instance; then we might call the “Hello” instrument again this way. Or if we have
a MIDI keyboard connected, we can do the same.
So usually we want this “endless” performance. (According to the Csound Manual, this is about
nine billion years on a 64-bit machine …) But in case we do want Csound to terminate after a certain
amount of time, we can put one line in the score: an e, followed by a space and then a number.
Csound will stop after this number of seconds. Please insert e 20 in the CsScore section of our
example above and run it again. Now it should stop after 20 seconds.
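As a minimal sketch, this is all the score section needs in order to end the performance after 20 seconds:

```csound
<CsScore>
e 20
</CsScore>
```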
The Csound Reference Manual as Companion and Field for Improvements
Every opcode has its own page there. Please have a look at the page for the schedule opcode,
which is at csound.com/docs/manual/schedule.html.
You will see that the native Csound way of writing code is used here. Rather than
schedule(insnum, iwhen, idur [, ip4] [, ip5] [...])
it is written
schedule insnum, iwhen, idur [, ip4] [, ip5] [...]
This is not a big difference. In the native Csound syntax, you write the input arguments of an
opcode on the right-hand side and the output of an opcode on the left-hand side, without any = in between.
As the schedule opcode has only input arguments, there is nothing on the left-hand side.
To get used to this way of writing, let us look at the linseg manual page. You find it here in the
Reference Manual, and this is its information about the syntax of linseg:
ares linseg ia, idur1, ib [, idur2] [, ic] [...]
kres linseg ia, idur1, ib [, idur2] [, ic] [...]
In the functional style of coding which I use in this tutorial, it would read:
ares = linseg:a(ia, idur1, ib [, idur2] [, ic] [...])
kres = linseg:k(ia, idur1, ib [, idur2] [, ic] [...])
You will find detailed information on these Reference pages. Some of it may be too technical
for you. You will also find a working example for each opcode, which is very valuable for getting
an impression of what the opcode can do.
You may also read something on one of these pages which is outdated. For an open-source project
it is always a major issue to keep the documentation up to date. We are all invited to contribute if we
can, for instance by opening a ticket on Github or by suggesting an improvement of the Reference
Manual to the Csound community.
Opcodes
ä schedule calls an instrument instance similar to an i score line
ä nstrnum returns the internal Csound number of an instrument name
Go on now …
with the next tutorial: 09 Hello If.
… or read some more explanations here
It must be said that for those who write a fixed-media piece, the score offers a lot of useful and
proven tools.
So far we only used the i statement which calls an instrument instance, and which we somehow
replaced in this tutorial by the schedule opcode in the CsInstruments section.
We also mentioned the e statement which terminates Csound after a certain time.
Another useful statement is the t or “tempo” statement, which sets the duration of one beat in
metronome units. By default, this is 60, which means that one beat equals one second. But it can
be set to other values; not only once for the whole performance, but with different metronome
values at different times, and also with interpolation between them (resulting in speeding up or
slowing down).
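A small sketch (the values here are only for illustration): with the tempo set to 120, one beat lasts half a second, so all beat-based times in the score are halved in real time.

```csound
<CsScore>
t 0 120 ;tempo 120 from beat 0: one beat = 0.5 seconds
i 1 0 2 ;starts at beat 0, sounds for 2 beats = 1 second
i 1 2 2 ;starts at beat 2 = second 1
</CsScore>
```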
Here is an overview, with links to detailed descriptions, in the Csound Reference Manual:
csound.com/docs/manual/Scorestatements.html
And in this book, the Csound FLOSS Manual, we have a chapter about methods of writing scores,
too.
Steven Yi’s Blue frontend for Csound offers sophisticated possibilities of working with score events
as objects.
Triggering other score events than ‘i’ from the orchestra code
If you need to trigger another score event than an instrument event, you can use the event_i
opcode, and its k-rate version event. You should not expect everything to work, because the
Csound score is preprocessed before the Csound performance starts. This is not possible when
we fire a score event from inside the orchestra. So the t statements and similar will not work.
The e statement, however, will work, and can be used to terminate Csound in a safe way at any time
of the performance. In this example, we first call the Play instrument, which plays a sine tone for
three seconds. Then we call the Print instrument, which displays its message in the console and
calls the Terminate instrument after three seconds. This instrument then terminates the Csound
performance.
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Play
aSine = poscil:a(.2,444)
outall(linen:a(aSine,p3/2,p3,p3/2))
endin
schedule("Play",0,3)
instr Print
puts("I am calling now the 'e' statement after 3 seconds",1)
schedule("Terminate",3,0)
endin
schedule("Print",3,1)
instr Terminate
event_i("e",0)
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
An instrument can also be called with a duration of zero, as in this example:
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Zero
prints("Look at me!\n")
endin
</CsInstruments>
<CsScore>
i "Zero" 0 0
</CsScore>
</CsoundSynthesizer>
When you look at the console output, you see Look at me!.
So the instrument instance has been called, although there is “no duration” in the score line: the
third parameter is 0.
For Csound, calling an instrument with duration zero means: only execute the initialization pass.
In other words: all i-rate statements will work, but no k-rate or a-rate ones.
The other particular duration is -1. This was introduced for “unlimited” (held) duration and “tied”
notes. Important to know if we want to use -1 as duration:
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
prints("I am there!\n")
iMidiNote = p4
aSound = poscil:a(.2,mtof:i(iMidiNote))
aOut = linenr:a(aSound,0,1,.01)
outall(aOut)
endin
</CsInstruments>
<CsScore>
i 1 0 -1 70
i 1 1 -1 76
i -1 10 0 0
</CsScore>
</CsoundSynthesizer>
We hear that when the second instance starts after one second, the first instance is “pushed out”
abruptly. Csound assumes that we want to continue a legato line, so there is no reason for more
than one instance at the same time.
After ten seconds, however, the held note is finished gracefully by the last score event, having -1
as first parameter. The fade-out is done here with the linenr opcode. We will explain more about
it when we get to realtime MIDI input.
If we want an instrument to play “endlessly”, we can use the z character as a special symbol for the
duration. According to the Csound Reference Manual, it causes the instrument to run “approxi-
mately 25367 years”.
Well, I do not dare to doubt this. But in this case, after ten seconds I start another instrument
which turns off all instances of instrument “Zett”. We will say more about turnoff2 and
turnoff2_i later.
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Zett
prints("I am there!\n")
iMidiNote = p4
aSound = poscil:a(.1,mtof:i(iMidiNote))
aOut = linenr:a(aSound,0,3,.01)
outall(aOut)
endin
instr Turnoff
turnoff2_i("Zett",0,1)
endin
</CsInstruments>
<CsScore>
i "Zett" 0 z 70
i "Zett" 1 z 76
i "Zett" 4 z 69
i "Turnoff" 10 0
</CsScore>
</CsoundSynthesizer>
Please note that z can only be used in the score. If we use it in a schedule() call, it will be
interpreted as a variable name.
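If we want a held note from orchestra code instead, we can pass a negative duration such as -1 to schedule, just as -1 works for held notes in the score. A sketch, reusing the “Zett” instrument from above:

```csound
schedule("Zett",0,-1,70) //held instance; end it later, e.g. via turnoff2_i
```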
09 Hello If
We have put the schedule code outside the instrument code. In this case, it works like a score
line.
But what will happen if we put a schedule statement also inside an instrument?
Try this:
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr InfiniteCalls
//play a simple tone
aSine = poscil:a(.2,415)
aOut = linen:a(aSine,0,p3,p3)
outall(aOut)
//call the next instance after 3 seconds
schedule("InfiniteCalls",3,2)
endin
schedule("InfiniteCalls",0,2)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
But inside this instrument, we again have an instrument call. Three seconds after the instrument
instance is created, the next instance will be there:
schedule("InfiniteCalls",3,2)
Note that the start time of the new instance is very important here. If you set it to 2 seconds
instead of 3 seconds, it will create the next instance immediately after the current one. If you set
the start time of the next instance to 1 second, two instances will overlap.
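For example, changing only this line inside the instrument produces a permanently overlapping chain (a sketch of the variant just described):

```csound
//each instance lasts 2 seconds but calls its successor after 1 second,
//so there are always two overlapping instances
schedule("InfiniteCalls",1,2)
```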
We will now implement an instrument which triggers itself again, six times in total. This is what
we must do:
ä We must pass the number 6 as count variable to the first instrument instance.
ä The second instance is then called with number 5 as count variable, and so on.
ä If the instance with count variable 1 is reached, no more instances are called.
The ‘if’ Opcode in Csound
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr TryMyIf
iCount = 6
if (iCount > 1) then
prints("True!\n")
else
prints("False!\n")
endif
endin
</CsInstruments>
<CsScore>
i "TryMyIf" 0 0
</CsScore>
</CsoundSynthesizer>
The keywords here are if, else and endif. endif ends the if clause in a similar way as endin
ends an instrument.
Here you see the prints opcode again, which shows a string in the console. We will explain more
about formatting in the optional part of this tutorial.
If you want to read more about conditional branching with if, have a look at this section in Chapter
03 of this book.
Example
Please run the example and read the code. Can you figure out in which way the iCount variable is
modified? Which other parameters are changed from instance to instance?
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Hello
iMidiStart = p4
iMidiEnd = p5
kDb = linseg:k(-10,p3/2,-20)
kMidi = linseg:k(iMidiStart,p3/3,iMidiEnd)
aSine = poscil:a(ampdb(kDb),mtof(kMidi))
aOut = linen:a(aSine,0,p3,1)
outall(aOut)
iCount = p6
print(iCount)
if (iCount > 1) then
schedule("Hello",p3,p3+1,iMidiStart-1,iMidiEnd+2,iCount-1)
endif
endin
schedule("Hello", 0, 2, 72, 68, 6)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
Try it yourself
Change the code so that from instance to instance
More About ‘if’: If-Else and If-Elseif-Else
If - else
“If the sun shines then I will go out, else I will stay at home.”
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr If_Then
iSun = 1 // 1 = yes, 0 = no
if (iSun == 1) then
prints("I go out!\n")
else
prints("I stay at home!\n")
endif
endin
</CsInstruments>
<CsScore>
i "If_Then" 0 0
</CsScore>
</CsoundSynthesizer>
In if (iSun == 1) we ask for equality. This is why we use == rather than =. The
double equal sign == asks for equality between left and right. The single equal sign = sets the left-
hand side variable to the right-hand side value, as in iSun = 1. Csound also accepts if (iSun
= 1), but I believe it is better to keep these two meanings distinct.
The parentheses in (iSun == 1) can be omitted, but I prefer to keep them for better readability.
Csound does not have its own symbol or keyword for the Boolean “True” and “False”. Usually we
use 1 for True/Yes and 0 for False/No.
If - elseif - else
When we add one or more elseif questions, we come to a decision between several cases.
“If the pitch is higher than MIDI note 80, then instrument High will start. Elseif the pitch is higher
than MIDI note 60, then instrument Middle will start. Else instrument Low will start.”
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr If_Elseif_Then
iPitch = 90 // change to 70 and 50
if (iPitch > 80) then
schedule("High",0,0)
elseif (iPitch > 60) then
schedule("Middle",0,0)
else
schedule("Low",0,0)
endif
endin
instr High
prints("Instrument 'High'!\n")
endin
instr Middle
prints("Instrument 'Middle'!\n")
endin
instr Low
prints("Instrument 'Low'!\n")
endin
</CsInstruments>
<CsScore>
i "If_Elseif_Then" 0 1
</CsScore>
</CsoundSynthesizer>
Note that in the second condition elseif (iPitch > 60) we grab all pitches which are greater
than 60 and lower than or equal to 80, because we only come to this branch if the first condition if
(iPitch > 80) is not true.
Even More About If: Nested ’if’s; AND and OR
Nested ’if’s
We can have multiple levels of branching, for instance:
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Nested_If
// input situation (1 = yes)
iSun = 1
iNeedFruits = 1
iAmHungry = 1
iAmTired = 1
// nested IFs
if (iSun == 1) then //sun is shining:
if (iNeedFruits == 1) then //i need fruits
prints("I will go to the market\n")
else //i do not need fruits
prints("I will go to the woods\n")
endif //end of the 'fruits' clause
else //sun is not shining:
if (iAmHungry == 1) then //i am hungry
prints("I will cook some food\n")
else //i am not hungry
if (iAmTired == 0) then //i am not tired
prints("I will learn more Csound\n")
else //i am tired
prints("I will have a rest\n")
endif //end of the 'tired' clause
endif //end of the 'hungry' clause
endif //end of the 'sun' clause
endin
</CsInstruments>
<CsScore>
i "Nested_If" 0 0
</CsScore>
</CsoundSynthesizer>
In Csound, like in most programming languages, the symbol for this logical AND is &&.
Here comes an example for both. Try changing the values and watch the output.
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr AndOr
iSunIsShining = 1
iFinishedWork = 1
prints("iSunIsShining = %d\n",iSunIsShining)
prints("iFinishedWork = %d\n",iFinishedWork)
//AND
if (iSunIsShining == 1) && (iFinishedWork == 1) then
prints("AND = True\n")
else
prints("AND = False\n")
endif
//OR
if (iSunIsShining == 1) || (iFinishedWork == 1) then
prints("OR = True\n")
else
prints("OR = False\n")
endif
endin
</CsInstruments>
<CsScore>
i "AndOr" 0 0
</CsScore>
</CsoundSynthesizer>
Go on now …
with the next tutorial: 10 Hello Random.
… or read some more explanations here
A short form
We have a short form for if in Csound which is quite handy when we set a variable to a certain
value depending on a condition.
Instead of …
if (iCondition == 1) then
iVariable = 10
else
iVariable = 20
endif
… we can write:
iVariable = (iCondition == 1) ? 10 : 20
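For instance, an instrument could set its amplitude depending on a p-field this way (a made-up sketch; names and values are only for illustration):

```csound
instr Ternary
 iAmp = (p4 == 1) ? 0.3 : 0.1 //louder if p4 is 1, soft otherwise
 print(iAmp)
endin
schedule("Ternary",0,0,1)
```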
ä A “label” which marks a certain position in the program text. In Csound, these labels end
with a colon. We use start: here as label.
ä A “jump to” mechanism. This is called goto in Csound.
This “old-fashioned” loop counts down from 10 to 1 and then leaves the loop.
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr LoopIf
iCount = p4
start:
if (iCount > 0) then
print(iCount)
iCount = iCount-1
igoto start
else
prints("Finished!\n")
endif
endin
</CsInstruments>
<CsScore>
i "LoopIf" 0 0 10
</CsScore>
</CsoundSynthesizer>
Rather than writing iCount = iCount-1, we could also write:
iCount -= 1
Format strings
The prints opcode prints a string to the console.
The string can be a format string. This means that it has placeholders which can be filled by
variables.
These place holders always start with % and are followed by a character which signifies a data
type. The most common ones are:
ä %d for an integer
ä %f for a floating point number
ä %s for a string
Note that a line break must be requested via \n. Otherwise the next message immediately
follows the printout on the same line.
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr FormatString
prints("This is %d - an integer.\n",4)
prints("This is %f - a float.\n",sqrt(2))
prints("This is a %s.\n","string")
prints("This is a %s","concate")
prints("nated ")
prints("string.\n")
endin
</CsInstruments>
<CsScore>
i "FormatString" 0 0
</CsScore>
</CsoundSynthesizer>
More can be found here in this book. The format specifiers are basically the same as in the C
programming language. You can find a reference here.
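As in C, a specifier can also carry precision information; %.3f, for instance, prints a float with three digits after the decimal point. A small sketch:

```csound
instr Precision
 prints("The square root of 2 is roughly %.3f\n",sqrt(2))
endin
schedule("Precision",0,0)
```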
10 Hello Random
In modern art and music, random choices often play an important role. This can be on a more techni-
cal level, for instance when we use random deviations in granular synthesis to somehow imitate
nature in its permanent variety.
But it can also be an essential part of our invention that we create structures which can be realized
in one way or another, rather than determining every single note as in a melody.
We will create a simple example for this way of composing here. It will show that “working with
random” does not at all mean “withdrawing from decisions”. On the contrary: the decisions are there,
and they are most important for what can happen.
The ‘random’ Opcode and the ‘seed’
This is a simple example for a random number between 10 and 20. Please run it three times and
watch the console printout.
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Random
iRandomNumber = random:i(10,20)
prints("Random number between 10 and 20: %f\n",iRandomNumber)
endin
</CsInstruments>
<CsScore>
i "Random" 0 0
</CsScore>
</CsoundSynthesizer>
You will see that three times the same random number is generated.
Why?
Strictly speaking, the computer has no randomness, because it can only calculate. A random number is
created internally by a calculation. Once the first number is there, all subsequent numbers are determined
by a pseudorandom number generator.
The starting point of such a random sequence is called the seed. If we do not set a seed, Csound uses a
fixed number. This is the reason why we always got the same number.
ä For seed(0), Csound will seed from the current clock. This is what most other applications
do by default. It results in a different start value each time, which is what we usually want
when we use random.
ä For any positive integer number we put in seed, for instance seed(1) or seed(65537),
we get a certain start value of the random sequence. seed(1) will yield a different result than
seed(65537). But once you run your Csound program twice with seed(1), it will result in
the same random values. This is a good opportunity to check out different random traces
while being able to reproduce any of them precisely.
Please insert seed(0) in the example above. It should be placed below the 0dbfs = 1 line, in
the global space of the orchestra. When you run your code several times, it should always print a
different iRandomNumber output.
Also try to insert seed(1) or seed(2) etc. instead. You will see that each output is different, but
once you run one of them twice, you will get the same result.
The “Global Space” or “instrument 0”
Inside the orchestra, we have instrument definitions. Each instrument starts with the keyword
instr and ends with the keyword endin.
But we have also a “global space” in the orchestra. “Global” means here: Outside any instrument.
Figure 12.1: Global space (green) and instruments (brown) in CsInstruments section
As you see, the global space is not only at the top of the orchestra. In fact, every line
outside an instrument is part of the global space.
We have already used this global space. The “orchestra header constants” live in this global space,
outside any instrument, when we set:
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
We also used the global space in setting our instrument call via schedule below an instrument
definition:
instr MySpace
...
endin
schedule("MySpace",0,1)
The instrument definition establishes a local space. The schedule(...) line resides in the
global space.
Sometimes this global space is called “instrument 0”. The reason is that 1 is the smallest possible
number for the instruments in the orchestra.
- We can set global parameters like the sample rate, and also the seed, because it is a global parameter, too.
- We can define our own functions or import external code.
- We can create tables (buffers) and assign software channels.
- We can perform i-rate expressions. Insert, for instance, prints("Hello Global Space!\n") in the global space and look at the console output.
The global space is read and executed once we run Csound, even if we do not call any instrument.
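As a small sketch of these points (the instrument name is our own), the following file prints from the global space once Csound runs, although the instrument is never called:

```csound
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1

//global space: executed once when Csound starts
seed(0)
prints("Hello Global Space!\n")

instr NeverCalled
  //local space of this instrument
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
```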
Example
This time, please read the code first and guess how each note will sound. How will this sketch
develop, and how long do you expect it to last?
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
seed(12345)
instr Hello
//MIDI notes between 55 and 80 for both start and end
iMidiStart = random:i(55,80)
iMidiEnd = random:i(55,80)
//decibel between -30 and -10 for both start and end
iDbStart = random:i(-30,-10)
iDbEnd = random:i(-30,-10)
//calculate lines depending on the random choice
kDb = linseg:k(iDbStart,p3/2,iDbEnd)
kMidi = linseg:k(iMidiStart,p3/3,iMidiEnd)
//create tone with fade-out and output
aSine = poscil:a(ampdb(kDb),mtof(kMidi))
aOut = linen:a(aSine,0,p3,p3/2)
outall(aOut)
//trigger next instance with random range for start and duration
iCount = p4
if (iCount > 1) then
iStart = random:i(1,3)
iDur = p3 + random:i(-p3/2,p3)
schedule("Hello",iStart,iDur,iCount-1)
endif
endin
schedule("Hello", 0, 2, 15)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
Structural Decisions
There are many random opcodes in this code. Let us look more closely at the decisions inherent in
them, and at the effects which result from these decisions.
iMidiStart = random:i(55,80)
iMidiEnd = random:i(55,80)
Setting both the start and the end pitch of the glissando line to a range from MIDI note 55 (= G3) to 80 (=
G#5) makes rising and falling lines equally probable. Some will span a wide range
(imagine a line from 78 to 56) whilst others will span a small one (imagine a line from 62 to 64).
(If, by contrast, the end range were set higher than the start range, the pitch lines would mostly move upwards, but sometimes not.)
Similar decisions apply for the volume line which is set to:
iDbStart = random:i(-30,-10)
iDbEnd = random:i(-30,-10)
The maximum difference is 20 dB, which is not too much. So there is some variance between
louder and softer tones, but all are well perceivable, and there is not much foreground-background
effect, as would probably occur with a range of, say, -50 to -10 dB.
The most important decisions for the form are those concerning the distance between subsequent
notes and the duration of the notes:
iStart = random:i(1,3)
iDur = p3 + random:i(-p3/2,p3)
The distance between two notes is between one and three seconds, so on average we get a new
note every two seconds.
The duration of the notes, however, is managed so that the next note's duration is this note's dura-
tion (p3) plus a random value between -p3/2 and p3.
(This is the same as a random range between p3/2 and p3*2. But I personally prefer to think of
the next duration here as “this duration plus/minus something”.)
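Both ways of thinking can be written as equivalent lines of code (a sketch, not a change to the example):

```csound
//"this duration plus/minus something":
iDur = p3 + random:i(-p3/2,p3)
//the same range expressed as absolute boundaries:
iDur = random:i(p3/2,2*p3)
```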
For the first note which has a duration of two seconds, this means a random range between one
and four seconds. So the tendency of the duration is to become larger and larger. Here is what
happens in the example code above for the first seven notes:
It is interesting to see that notes 2 and 3 expand their durations as expected, but then notes 4 and 5
shrink because they chose durations close to the minimum.
In the long run, however, the larger durations prevail, so that more and more notes sound at the same
time, forming chords or clusters.
Try it yourself
- Set seed(0) instead of seed(12345) and listen to some of the results.
- Change line 32 iDur = ... so that you get equal probability for longer or shorter durations, without any directed process. Form your own opinion about this version.
- Change line 32 iDur = ... so that in the long run the durations become shorter and shorter.
- Change the code so that the durations become longer for the first half of the notes, and shorter for the second half.
- Change the code so that the distance between subsequent notes becomes shorter in the first half, and longer again in the second half.
- Apply this change also to the pitches and the volume, so that in the first half the pitches increase whilst the volume decreases, and vice versa in the second half.
Random Walks
In a random walk, the random values of the next step depend on the previous step.
In the example above, the durations follow a random walk, whilst the other random decisions
are independent of the previous step.
As shown in the figure above, note number seven, with a duration of 11.8 seconds, would not have
been possible in any of the earlier steps. It depends on the random range generated in
step six.
The random walk of the note durations is combined with a tendency, leading to a development. But
it is also possible to keep the conditions for the next step constant, but nevertheless get surprising
patterns. Have a look at the Wikipedia article or other sources.
This is a random walk for pitch, volume and duration. The conditions for the next step remain
unchanged, but nevertheless there can be a direction in each of the three parameters.
<CsoundSynthesizer>
<CsOptions>
-o dac -m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
seed(54321)
instr RandomWalk
//receive pitch and volume
iMidiPitch = p4
iDecibel = p5
//create tone with fade-out and output
aSine = poscil:a(ampdb:i(iDecibel),mtof:i(iMidiPitch))
aOut = linen:a(aSine,0,p3,p3)
outall(aOut)
//get count
iCount = p6
//only continue if notes are left
if (iCount > 1) then
//notes are always following each other
iStart = p3
//next duration is the current duration plus/minus up to half of it
iDur = p3 + random:i(-p3/2,p3/2)
//next pitch is plusminus a semitone maximum/minimum
iNextPitch = iMidiPitch + random:i(-1,1)
//next volume is plus/minus 3 dB maximum,
//but always kept in the range -50 ... -6
iNextDb = iDecibel + random:i(-3,3)
if (iNextDb > -6) || (iNextDb < -50) then
iNextDb = -25
endif
//start the next instance
schedule("RandomWalk",iStart,iDur,iNextPitch,iNextDb,iCount-1)
//otherwise turn off
else
event_i("e",0,0)
endif
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
Without changing the conditions for the random walk, we get an extreme reduction of the durations
at the end of this sequence.
In general, randomness in art is a part of our fantasy and invention. By introducing any “if”, we can
change the conditions in whatever situation. For instance: “If the pitch has been constant over the last
three steps, jump to the upper or lower border.”
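Such a rule could be sketched as follows. This is a hypothetical fragment, not part of the example above; iPrevPitch and iPrevPrevPitch are assumed to be passed in as additional p-fields.

```csound
//hypothetical: if the pitch was (nearly) constant over the last
//three steps, jump to the upper or lower border of the range
if (abs(iMidiPitch-iPrevPitch) < 0.1) && (abs(iPrevPitch-iPrevPrevPitch) < 0.1) then
  iNextPitch = (random:i(0,1) < 0.5 ? 55 : 80)
else
  iNextPitch = iMidiPitch + random:i(-1,1)
endif
```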
“Random” and “if” together can create crazy realities. Very individual, because you yourself came
across these ideas. Very rational, because it is all based on written rules and conditions. Very
unpredictable, because the random chain may lead to unforeseen results. And perhaps this is one
of its greatest qualities …
Go on now …
with the next tutorial: 11 Hello Keys.
… or read some more explanations here
This will generate a random number between 55 and 80 in every k-cycle. As calculated here (sr =
44100, ksmps = 64), this is about 690 times per second.
So this is a massive difference, introduced only by calling either random:i or random:k.
Although it is consistent and just following what we explained in Tutorial 02 and Tutorial 05, it is
often surprising for beginners.
We can even use the random opcode at a-rate. Then we generate one random value for each
sample, so 44100 times per second if this is our sample rate. When we choose a reasonable
range for minimum and maximum, we can listen to it, and call it “white noise”:
aNoise = random:a(-0.1,0.1)
This is a typical case which is covered by the randomi opcode. The i at the end of its name
means interpolating. This opcode generates random numbers at a certain density, and draws
lines between them. These lines are the interpolations.
The input arguments for randomi are: 1. minimum, 2. maximum, 3. how many values to generate
per second, 4. a mode for the behaviour at the beginning, which should be set to 3 for normal
use (consult the reference for more, or have a look here).
This is a random line between 0 and 1, with one new value every second:
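Following the argument order just described, such a line can be generated like this (the last argument 3 selects the normal initialisation behaviour):

```csound
//random line between 0 and 1, one new value per second, interpolated
kLine = randomi:k(0,1,1,3)
```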
Sometimes we want each random value to be held until the next one arrives. This is the job of the
randomh opcode.
The input parameters carry the same meaning as for randomi. This is the output for randomh
with the same input arguments as in the previous plot:
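Written out, the corresponding statement would be:

```csound
//random steps between 0 and 1, one new value per second, held
kStep = randomh:k(0,1,1,3)
```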
All this is just a small, selective view of the big world of randomness.
All the opcodes we have discussed so far produce a uniform distribution of the random numbers within a
certain range. But in nature, we often have a larger probability in the middle of the range than near its
borders. The Gaussian distribution is the mathematical formulation of this, and it is
implemented in the gauss opcode in Csound.
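As a sketch of how gauss could be used (consult the reference for the exact meaning of its range argument):

```csound
//random deviations clustering around MIDI note 67:
//most values stay close to 67, larger deviations are rarer
iMidi = 67 + gauss:i(5)
```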
And the random walk is only one possibility of randomness that depends on a context rather than
on a fresh throw of the dice. The Markov chain is another approach. If you want to read more, and run some
examples, please have a look at the Random chapter in this book.
Congratulations!
You have now finished the first ten of these tutorials. This block was meant as a general introduc-
tion. To summarize some of the contents which you know now:
I think this should be enough to jump anywhere in this textbook, and either dig deeper into the Csound
language in Chapter 03, look at sound synthesis methods in Chapter 04, or at ways to modify
existing sounds in Chapter 05.
Of course you are welcome to continue with these tutorials, which will now move on to live input and
try to cover other basic aspects of using Csound interactively.
HOW TO: INSTALLATION
What is a Frontend?
In computer science, we call the part that is close to the user the frontend, and the part that is close to
the hardware the backend.
So a frontend is what you as a user see: where you type text, scroll windows, and open and save files
by clicking in the menu bar.
There have been many discussions in the Csound community about whether one standard frontend should
be distributed together with Csound or not. At times this was the case, but as of today, it is not.
The main reasons are:
1. Each frontend has a different design and different ways of using it. There is no single standard way to build a frontend for Csound.
2. Although it is confusing for beginners, it is important to distinguish between Csound as audio engine and any frontend. It is part of Csound's great flexibility that it can be used in many contexts. Each of these contexts somehow establishes a new frontend.
A frontend will provide an editor in which you can open, modify and save .csd files.
In addition to these essentials, the existing frontends offer different other features, depending on
their orientation.
Cabbage has been developed since 2007 by Rory Walsh. Its main focus is to export plugins from
a Csound file to run in a DAW. Cabbage has grown a lot and is currently the most popular way to
work with Csound.
Blue has been developed since 2004 by Steven Yi. Its main focus is to provide an object-oriented
way to work with the Csound score. Around this, many features were added.
Web IDE is an online IDE which was started in 2018. There is nothing to install here: just create an
account and use Csound in your browser.
I personally recommend installing Csound from the sources. If you have some experience with how to
do it, you get the most recent Csound and can even switch to the development version. Building is
not complicated; a good description is here.
HOW TO: GET HELP
REFERENCE MANUAL
ORIENTATION
ERRORS
The console will look different depending on how you run Csound. Here in this online textbook,
the console shows up when you push the Run button of an example:
In frontends the console may be hidden, but you will be able to bring it into view.
Depending on how you use Csound, the console may show more or less information. But the
error messages should always be visible.
How can I find the related code line for an error message?
Csound tries to point us to the first error in the code. Look at the following output; the error was
to write poscel rather than poscil.
The first line is a bit obscure, but syntax error is the important bit here.
The second line reports the line number in the code as 13, which is true. (In frontends
the line number may sometimes be off by one.)
Note that it does not report the full line, but only up to the point at which the syntax error occurred for
the Csound parser.
If possible, run your Csound file from the command line. The command line itself will not crash,
and perhaps you see something which gives more information.
HOW TO: HARDWARE
AUDIO DEVICES
But it is good to know what the command line options are and what they mean: good for a deeper
understanding of what happens, and for being able to solve problems.
(Note that Cabbage uses its own internal audio module instead of the ones provided by Csound.
It is also hardwired to 2 channels minimum.)
If you use plain Csound (command line Csound or Csound via API), use the option
-+rtaudio=...
Remember that you also need to choose a real-time audio module. (See above how to do this.)
Csound connects with a specific sound card separately for input and output.
Output
If you use -o dac, Csound will connect with the system default. This is usually desirable.
If you want another audio device than the system default, you need to get the number of this device
first. It should be written in the Csound console when Csound starts. If not, run Csound with this
option:
csound --devices
Once you picked out the one you want to use, call it for instance as
-o dac2
Input
To get input from your sound card into Csound, use the option
-i adc
If you use -i adc, Csound will connect with the system default input. This is usually desirable.
If you want another audio device than the system default, you need to get the number of this device
first. It should be written in the Csound console when Csound starts. If not, run Csound with this
option:
csound --devices
Once you picked out the one you want to use, call it for instance as
-i adc2
Of course you can use both options together. To use the system default sound card, you will write:
-o dac -i adc
Or without spaces:
-odac -iadc
How can I synchronize the sample rate in Csound and in my audio card?
Say your system sample rate is 48000, but your Csound file has 44100. The solution is simple, and
can go either way. Change the sr line in your Csound file to 48000.
Or change the system’s sample rate to 44100. (Some systems will seamlessly handle the sampling
rate requested by Csound and will not require any intervention.)
Csound has two options to set the realtime audio buffer sizes: -b for the software buffer and -B for the hardware buffer:
As a rule of thumb:
Make sure you never have a larger ksmps value than the -b buffer size.
Usually your ksmps will be a fourth of -b. So as a standard configuration for real-time audio in
Csound I’d use:
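For instance, with the ksmps = 64 used throughout this book, a configuration following this rule of thumb could look like the line below in <CsOptions>. The concrete numbers are a suggestion, not a fixed standard; adjust them to your system.

```csound
-o dac -b 256 -B 1024
```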
Can I give realtime audio the priority over other processes in Csound?
When you have instruments that have substantial sections that could block out execution, for
instance with code that loads buffers from files or creates big tables, you can try the option
--realtime in your <CsOptions> tag.
1 As Victor Lazzarini explains (mail to Joachim Heintz, 19 March 2013), the role of -b and -B varies between the Audio Modules: ”1. For portaudio, -B is only used to suggest a latency to the backend, whereas -b is used to set the actual buffersize.
2. For coreaudio, -B is used as the size of the internal circular buffer, and -b is used for the actual IO buffer size. 3. For
jack, -B is used to determine the number of buffers used in conjunction with -b , num = (N + M + 1) / M. -b is the size of each
buffer. 4. For alsa, -B is the size of the buffer size, -b is the period size (a buffer is divided into periods). 5. For pulse, -b is
the actual buffersize passed to the device, -B is not used. In other words, -B is not too significant in 1), not used in 5), but
has a part to play in 2), 3) and 4), which is functionally similar.”
This option will give your audio processing the priority over other tasks to be done. It places all
initialisation code on a separate thread, and does not block the audio thread. Instruments start
performing only after all the initialisation is done. That can have a side-effect on scheduling if your
audio input and output buffers are not small enough, because the audio processing thread may
“run ahead” of the initialisation one, taking advantage of any slack in the buffering.
Given that this option is intrinsically linked to low-latency, realtime audio performance, and also to
reduce the effect on scheduling these other tasks, it is recommended to use small ksmps and buffer
sizes, for example ksmps = 16, 32, or 64, -b 32 or 64, and -B 256 or 512.
- A realtime audio module is selected which does not fit your system.
- The buffer sizes are too large or too small.
- The Csound instruments you run consume too much CPU power.
For Mac, this error can also be thrown when you use the internal sound card with nchnls = 2.
The reason is that the microphone input is mono, but with nchnls = 2 Csound tries to open two
input channels. The solution in this case is to use nchnls_i = 1 in addition:
nchnls = 2
nchnls_i = 1
MIDI DEVICES
How can I know the MIDI input device number when using plain Csound?
“Plain Csound” means: Using Csound via command line. If you use a frontend, MIDI handling is
done by it.
You should see the device numbers in the Csound console once you run Csound.
How can I know the MIDI output device number when using plain Csound?
“Plain Csound” means: Using Csound via command line. If you use a frontend, MIDI handling is
done by it.
You should see the device numbers in the Csound console once you run Csound.
How can I set another MIDI module than PortMidi when using plain Csound?
“Plain Csound” means: Using Csound via command line. If you use a frontend, MIDI handling is
done by it.
The default MIDI module in Csound is PortMidi. You can choose another MIDI module by using
-+rtmidi=...
The available MIDI modules depend on your operating system and on your installation. Usually
alsa is available for Linux, coremidi for Mac and winmme for Windows.
How can I select MIDI input and output devices in plain Csound?
“Plain Csound” means: Using Csound via command line. If you use a frontend, MIDI handling is
done by it.
Input devices are set via the option -M. Output devices are set with the option -Q. So if you
want MIDI input using the portmidi module, using device 2 for input and device 1 for output, your
<CsOptions> section should contain:
-+rtmidi=portmidi -M2 -Q1
Can I use more than one device for input using plain Csound?
“Plain Csound” means: Using Csound via command line. If you use a frontend, MIDI handling is
done by it.
You can use multiple MIDI input devices when you use PortMidi. This is done via the option
-Ma
If you want to change this routing of MIDI channels to instruments, you can use the massign op-
code. For instance, this statement routes MIDI channel 1 to instrument 10:
massign 1, 10
In the following example, a simple instrument which plays a sine wave is defined. There are no
score note events, so no sound will be produced unless a MIDI note is received on
channel 1.
EXAMPLE 02D01_Midi_Keybd.csd
<CsoundSynthesizer>
<CsOptions>
//it might be necessary to add -Ma here if you use plain Csound
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr Play
//get the frequency from the key pressed
iCps = cpsmidi()
//get the amplitude
iAmp = ampmidi(0dbfs * 0.3)
//generate a sine tone with these parameters
aSine = poscil:a(iAmp,iCps)
//apply fade in and fade out
aOut =linenr:a(aSine,0.01,0.1,0.01)
//write it to the output
outall(aOut)
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera and joachim heintz
Note that Csound has unlimited polyphony this way: each key pressed starts a new instance
of instrument Play, and you can have any number of instrument instances at the same time.
To receive MIDI controller events in plain Csound, opcodes like ctrl7 can be used. In the following
example instrument 1 is turned on for 60 seconds. It will receive controller #1 on channel 1 and
convert MIDI range (0-127) to a range between 220 and 440. This value is used to set the frequency
of a simple sine oscillator.
EXAMPLE 02D02_Midi_Ctlin.csd
<CsoundSynthesizer>
<CsOptions>
//it might be necessary to add -Ma here if you use plain Csound
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr 1
//receive controller number 1 on channel 1 and scale from 220 to 440
kFreq = ctrl7(1, 1, 220, 440)
//use this value as varying frequency for a sine wave
aOut = poscil:a(0.2, kFreq)
//output
outall(aOut)
endin
</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera
How can I get a simple printout of all MIDI input with plain Csound?
Csound can receive generic MIDI Data using the midiin opcode. The example below prints to the
console the data received via MIDI.
EXAMPLE 02C03_Midi_all_in.csd
<CsoundSynthesizer>
<CsOptions>
//it might be necessary to add -Ma here if you use plain Csound
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr 1
kStatus, kChan, kData1, kData2 midiin
if kStatus != 0 then ;print if any new MIDI message has been received
printk 0, kStatus
printk 0, kChan
printk 0, kData1
printk 0, kData2
endif
endin
</CsInstruments>
<CsScore>
i 1 0 -1
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera
HOW TO: OPCODES
OVERVIEW
OSCILLATORS
As for oscillators, the reasons for writing new ones are efficiency and use cases. In the early
days of computer music, each and every bit was valuable, so to speak. The oscil opcode comes
from this period and uses a very simple way of reading a table, because it was fast and good enough
for most use cases in the audible range. For other use cases, for instance LFOs, other oscillators
were added.
instr 1
//let oscil cross the table values once a second
a1 = oscil:a(1,1,giTable)
//print the result every 1/10 second
printks("Time = %.1f sec: Table value = %f\n", 1/10, times:k(), a1[0])
endin
</CsInstruments>
<CsScore>
i 1 0 2.01
</CsScore>
</CsoundSynthesizer>
The printout shows that oscil only returns the actual table values, without anything “in between”.
But this “in between” is required in many cases, for instance when a sine table consists of 1024
points, and is read with a frequency of 100 Hz at a sample rate of 44100 Hz. The number of table
values for one second is 102400, but we only have 44100 samples. So in this case 2.32… table
values meet one sample, and this requires interpolation.
Whether this lack of interpolation leads to audible artefacts or not depends on the size of the table
and the oscillator’s frequency. It can be audible for small tables and low frequencies. To avoid
any possible artefacts, oscili or poscil should be used for linear interpolation, and oscil3 or
poscil3 for cubic interpolation.
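Applied to the example above, only the opcode name changes; giTable refers to the table defined there:

```csound
instr 1
  //poscil interpolates linearly between the table values
  a1 = poscil:a(1,1,giTable)
endin
```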
SOUND FILES
HOW TO: PRINT
Csound’s print facilities are scattered amongst many different opcodes. You can print anything,
but depending on the situation, you will need a particular print opcode. The following list is not
complete, but may help you figure out the appropriate opcode for printing.
Printing at i-rate means: Printing only once, at the start of an instrument instance.
Printing at k-rate means: printing during the performance of the instrument instance. Usually we
will not print values every k-cycle, because this would result in more than a thousand printouts per
second for sr = 44100 and ksmps = 32. So the different opcodes offer either a time interval
or a trigger.
Printing at i-rate
print is for simple printing of i-rate numbers. Note that it rounds to three decimals.
iValue = 8.876543
print(iValue)
-> iValue = 8.877
Strings: puts()
Arrays: printarray()

prints()
prints offers formatting and can be used for both numbers and strings.
iValue = 8.876543
String = "Hello"
prints("%s %f!\n",String,iValue)
-> Hello 8.876543!
printf_i()
printf_i has an additional trigger input. It prints only when the trigger is larger than zero. So
this statement will do nothing:
iTrigger = 0
String = "Hello"
printf_i("%s %f!\n",iTrigger,String,iTrigger)
Printing at k-rate
printk is for simple printing of k-rate numbers (rounded to five decimals). The first input parame-
ter specifies the time interval in seconds between printouts. So this code will print the control
cycle count once a second:
kValue = timek()
printk(1,kValue)
Strings: puts()
puts prints a string on a new line. If the trigger input changes its value, the string is printed.
instr 1
//initialize string
String = "a"
//initialize trigger for puts()
kTrigger init 1
//printout when kTrigger is larger than zero and changed
puts(String,kTrigger)
//change string randomly and increase trigger
kValue = rnd:k(1)
if kValue > 0.999 then
kTrigger += 1
String = sprintfk("%c",random:k(65,90))
endif
endin
schedule(1,0,1)
-> a
W
H
X
D
A
N
Arrays: printarray()
The printarray opcode can also be used for k-rate arrays. It prints whenever the trigger input
changes from 0 to 1.
instr 1
//create an array [0,1,2,3,4,5]
kArr[] genarray_i 0,5
//print the array values once a second
printarray(kArr,metro:k(1))
//change the array values randomly
kIndx = 0
while(kIndx < lenarray(kArr)) do
kArr[kIndx] = rnd:k(6)
kIndx += 1
od
endin
schedule(1,0,4)
-> 0.0000 1.0000 2.0000 3.0000 4.0000 5.0000
0.7974 0.4855 3.4391 4.3606 5.9973 5.3945
printks()
Similar to prints, we can fill a format string. The time in seconds between printouts is given as
the parameter after the format string. So this will print once a second:
instr 1
kValue = rnd:k(1)
printks("Time = %f, kValue = %f\n",1,times:k(),kValue)
endin
schedule(1,0,4)
-> Time = 0.000726, kValue = 0.973500
Time = 1.001361, kValue = 0.262336
Time = 2.001995, kValue = 0.858091
Time = 3.002630, kValue = 0.144621
printf()
printf works with a trigger, not with a fixed time interval between printouts.
instr 1
kValue = rnd:k(1)
if kValue > 0.999 then
kTrigger = 1
else
kTrigger = 0
endif
printf("Time = %f, kValue = %f\n",kTrigger,times:k(),kValue)
endin
schedule(1,0,4)
-> Time = 0.380952, kValue = 0.999717
Time = 0.383129, kValue = 0.999951
Time = 2.535329, kValue = 0.999191
Time = 3.274739, kValue = 0.999095
Time = 3.443810, kValue = 0.999773
Time = 3.950295, kValue = 0.999182
kIndx += 1
od
printarray(kArr,-1)
endin
schedule(1,0,.001)
-> 0.0000 0.1420 0.2811 0.4145 0.5396 0.6536 0.7545 0.8400
0.9086 0.9587 0.9894 1.0000 0.9904 0.9607 0.9115 0.8439
0.7591 0.6590 0.5455 0.4210 0.2879 0.1490 0.0071 -0.1349
-0.2743 -0.4080 -0.5335 -0.6482 -0.7498 -0.8361 -0.9056 -0.9566
-0.9883 -0.9999 -0.9913 -0.9626 -0.9144 -0.8477 -0.7638 -0.6644
-0.5515 -0.4275 -0.2948 -0.1561 -0.0142 0.1279 0.2674 0.4015
Formatting
opcode ToAscii, S, S
;returns the ASCII numbers of the input string as string
Sin xin ;input string
ilen strlen Sin ;its length
ipos = 0 ;set counter to zero
Sres = "" ;initialize output string
loop: ;for all characters in input string:
ichr strchar Sin, ipos ;get its ascii code number
Snew sprintf "%d ", ichr ;put this number into a new string
Sres strcat Sres, Snew ;append this to the output string
loop_lt ipos, 1, ilen, loop ;see comment for 'loop:'
xout Sres ;return output string
endop
instr Integers
printf_i "\nIntegers:\n normal: %d or %d\n", 1, 123, -123
printf_i " signed if positive: %+d\n", 1, 123
printf_i " space left if positive:...% d...% d\n", 1, 123, -123
printf_i " fixed width left ...%-10d...or right...%10d\n", 1, 123, 123
printf_i " starting with zeros if ", 1
printf_i " necessary: %05d %05d %05d %05d %05d %05d\n",
1, 1, 12, 123, 1234, 12345, 123456
printf_i " floats are rounded: 1.1 -> %d, 1.9 -> %d\n", 1, 1.1, 1.9
endin
instr Floats
printf_i "\nFloats:\n normal: %f or %f\n", 1, 1.23, -1.23
printf_i " number of digits after point: %f %.5f %.3f %.1f\n",
1, 1.23456789, 1.23456789, 1.23456789, 1.23456789
printf_i " space left if positive:...% .3f...% .3f\n", 1, 123, -123
printf_i " signed if positive: %+f\n", 1, 1.23
printf_i " fixed width left ...%-10.3f...or right...%10.3f\n",
1, 1.23456, 1.23456
endin
instr Strings
printf_i "\nStrings:\n normal: %s\n", 1, "csound"
printf_i " fixed width left ...%-10s...or right...%10s\n",
1, "csound", "csound"
printf_i {{ a string over
multiple lines in which
you can insert also quotes: "%s"\n}}, 1, "csound"
endin
instr Characters
printf_i "\nCharacters:\n given as single strings: %s%s%s%s%s%s\n",
1, "c", "s", "o", "u", "n", "d"
printf_i " but can also be given as numbers: %c%c%c%c%c%c\n",
1, 99, 115, 111, 117, 110, 100
Scsound ToAscii "csound"
printf_i " in csound, the ASCII code of a character ", 1
printf_i "can be accessed with the opcode strchar.%s", 1, "\n"
printf_i " the name 'csound' returns the numbers %s\n\n", 1, Scsound
endin
</CsInstruments>
<CsScore>
i "Integers" 0 0
i "Floats" 0 0
i "Strings" 0 0
i "Characters" 0 0
</CsScore>
</CsoundSynthesizer>
03 A. INITIALIZATION AND PERFORMANCE PASS
Not only beginners, but also experienced Csound users run into many problems that result from a
misunderstanding of the so-called i-rate and k-rate. You want Csound to do something just once,
but Csound does it continuously. You want Csound to do something continuously, but Csound
does it just once. If you experience such a case, you have most probably confused i- and
k-rate variables.
The concept behind this is actually not complicated. But it is something which usually remains
implicit when we think of a program flow, whereas Csound wants to know it explicitly. So we
tend to forget it when we use Csound, and we do not notice that we ordered a stone to become a
wave, and a wave to become a stone. This chapter tries to explain very carefully the difference
between stones and waves, and how you can profit from them, once you have understood and accepted
both qualities.
Basic Distinction
We will explain at first the difference between i-rate and k-rate. Then we will look at some properties
of k-rate signals, and finally introduce the audio vector.
Init Pass
Whenever a Csound instrument is called, all variables are set to initial values. This is called the
initialization pass.
There are certain variables which stay in the state into which they have been put by the init-pass.
These variables start with an i if they are local (= only known inside an instrument), or with
gi if they are global (= known everywhere in the orchestra). This is a simple example:
EXAMPLE 03A01_Init-pass.csd
<CsoundSynthesizer>
<CsInstruments>
giGlobal = 1/2
instr 1
iLocal = 1/4
print giGlobal, iLocal
endin
instr 2
iLocal = 1/5
print giGlobal, iLocal
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As you see, the local variables iLocal have different values in the context of their instruments,
whereas giGlobal is known everywhere with the same value. It is also worth mentioning that the
performance time of the instruments (p3) is zero. This makes sense, as the instruments are called,
but only the init-pass is performed.1
Performance Pass
After having assigned initial values to all variables, Csound starts the actual performance. As
music is a variation of values in time,2 audio signals are sequences of values which vary in time. In
all digital audio, the time unit is given by the sample rate, and one sample is the smallest possible
time atom. For a sample rate of 44100 Hz,3 one sample corresponds to a duration of 1/44100 =
0.0000227 seconds.
So, performance for an audio application means basically: calculate all the samples which are
finally being written to the output. You can imagine this as the cooperation of a clock and a calcu-
lator. For each sample, the clock ticks, and for each tick, the next sample is calculated.
Most audio applications do not perform this calculation sample by sample. It is much more
efficient to collect a number of samples in a block or vector and calculate them all together.
This means, in fact, introducing another internal clock in your application: a clock which ticks less
frequently than the sample clock. For instance, if your sample rate is 44100 Hz and your block
size consists of 10 samples, your internal calculation clock ticks every 1/4410
(0.000227) seconds. If your block size consists of 441 samples, the clock ticks every 1/100 (0.01)
seconds.
1 You would not get any other result if you set p3 to 1 or any other value, as nothing is done here except initialization.
2 For the physical result which comes out of the loudspeakers or headphones, the variation is a variation of air pressure.
3 44100 samples per second
The following illustration shows an example for a block size of 10 samples. The samples are
shown at the bottom line. Above them are the control ticks, one for every ten samples. The top two
lines show the times for both clocks in seconds. In the topmost line you see that the first control
cycle is finished at 0.000227 seconds, the second one at 0.000454 seconds, and so on.4
The rate (frequency) of these ticks is called the control rate in Csound. For historical reasons,5
it is called kontrol rate instead of control rate, and abbreviated as kr instead of cr. Each of these
calculation cycles is called a k-cycle. The block size or vector size is given by the ksmps parameter,
which means: how many samples (smps) are collected for one k-cycle.6
Implicit Incrementation
EXAMPLE 03A02_Perf-pass_incr.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410
instr 1
kCount init 0; set kcount to 0 first
kCount = kCount + 1; increase at each k-pass
printk 0, kCount; print the value
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
4 These are, by the way, the times which Csound reports if you ask for the control cycles. The first control cycle in this example (sr=44100, ksmps=10) would be reported as 0.00027 seconds, not as 0.00000 seconds.
5 As Richard Boulanger explains, in early Csound a line starting with c was a comment line. So it was not possible to abbreviate control variables as cAnything (http://csound.1045644.n5.nabble.com/OT-why-is-control-rate-called-kontrol-rate-td5720858.html#a5720866).
6 As the k-rate directly depends on sample rate (sr) and ksmps (kr = sr/ksmps), it is probably the best style to specify sr and ksmps in the header, but not kr.
A counter (kCount) is set here to zero as initial value. Then, in each control cycle, the counter is
increased by one. What we see here is the typical behaviour of a loop. The loop has not been
set explicitly, but works implicitly because of the continuous recalculation of all k-variables. So
we can also speak of the k-cycles as an implicit (and time-triggered) k-loop.7 Try changing the
ksmps value from 4410 to 8820 and to 2205 and observe the difference.
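The counting behaviour of this implicit k-loop can be modelled with a few lines of Python (an illustrative sketch of the arithmetic, not Csound code):

```python
# Sketch of the implicit k-loop in example 03A02: a counter is incremented
# once per control cycle during a 1-second note (sr = 44100)
sr = 44100

def count_kcycles(ksmps, duration=1.0):
    """Return the final counter value after `duration` seconds."""
    kcount = 0
    n_cycles = int(duration * sr / ksmps)
    for _ in range(n_cycles):
        kcount += 1  # what `kCount = kCount + 1` does in every k-pass
    return kcount

print(count_kcycles(4410))  # 10 k-cycles in one second
print(count_kcycles(8820))  # 5
print(count_kcycles(2205))  # 20
```

Doubling ksmps halves the number of printed values, which is exactly the difference to observe when changing the header of the example.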
The next example turns the incrementation of kCount into a rising frequency. The first instrument,
called Rise, sets the k-rate frequency kFreq to the initial value of 100 Hz, and then adds 10 Hz in
every new k-cycle. As ksmps=441, one k-cycle takes 1/100 second to perform. So in 3 seconds,
the frequency rises from 100 to 3100 Hz. At the last k-cycle, the final frequency value is printed
out.8 The second instrument, Partials, increments the counter by one for each k-cycle, but only
sets this as new frequency every 100 steps. So the frequency stays at 100 Hz for one second,
then at 200 Hz for one second, and so on. As the resulting frequencies are in the ratio 1 : 2 : 3 …,
we hear partials based on a 100 Hz fundamental, from the first partial up to the 31st. The opcode
printk2 prints out the frequency value whenever it has changed.
EXAMPLE 03A03_Perf-pass_incr_listen.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 441
0dbfs = 1
nchnls = 2
giSine ftgen 0, 0, 2^10, 10, 1 ;sine table, referenced by poscil below
instr Rise
kFreq init 100
aSine poscil .2, kFreq, giSine
outs aSine, aSine
;increment frequency by 10 Hz for each k-cycle
kFreq = kFreq + 10
;print out the frequency for the last k-cycle
kLast release
if kLast == 1 then
printk 0, kFreq
7 This must not be confused with a 'real' k-loop, where a loop is performed inside one single k-cycle. See chapter 03C (section Loops) for examples.
8 The value is 3110 instead of 3100 because it has already been incremented by 10.
endif
endin
instr Partials
;initialize kCount
kCount init 100
;get new frequency if kCount equals 100, 200, ...
if kCount % 100 == 0 then
kFreq = kCount
endif
aSine poscil .2, kFreq, giSine
outs aSine, aSine
;increment kCount
kCount = kCount + 1
;print out kFreq whenever it has changed
printk2 kFreq
endin
</CsInstruments>
<CsScore>
i "Rise" 0 3
i "Partials" 4 31
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03A04_Perf-pass_no_incr.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410
instr 1
kcount = 0; sets kcount to 0 at each k-cycle
kcount = kcount + 1; does not really increase ...
printk 0, kcount; print the value
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Outputs:
There are different opcodes to print out k-variables.9 There is no opcode in Csound to print out the
audio vector directly, but we can use the vaget opcode to see what is happening inside one control
cycle with the audio samples.
EXAMPLE 03A05_Audio_vector.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 5
0dbfs = 1
instr 1
aSine poscil 1, 2205
kVec1 vaget 0, aSine
kVec2 vaget 1, aSine
kVec3 vaget 2, aSine
kVec4 vaget 3, aSine
kVec5 vaget 4, aSine
printks "kVec1 = %f, kVec2 = %f, kVec3 = %f, kVec4 = %f, kVec5 = %f\n",
0, kVec1, kVec2, kVec3, kVec4, kVec5
endin
</CsInstruments>
<CsScore>
i 1 0 [1/2205]
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
9 See the manual page for printk, printk2, printks, printf to know more about the differences.
In this example, the number of audio samples in one k-cycle is set to five by the statement ksmps=5.
The first argument to vaget specifies which sample of the block you get. For instance,
kVec1 vaget 0, aSine
gets the first value of the audio vector and writes it into the variable kVec1. For a frequency of 2205
Hz at a sample rate of 44100 Hz, you need 20 samples to write one complete cycle of the sine. So
we call the instrument for 1/2205 seconds, and we get 4 k-cycles. The printout shows exactly one
period of the sine wave.
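The numbers in this example can be verified with a short Python sketch (assuming, as poscil does by default, an ideal sine):

```python
# Arithmetic of example 03A05: one sine period spread over 4 blocks of 5 samples
import math

sr = 44100
freq = 2205
ksmps = 5

samples_per_period = sr / freq            # 20 samples for one complete sine cycle
note_duration = 1 / freq                  # the score duration [1/2205]
total_samples = round(note_duration * sr) # 20 samples in total
k_cycles = total_samples // ksmps         # 4 control cycles

# the 20 values which vaget would show, grouped into the 4 blocks
one_period = [math.sin(2 * math.pi * freq * n / sr) for n in range(total_samples)]
blocks = [one_period[i:i + ksmps] for i in range(0, total_samples, ksmps)]

print(samples_per_period, k_cycles, len(blocks))
```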
At the end of this chapter we will show another and more advanced method to access the audio
vector and modify its samples.
A Summarizing Example
After having paid so much attention to the single aspects of initialization, performance and
audio vectors, the next example tries to summarize and illustrate these aspects in their practical
combination.
EXAMPLE 03A06_Init_perf_audio.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1
instr 1
iAmp = p4 ;amplitude taken from the 4th parameter of the score line
iFreq = p5 ;frequency taken from the 5th parameter
; --- move from 0 to 1 in the duration of this instrument call (p3)
kPan line 0, p3, 1
aNote poscil iAmp, iFreq ;create an audio signal
aL, aR pan2 aNote, kPan ;let the signal move from left to right
outs aL, aR ;write it to the output
endin
</CsInstruments>
<CsScore>
i 1 0 3 0.2 443
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As ksmps=441, each control cycle is 0.01 seconds long (441/44100). So this happens when the
instrument call is performed:
Applications and Concepts
As we saw, the init opcode is used to set initial values for k- or a-variables explicitly. On the other
hand, you can get the initial value of a k-variable which has not been set explicitly by using the i() facility.
This is a simple example:
EXAMPLE 03A07_Init-values_of_k-variables.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
instr 1
gkLine line 0, p3, 1
endin
instr 2
iInstr2LineValue = i(gkLine)
print iInstr2LineValue
endin
instr 3
iInstr3LineValue = i(gkLine)
print iInstr3LineValue
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 2 0
i 3 4 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Outputs:
new alloc for instr 1:
B 0.000 .. 2.000 T 2.000 TT 2.000 M: 0.0
new alloc for instr 2:
instr 2: iInstr2LineValue = 0.400
B 2.000 .. 4.000 T 4.000 TT 4.000 M: 0.0
new alloc for instr 3:
instr 3: iInstr3LineValue = 0.800
B 4.000 .. 5.000 T 5.000 TT 5.000 M: 0.0
Instrument 1 produces a rising k-signal, starting at zero and ending at one, over a time of five
seconds. The values of this rising line are written to the global variable gkLine. After two seconds,
instrument 2 is called, and examines the value of gkLine at its init-pass via i(gkLine). The value at
this time (0.4) is printed out at init-time as iInstr2LineValue. The same happens for instrument 3,
which prints out iInstr3LineValue = 0.800, as it has been started at 4 seconds.
The i() feature is particularly useful if you need to examine the value of a control signal from a
widget or from MIDI at the time when an instrument starts.
For getting the init value of an element in a k-time array, the syntax i(kArray,iIndex) must be used;
for instance i(kArr,0) will get the first element of array kArr at init-time. More about this in the
section Init Values of k-Arrays in the Arrays chapter of this book.
If a k-variable is not set explicitly, its init value in the first call of an instrument is zero, as usual.
But for the following calls, the k-variable is initialized to the value which was left over when the
previous instance of the same instrument turned off.
The following example shows this behaviour. Instrument Call simply calls the instrument Called
once a second, and sends the number of the call to it. Instrument Called generates the variable
kRndVal by a random generator, and reports both:
- the value of kRndVal at initialization, queried via i(kRndVal)
- the value of kRndVal at performance time, i.e. in the first control cycle (after the first k-cycle,
the instrument is turned off immediately)
EXAMPLE 03A08_k-inits_in_multiple_calls_1.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
instr Call
kNumCall init 1
kTrig metro 1
if kTrig == 1 then
event "i", "Called", 0, 1, kNumCall
kNumCall += 1
endif
endin
instr Called
iNumCall = p4
kRndVal random 0, 10
prints "Initialization value of kRnd in call %d = %.3f\n",
iNumCall, i(kRndVal)
printks " New random value of kRnd generated in call %d = %.3f\n",
0, iNumCall, kRndVal
turnoff
endin
</CsInstruments>
<CsScore>
i "Call" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The printout shows what was stated before: if there is no previous value of a k-variable, this
variable is initialized to zero. If there is a previous value, it serves as the initialization value.
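This rule can be modelled in a few lines of Python. The Instrument class here is purely hypothetical, a toy model of one instrument instance, not Csound's actual implementation:

```python
# Toy model of k-variable initialization across instrument instances:
# the first instance starts from 0; a re-used instance starts from the
# value the previous instance left behind.
class Instrument:
    def __init__(self):
        self.k_val = 0           # first allocation: implicit init to zero

    def call(self, new_value):
        init_value = self.k_val  # what i(kVar) would report at init-time
        self.k_val = new_value   # value generated during performance
        return init_value

inst = Instrument()
first_init = inst.call(7.3)   # first call: init value is 0
second_init = inst.call(2.5)  # second call: init value is the leftover 7.3
print(first_init, second_init)
```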
Note that this is exactly the same for User-Defined Opcodes! If you call a UDO twice, the second
call will have the current value of a k-variable from the first call as its init value, unless you
initialize the k-variable explicitly with an init statement.
The final example shows both possibilities, using explicit initialization or not, and the resulting
effect.
EXAMPLE 03A10_k-inits_in_multiple_calls_3.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
instr without_init
prints "instr without_init, call %d:\n", p4
kVal = 1
prints " Value of kVal at initialization = %d\n", i(kVal)
printks " Value of kVal at first k-cycle = %d\n", 0, kVal
kVal = 2
turnoff
endin
instr with_init
prints "instr with_init, call %d:\n", p4
kVal init 1
kVal = 1
prints " Value of kVal at initialization = %d\n", i(kVal)
printks " Value of kVal at first k-cycle = %d\n", 0, kVal
kVal = 2
turnoff
endin
</CsInstruments>
<CsScore>
i "without_init" 0 .1 1
i "without_init" + .1 2
i "with_init" 1 .1 1
i "with_init" + .1 2
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note that this characteristic of using leftovers from previous instances, which may lead to
undesired effects, also applies to audio variables. Similar to k-variables, an audio vector is
initialized to zero for the first instance, or to the value which is explicitly set by an init statement.
In case a previous instance can be re-used, its last state will be the init state of the new instance.
The next example shows an undesired side effect in instrument 1. In the third call (start=2), the
previous values of the a1 audio vector will be used, because this variable is not set explicitly. This
means, though, that 32 amplitudes are repeated at a frequency of sr/ksmps, in this case 44100/32
= 1378.125 Hz. The same happens at start=4 with audio variable a2. Instrument 2 initializes a1
and a2 where needed, so that the inadvertent tone disappears.
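The pitch of such an inadvertent tone is easy to predict, as this small Python sketch of the arithmetic shows:

```python
# If a stale audio vector of ksmps samples is repeated in every k-cycle,
# its content loops at the control rate, producing a tone at sr/ksmps Hz.
sr = 44100

def artifact_freq(ksmps):
    return sr / ksmps

print(artifact_freq(32))  # 1378.125 Hz, as in the example
print(artifact_freq(64))  # 689.0625 Hz (try ksmps = 64 in the header)
```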
EXAMPLE 03A11_a_inits_in_multiple_calls.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32 ;try 64 or other values
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 .5 0
i . 1 . 0
i . 2 . 1
i . 3 . 1
i . 4 . 0
i . 5 . 0
i . 6 . 1
i . 7 . 1
b 9
i 2 0 .5 0
i . 1 . 0
i . 2 . 1
i . 3 . 1
i . 4 . 0
i . 5 . 0
i . 6 . 1
i . 7 . 1
</CsScore>
</CsoundSynthesizer>
;example by oeyvind brandtsegg and joachim heintz
Reinitialization
As we saw above, an i-value is not affected by the performance loop. So you cannot expect this
to work as an incrementation:
EXAMPLE 03A12_Init_no_incr.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410
instr 1
iCount init 0 ;set iCount to 0 first
iCount = iCount + 1 ;increase
print iCount ;print the value
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
But you can advise Csound to repeat the initialization of an i-variable. This is done with the reinit
opcode. You must mark a section with a label (any name followed by a colon). Then the reinit
statement will cause the i-variables in this section to be refreshed. Use rireturn to end the reinit section.
EXAMPLE 03A13_Re-init.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410
instr 1
iCount init 0 ; set icount to 0 first
reinit new ; reinit the section each k-pass
new:
iCount = iCount + 1 ; increase
print iCount ; print the value
rireturn
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Outputs:
What happens here in more detail is the following. In the actual init-pass, iCount is set to zero
via iCount init 0. Still in this init-pass, it is incremented by one (iCount = iCount+1) and the value
is printed out as iCount = 1.000. Now the first performance pass starts. The statement reinit new
advises Csound to initialize the section labeled new again. So the statement iCount = iCount +
1 is executed again. As the current value of iCount at this time is 1, the result is 2. So the printout
at this first performance pass is iCount = 2.000. The same happens in the next nine performance
cycles, so the final count is 11.
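The counting logic of this example can be traced in Python (a sketch of the arithmetic, not Csound code):

```python
# Sketch of example 03A13: iCount is incremented once in the init-pass
# and once more in each of the 10 control cycles (sr=44100, ksmps=4410, p3=1)
sr = 44100
ksmps = 4410
duration = 1

i_count = 0
i_count += 1             # init-pass: prints iCount = 1
printed = [i_count]
for _ in range(duration * sr // ksmps):  # 10 performance cycles
    i_count += 1         # reinit runs the labeled section again
    printed.append(i_count)

print(printed[-1])  # final count is 11
```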
Order of Calculation
In this context, it can be very important to observe the order in which the instruments of a Csound
orchestra are evaluated. This order is determined by the instrument numbers: within one control
cycle, lower-numbered instruments are calculated first. So if you want to use, during the same
performance pass, a value in instrument 10 which is generated by another instrument, that
instrument must have a number lower than 10. In the following example, instrument 10 first uses
a value of instrument 1, then a value of instrument 100.
EXAMPLE 03A14_Order_of_calc.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410
instr 1
gkcount init 0 ;set gkcount to 0 first
gkcount = gkcount + 1 ;increase
endin
instr 10
printk 0, gkcount ;print the value
endin
instr 100
gkcount init 0 ;set gkcount to 0 first
gkcount = gkcount + 1 ;increase
endin
</CsInstruments>
<CsScore>
;first i1 and i10
i 1 0 1
i 10 0 1
</CsScore>
</CsoundSynthesizer>
Instrument 10 can use the values which instrument 1 has produced in the same control cycle, but
it can only refer to values of instrument 100 which were produced in the previous control cycle.
For this reason, the printout shows values which are one less in the latter case.
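A toy model in Python may make this clearer. The variable names are hypothetical; the point is only the order of evaluation within one cycle:

```python
# Toy model: within one control cycle, instruments run in ascending order.
# Instrument 10 therefore sees instrument 1's value from the *same* cycle,
# but instrument 100's value from the *previous* cycle.
gk_from_1 = 0
gk_from_100 = 0
seen_by_10 = []

for cycle in range(3):
    gk_from_1 += 1                               # instr 1 runs first
    seen_by_10.append((gk_from_1, gk_from_100))  # instr 10 reads both globals
    gk_from_100 += 1                             # instr 100 runs last

print(seen_by_10)  # the value from instr 100 is always one behind
```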
Named Instruments
Instead of a number you can also use a name for an instrument. This is mostly preferable, because
you can give meaningful names, leading to better readable code. But what about the order of
calculation in named instruments?
The answer is simple: Csound calculates them in the same order as they are written in the
orchestra. So if your instrument collection is like this …
EXAMPLE 03A15_Order_of_calc_named.csd
<CsoundSynthesizer>
<CsOptions>
-nd
</CsOptions>
<CsInstruments>
instr Grain_machine
prints " Grain_machine\n"
endin
instr Fantastic_FM
prints " Fantastic_FM\n"
endin
instr Random_Filter
prints " Random_Filter\n"
endin
instr Final_Reverb
prints " Final_Reverb\n"
endin
</CsInstruments>
<CsScore>
i "Final_Reverb" 0 1
i "Random_Filter" 0 1
i "Grain_machine" 0 1
i "Fantastic_FM" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note that the score does not have the same order. But internally, Csound transforms all names to
numbers, in the order they are written from top to bottom. The numbers are reported at the top of
Csound's output:10
instr Grain_machine uses instrument number 1
instr Fantastic_FM uses instrument number 2
instr Random_Filter uses instrument number 3
instr Final_Reverb uses instrument number 4
10 If you want to know the number in an instrument, use the nstrnum opcode.
To use fractional numbers with a named instrument, first get the number which Csound assigned
to this instrument (using the nstrnum opcode), and then add the fractional part (0, 0.1, 0.2 etc.) to it.
EXAMPLE 03A16_FractionalInstrNums.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
seed 0
giArr[] fillarray 60, 65, 55, 70, 50, 75 ;MIDI note numbers, used by Trigger and Play
instr 1
iMidiNote = p4
iFreq mtof iMidiNote
aPluck pluck .1, iFreq, iFreq, 0, 1
aOut linenr aPluck, 0, 1, .01
out aOut, aOut
endin
instr Trigger
index = 0
while index < lenarray(giArr) do
iInstrNum = nstrnum("Play")+index/10
schedule(iInstrNum,index+random:i(0,.5),5)
index += 1
od
endin
instr Play
iIndx = frac(p1)*10 //index is fractional part of instr number
iFreq = mtof:i(giArr[round(iIndx)])
aPluck pluck .1, iFreq, iFreq, 0, 1
aOut linenr aPluck, 0, 1, .01
out aOut, aOut
endin
</CsInstruments>
<CsScore>
//traditional score
t 0 90
i 1.0 0 -1 60
i 1.1 1 -1 65
i 1.2 2 -1 55
i 1.3 3 -1 70
i 1.4 4 -1 50
i 1.5 5 -1 75
i -1.4 7 1 0
i -1.1 8 1 0
i -1.5 9 1 0
i -1.0 10 1 0
i -1.3 11 1 0
i -1.2 12 1 0
i "Trigger" 15 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Tips for Practical Use
The print opcode only prints i-rate variables, which are updated at the initialization pass (i-time).
If you want to print a variable which is updated at each control cycle (k-rate or k-time), you need
its counterpart printk. (As the performance pass is usually updated some thousand times per
second, printk has an additional parameter telling Csound how often you want to print out
the k-values.)
So, some opcodes are just for i-rate variables, like filelen or ftgen. Others are just for k-rate
variables, like metro or max_k. Many opcodes have variants for either i-rate or k-rate variables,
like printf_i and printf, sprintf and sprintfk, strindex and strindexk.
Most Csound opcodes are able to work either at i-time, at k-time or at audio-rate, but you
have to think carefully about what you need, as the behaviour will be very different depending on
whether you choose the i-, k- or a-variant of an opcode. For example, the random opcode can work
at all three rates:
ires random imin, imax : works at "i-time"
kres random kmin, kmax : works at "k-rate"
ares random kmin, kmax : works at "audio-rate"
If you use the i-rate random generator, you will get one value for each note. For instance, if you
want to have a different pitch for each note you are generating, you will use this one.
If you use the k-rate random generator, you will get one new value in every control cycle. If your
sample rate is 44100 and your ksmps=10, you will get 4410 new values per second! If you take
this as pitch value for a note, you will hear nothing but noisy jumping. If you want to have a
moving pitch, you can use the randomi variant of the k-rate random generator, which can reduce
the number of new values per second, and interpolate between them.
11 See the following section 03B about the variable types for more on this subject.
If you use the a-rate random generator, you will get as many new values per second as your
sample rate. If you use it in the range of your 0 dB amplitude, you produce white noise.
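The three rates thus differ simply in how many new values per second they produce, as this Python sketch of the numbers shows:

```python
# New random values per second at each rate (sr = 44100, ksmps = 10)
sr = 44100
ksmps = 10

values_per_second = {
    "i-rate": 1,            # one value per note
    "k-rate": sr // ksmps,  # one value per control cycle: 4410
    "a-rate": sr,           # one value per sample: 44100 (white noise)
}
print(values_per_second)
```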
EXAMPLE 03A17_Random_at_ika.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
</CsInstruments>
<CsScore>
i 1 0 .5
i 1 .25 .5
i 1 .5 .5
i 1 .75 .5
i 2 2 1
i 3 4 2
i 3 5 2
i 3 6 2
i 4 9 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
It has been said that usually the k-rate clock ticks much slower than the sample (a-rate) clock.
For a common size of ksmps=32, one k-value remains the same for 32 samples. This can lead to
problems, for instance if you use k-rate envelopes. Let us assume that you want to produce a very
short fade-in of 3 milliseconds, and you do it with the following line of code:
kFadeIn linseg 0, .003, 1
Such a staircase envelope is what you hear in the next example as zipper noise. The transeg
opcode produces a non-linear envelope with a sharp peak:
The rise and the decay are each 1/10 seconds long. If this envelope is produced at k-rate with a
blocksize of 128 (instr 1), the noise is clearly audible. Try changing ksmps to 64, 32 or 16 and
compare the amount of zipper noise. Instrument 2 uses an envelope at audio-rate instead.
Regardless of the blocksize, each sample is calculated separately, so the envelope will always be
smooth. Instrument 3 shows a remedy for situations in which a k-rate envelope cannot be avoided:
the a() converter will turn the k-signal into audio-rate by interpolation, thus smoothing the envelope.
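How coarse such a staircase is can be estimated with a little Python arithmetic (a sketch, not Csound code):

```python
# A k-rate envelope holds each value for ksmps samples, so a 3 ms fade-in
# at sr = 44100 becomes a staircase with only a handful of steps.
sr = 44100
fade_time = 0.003

def staircase_steps(ksmps):
    """Number of k-cycles (stairs) the fade is divided into."""
    return round(fade_time * sr / ksmps)

print(staircase_steps(128))  # 1 step: hardly a fade at all
print(staircase_steps(32))   # 4 steps: audible zipper noise
print(staircase_steps(1))    # 132 steps: effectively smooth
```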
EXAMPLE 03A18_Zipper.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
;--- increase or decrease to hear the difference more or less evident
ksmps = 128
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
r 3 ;repeat the following line 3 times
i 1 0 1
s ;end of section
r 3
i 2 0 1
s
r 3
i 3 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Time Impossible
There are two internal clocks in Csound. The sample rate (sr) determines the audio-rate, whereas
the control rate (kr) determines the rate at which a new control cycle can be started and a new
block of samples can be performed. In general, Csound can neither start nor end any event in
between two control cycles.
The next example chooses an extremely small control rate (only 10 k-cycles per second) to
illustrate this.
EXAMPLE 03A19_Time_Impossible.csd
<CsoundSynthesizer>
<CsOptions>
-o test.wav -d
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4410
nchnls = 1
0dbfs = 1
instr 1
aSine poscil .5, 430
out aSine
endin
</CsInstruments>
<CsScore>
i 1 0.05 0.1
i 1 0.4 0.15
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The first call advises instrument 1 to start performance at time 0.05. But this is impossible as it
lies between two control cycles. The second call starts at a possible time, but the duration of 0.15
again does not coincide with the control rate. So as a result, the first call starts at time 0.1, and
the second call is extended to 0.2 seconds duration:
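A possible model of this quantization in Python, consistent with the observed times (the rounding direction is an assumption based on this example's output):

```python
# Model of event-time quantization to the control-cycle grid
# (sr = 44100, ksmps = 4410 -> control period 0.1 s)
import math

kperiod = 4410 / 44100  # 0.1 seconds

def quantize_up(t):
    """Snap a time to the next control-cycle boundary."""
    # round(..., 9) guards against float noise before taking the ceiling
    return math.ceil(round(t / kperiod, 9)) * kperiod

start = quantize_up(0.05)     # 0.05 -> 0.1
duration = quantize_up(0.15)  # 0.15 -> 0.2
print(start, duration)
```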
With Csound6, these in-between possibilities were enlarged via the --sample-accurate option.
The next image shows how a 0.01 second envelope, generated by the code below, is rendered
in four different situations:
a1 init 1
a2 linen a1, p3/3, p3, p3/3
out a2
1. ksmps=128
2. ksmps=32
3. ksmps=1
4. ksmps=128 and --sample-accurate enabled
1. At ksmps=128, the last section of the envelope is missing. The reason is that, at sr=44100
Hz, 0.01 seconds contain 441 samples. 441 samples divided by the block size (ksmps) of
128 samples yields 3.4453125 blocks. This is rounded to 3, so only 3 * 128 = 384 samples
are performed. As you see, the envelope itself is calculated correctly in its shape. It would
end exactly at 0.01 seconds, but it does not, because the ksmps block ends too early. So
this envelope might introduce a click at the end of this note.
2. At ksmps=32, the number of samples (441) divided by ksmps yields a value of 13.78125.
This is rounded to 14, so the rendered audio is slightly longer than 0.01 seconds (448
samples).
3. At ksmps=1, the envelope is as expected.
4. At ksmps=128 and --sample-accurate enabled, the envelope is correct, too. Note that the
section is now 4*128=512 samples long, but the envelope is more accurate than at ksmps=32.
So, in case you experience clicks at very short envelopes although you use a-rate envelopes, it
might be necessary either to set ksmps=1, or to enable the --sample-accurate option.
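The block rounding described in cases 1 and 2 can be reproduced with this Python sketch:

```python
# Rounding of the envelope length to whole ksmps blocks (sr = 44100)
sr = 44100
dur_samples = round(0.01 * sr)  # 441 samples for the 0.01 s envelope

def rendered_samples(ksmps):
    """Samples actually performed: whole blocks, rounded to the nearest count."""
    return round(dur_samples / ksmps) * ksmps

print(rendered_samples(128))  # 384: too short, the envelope end is cut off
print(rendered_samples(32))   # 448: slightly too long
print(rendered_samples(1))    # 441: exact
```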
Direct access to the audio vector uses the a...[] syntax which is common in most programming
languages for arrays or lists. As an audio vector is ksmps samples long, we must iterate over it in
each k-cycle. This way we can both read and modify the values of single samples directly.
Moreover, control structures which are usually k-rate only can thus be applied at a-rate, for
instance a condition depending on the value of a single sample.
The next example demonstrates three different usages of sample-by-sample processing. In the
SimpleTest instrument, every single sample is multiplied by a value (1, 3 or -1) and added to itself.
This leads to amplification for iFac=3 and to silence for iFac=-1, because in this case every sample
cancels itself. In the PrintSampleIf instrument, each sample whose value is between 0.99 and 1.00
is printed to the console. In the PlaySampleIf instrument an if-condition is also applied to each
sample, but here not for printing: only the samples whose values are between 0 and 1/10000 are
played. They are then multiplied by 10000, so that not only the rhythm but also the volume is
irregular.
EXAMPLE 03A20_Sample_by_sample_processing.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr SimpleTest
iFac = p4 ;factor from the score: 1, 3 or -1
aSinus poscil .2, 443 ;signal to be processed sample by sample
kIndx = 0
while kIndx < ksmps do
aSinus[kIndx] = aSinus[kIndx] * iFac + aSinus[kIndx]
kIndx += 1
od
out aSinus, aSinus
endin
instr PrintSampleIf
aRnd rnd31 1, 0, 1
kBlkCnt init 0
kSmpCnt init 0
kIndx = 0
while kIndx < ksmps do
if aRnd[kIndx] > 0.99 then
printf "Block = %2d, Sample = %4d, Value = %f\n",
kSmpCnt, kBlkCnt, kSmpCnt, aRnd[kIndx]
endif
kIndx += 1
kSmpCnt += 1
od
kBlkCnt += 1
endin
instr PlaySampleIf
aRnd rnd31 1, 0, 1
aOut init 0
kBlkCnt init 0
kSmpCnt init 0
kIndx = 0
while kIndx < ksmps do
if aRnd[kIndx] > 0 && aRnd[kIndx] < 1/10000 then
aOut[kIndx] = aRnd[kIndx] * 10000
else
aOut[kIndx] = 0
endif
kIndx += 1
kSmpCnt += 1
od
kBlkCnt += 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "SimpleTest" 0 1 1
i "SimpleTest" 2 1 3
i "SimpleTest" 4 1 -1
i "PrintSampleIf" 6 .033
i "PlaySampleIf" 8 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The output should contain these lines, generated by the PrintSampleIf instrument, showing that in
block 40 there were two consecutive samples which met the condition:
Block = 2, Sample = 86, Value = 0.998916
Block = 7, Sample = 244, Value = 0.998233
Block = 19, Sample = 638, Value = 0.995197
Block = 27, Sample = 883, Value = 0.990801
Block = 34, Sample = 1106, Value = 0.997471
Block = 40, Sample = 1308, Value = 1.000000
Block = 40, Sample = 1309, Value = 0.998184
Block = 43, Sample = 1382, Value = 0.994353
At the end of chapter 03G an example is shown for a more practical use of sample-by-sample
processing in Csound: implementing a digital filter as a user-defined opcode.
The first case is easy to understand, although some results may be unexpected. Any k-variable
which is not explicitly initialized is set to zero as its initial value.
EXAMPLE 03A21_Init_explcit_implicit.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
;explicit initialization
k_Exp init 10
S_Exp init "goodbye"
;implicit initialization
k_Imp linseg 10, 1, 0
S_Imp strcpyk "world"
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The implicit output may come as some surprise. The variable k_Imp is not initialized to 10,
although 10 will be the first value during performance. And S_Imp carries "world" already at
initialization, although the opcode name strcpyk may suggest something else. But as the manual
page states: strcpyk does the assignment both at initialization and performance time.
What happens if there are two init statements, one following the other? Usually the second one
overwrites the first. But if a k-value is explicitly set via the init opcode, the implicit initialization
will not take place.
EXAMPLE 03A22_Init_overwrite.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
;k-variables
k_var init 20
k_var linseg 10, 1, 0
;string variables
S_var init "goodbye"
S_var strcpyk "world"
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Both pairs of lines in the code look similar, but do something quite different. For k_var the line
k_var linseg 10, 1, 0 will not initialize k_var to zero, as this happens only if no init value is
assigned. The line S_var strcpyk "world" instead does an explicit initialization, and this ini-
tialization will overwrite the preceding one. If the lines were swapped, the result would be goodbye
rather than world.
If-clauses can be either i-rate or k-rate. A k-rate if-clause nevertheless initializes. Reading the next
example may suggest that the variable String is only initialized to "yes", because the if-condition will
never become true. But regardless of whether it is true or false, any k-rate if-clause initializes its
expressions, in this case the String variable.
EXAMPLE 03A23_Init_hidden_in_if.csd
<CsoundSynthesizer>
<CsOptions>
-m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
;print it at init-time
printf_i "INIT 1: %s", 1, String
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Returns:
INIT 1: yes!
INIT 2: no!
PERF 1: yes!
PERF 2: yes!
PERF 3: yes!
If you want to skip the initialization at this point, you can use an igoto statement:
if kBla == 1 then
igoto skip
String strcpyk "no!\n"
skip:
endif
A user may expect that a UDO behaves identically to a Csound native opcode, but in terms of
implicit initialization this is not the case. In the following example, we may expect that instrument 2
has the same output as instrument 1.
EXAMPLE 03A24_Init_hidden_in_udo.csd
<CsoundSynthesizer>
<CsOptions>
-m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
opcode RndInt, k, kk
kMin, kMax xin
kRnd random kMin, kMax+.999999
kRnd = int(kRnd)
xout kRnd
endop
instr 1 ;opcode
kBla init 10
kBla random 1, 2
prints "instr 1: kBla initialized to %d\n", i(kBla)
turnoff
endin
instr 2 ;udo
kBla init 10
kBla RndInt 1, 2
prints "instr 2: kBla initialized to %d\n", i(kBla)
turnoff
endin
instr 3 ;udo called as function
kBla init 10
kBla = RndInt(1, 2)
prints "instr 3: kBla initialized to %d\n", i(kBla)
turnoff
endin
</CsInstruments>
<CsScore>
i 1 0 .1
i 2 + .
i 3 + .
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The reason why instrument 2 performs implicit initialization of kBla can be found in the manual page
for xin / xout: These opcodes actually run only at i-time. In this case, kBla is initialized to zero, because
the kRnd variable inside the UDO is implicitly zero at init-time.
Instrument 3, on the other hand, uses the = operator. It works as other native opcodes: if a k-
variable has an explicit init value, it does not initialize again.
The examples about hidden (implicit) initialization may look somewhat contrived and far from
normal usage. But this is not the case. As users we may think: "I perform a line from 10 to 0 in 1
second, and I write this to the variable kLine. So i(kLine) is 10." It is not, and if you send this value
at init-time to another instrument, your program will produce wrong output.
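As a minimal sketch of this pitfall (instrument and variable names are only illustrative):

```csound
instr 1
 ;kLine is not explicitly initialized, so its init value is 0
 kLine linseg 10, 1, 0
 ;prints 0, not 10, because i() returns the value at the init pass
 prints "i(kLine) = %d\n", i(kLine)
endin
```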
03 B. LOCAL AND GLOBAL VARIABLES
Variable Types
In Csound, there are several types of variables. It is important to understand the differences be-
tween these types. There are
ä initialization variables, which are updated at each initialization pass, i.e. at the beginning of
each note or score event. They start with the character i. To this group count also the score
parameter fields, which always starts with a p, followed by any number: p1 refers to the first
parameter field in the score, p2 to the second one, and so on.
ä control variables, which are updated at each control cycle during the performance of an
instrument. They start with the character k.
ä audio variables, which are also updated at each control cycle, but instead of a single number
(like control variables) they consist of a vector (a collection of numbers), having in this way
one number for each sample. They start with the character a.
ä string variables, which are updated either at i-time or at k-time (depending on the opcode
which produces a string). They start with the character S.
Except these four standard types, there are two other variable types which are used for spectral
processing:
ä f-variables are used for the streaming phase vocoder opcodes (all starting with the char-
acters pvs), which are very important for doing realtime FFT (Fast Fourier Transform) in
Csound. They are updated at k-time, but their values depend also on the FFT parameters
like frame size and overlap. Examples for using f-sigs can be found in chapter 05 I.
ä w-variables are used in some older spectral processing opcodes.
The following example exemplifies all the variable types (except the w-type):
EXAMPLE 03B01_Variable_types.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
instr 3; a-variables
aVar1 poscil .2, 400; first audio signal: sine
aVar2 rand 1; second audio signal: noise
aVar3 butbp aVar2, 1200, 12; third audio signal: noise filtered
aVar = aVar1 + aVar3; audio variables can also be added
outs aVar, aVar; write to sound card
endin
instr 4; S-variables
iMyVar random 0, 10; one random value per note
kMyVar random 0, 10; one random value per each control-cycle
;S-variable updated just at init-time
SMyVar1 sprintf "This string is updated just at init-time: kMyVar = %d\n",
iMyVar
printf_i "%s", 1, SMyVar1
;S-variable updates at each control-cycle
printks "This string is updated at k-time: kMyVar = %.3f\n", .1, kMyVar
endin
instr 5; f-variables
aSig rand .2; audio signal (noise)
; f-signal by FFT-analyzing the audio-signal
fSig1 pvsanal aSig, 1024, 256, 1024, 1
; second f-signal (spectral bandpass filter)
fSig2 pvsbandp fSig1, 350, 400, 400, 450
aOut pvsynth fSig2; change back to audio signal
outs aOut*20, aOut*20
endin
</CsInstruments>
<CsScore>
; p1 p2 p3
i 1 0 0.1
i 1 0.1 0.1
i 2 1 1
i 3 2 1
i 4 3 1
i 5 4 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
You can think of variables as named connectors between opcodes. You can connect the output
from an opcode to the input of another. The type of connector (audio, control, etc.) is determined
by the first letter of its name.
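For instance, in this small sketch the name aSig serves as an audio connector, carrying the output of poscil into the inputs of outs:

```csound
instr 1
 aSig poscil .2, 400 ;the output of poscil is plugged into the connector aSig
 outs aSig, aSig     ;the same connector is plugged into both inputs of outs
endin
```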
For a more detailed discussion, see the article An overview Of Csound Variable Types by Andrés
Cabrera in the Csound Journal, and the page about Types, Constants and Variables in the Canoni-
cal Csound Manual.
Local Scope
The scope of these variables is usually the instrument in which they are defined. They are local
variables. In the following example, the variables in instrument 1 and instrument 2 have the same
names, but different values.
EXAMPLE 03B02_Local_scope.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
instr 1
;i-variable
iMyVar init 0
iMyVar = iMyVar + 1
print iMyVar
;k-variable
kMyVar init 0
kMyVar = kMyVar + 1
printk 0, kMyVar
;a-variable
aMyVar oscils .2, 400, 0
outs aMyVar, aMyVar
;S-variable updated just at init-time
SMyVar1 sprintf "This string is updated just at init-time: kMyVar = %d\n",
i(kMyVar)
printf "%s", kMyVar, SMyVar1
;S-variable updated at each control-cycle
SMyVar2 sprintfk "This string is updated at k-time: kMyVar = %d\n", kMyVar
printf "%s", kMyVar, SMyVar2
endin
instr 2
;i-variable
iMyVar init 100
iMyVar = iMyVar + 1
print iMyVar
;k-variable
kMyVar init 100
kMyVar = kMyVar + 1
printk 0, kMyVar
;a-variable
</CsInstruments>
<CsScore>
i 1 0 .3
i 2 1 .3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
This is the output (first the output at init-time by the print opcode, then at each k-cycle the output
of printk and the two printf opcodes):
new alloc for instr 1:
instr 1: iMyVar = 1.000
i 1 time 0.10000: 1.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 1
i 1 time 0.20000: 2.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 2
i 1 time 0.30000: 3.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 3
B 0.000 .. 1.000 T 1.000 TT 1.000 M: 0.20000 0.20000
new alloc for instr 2:
instr 2: iMyVar = 101.000
i 2 time 1.10000: 101.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 101
i 2 time 1.20000: 102.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 102
i 2 time 1.30000: 103.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 103
B 1.000 .. 1.300 T 1.300 TT 1.300 M: 0.29998 0.29998
Global Scope
If you need variables which are recognized beyond the scope of an instrument, you must define
them as global. This is done by prefixing the character g before the types i, k, a or S. See the
following example:
EXAMPLE 03B03_Global_scope.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
instr 1
;global i-variable
giMyVar = giMyVar + 1
print giMyVar
;global k-variable
gkMyVar = gkMyVar + 1
printk 0, gkMyVar
;global S-variable updated just at init-time
gSMyVar1 sprintf "This string is updated just at init-time: gkMyVar = %d\n",
i(gkMyVar)
printf "%s", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle
gSMyVar2 sprintfk "This string is updated at k-time: gkMyVar = %d\n", gkMyVar
printf "%s", gkMyVar, gSMyVar2
endin
instr 2
;global i-variable, gets value from instr 1
giMyVar = giMyVar + 1
print giMyVar
;global k-variable, gets value from instr 1
gkMyVar = gkMyVar + 1
printk 0, gkMyVar
;global S-variable updated just at init-time, gets value from instr 1
printf "Instr 1 tells: '%s'\n", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle, gets value from instr 1
printf "Instr 1 tells: '%s'\n\n", gkMyVar, gSMyVar2
endin
</CsInstruments>
<CsScore>
i 1 0 .3
i 2 0 .3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The output shows the global scope, as instrument 2 uses the values which have been changed by
instrument 1 in the same control cycle:
new alloc for instr 1:
instr 1: giMyVar = 1.000
new alloc for instr 2:
instr 2: giMyVar = 2.000
i 1 time 0.10000: 1.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 1
i 2 time 0.10000: 2.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 1'
How To Work With Global Audio Variables
The next few examples go into a bit more detail. If you just want to see the result (a global audio
variable usually must be cleared), you can skip them and go straight to the last example of this
section.
Introductory Examples
It should first be understood that Csound treats a global audio variable the same as a local one,
as long as it is used like a local audio signal:
EXAMPLE 03B04_Global_audio_intro.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Of course there is no need to use a global variable in this case. If you do, you risk having your audio
overwritten by an instrument with a higher number that uses the same variable name. In the
following example, you will just hear a 600 Hz sine tone, because the 400 Hz sine of instrument 1
is overwritten by the 600 Hz sine of instrument 2:
EXAMPLE 03B05_Global_audio_overwritten.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
i 3 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03B06_Global_audio_added.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
instr 1
kSum init 0; sum is zero at init pass
kAdd = 1; control signal to add
kSum = kSum + kAdd; new sum in each k-cycle
printk 0, kSum; print the sum
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
In this case, the "sum bus" kSum increases by 1 at each control cycle, because in each k-pass it
adds the kAdd signal (which is always 1) to its previous state. It makes no difference whether this
is done by a local k-signal, as here, or by a global k-signal, as in the next example:
EXAMPLE 03B07_Global_control_added.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
instr 1
gkAdd = 1; control signal to add
endin
instr 2
gkSum = gkSum + gkAdd; new sum in each k-cycle
printk 0, gkSum; print the sum
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
With audio variables, however, the situation is different, because an audio variable is not a single
number but a vector. For instance, with sr=44100 and ksmps=100, you will calculate 441 times in
one second a vector which consists of 100 numbers, indicating the amplitude of each sample.
So, if you add an audio signal to its previous state, different things can happen, depending on the
vector's present and previous states. If both previous and present states (with ksmps=9) are [0
0.1 0.2 0.1 0 -0.1 -0.2 -0.1 0], you will get a signal which is twice as strong: [0 0.2 0.4 0.2 0 -0.2 -0.4
-0.2 0]. But if the present state is the opposite, [0 -0.1 -0.2 -0.1 0 0.1 0.2 0.1 0], adding them will
only give zeros. This is shown in the next example with a local audio variable, and then in the
following example with a global audio variable.
EXAMPLE 03B08_Local_audio_add.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
;(change to 441 to see the difference)
nchnls = 2
0dbfs = 1
instr 1
;initialize a general audio variable
aSum init 0
;produce a sine signal (change frequency to 401 to see the difference)
aAdd oscils .1, 400, 0
;add it to the general audio (= the previous vector)
aSum = aSum + aAdd
kmax max_k aSum, 1, 1; calculate maximum
printk 0, kmax; print it out
outs aSum, aSum
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
prints:
i 1 time 0.10000: 0.10000
i 1 time 0.20000: 0.20000
i 1 time 0.30000: 0.30000
i 1 time 0.40000: 0.40000
i 1 time 0.50000: 0.50000
i 1 time 0.60000: 0.60000
i 1 time 0.70000: 0.70000
i 1 time 0.80000: 0.79999
i 1 time 0.90000: 0.89999
i 1 time 1.00000: 0.99999
EXAMPLE 03B09_Global_audio_add.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4410; very high because of printing
;(change to 441 to see the difference)
nchnls = 2
0dbfs = 1
instr 1
;produce a sine signal (change frequency to 401 to see the difference)
aAdd oscils .1, 400, 0
;add it to the general audio (= the previous vector)
gaSum = gaSum + aAdd
endin
instr 2
kmax max_k gaSum, 1, 1; calculate maximum
printk 0, kmax; print it out
outs gaSum, gaSum
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
In both cases, you get a signal which increases every 1/10 second, because you have 10 control
cycles per second (ksmps=4410), and the frequency of 400 Hz can be evenly divided by this. If
you change the ksmps value to 441, you will get a signal which increases much faster and is out
of range after 1/10 second. If you change the frequency to 401 Hz, you will get a signal which
first increases and then decreases, because each audio vector contains 40.1 cycles of the sine
wave. So the phases are shifting, first reinforcing and then erasing each other. If you change the
frequency to 10 Hz, and then to 15 Hz (at ksmps=44100), you cannot hear anything, but if you
render to file, you can see the whole process of reinforcing or erasing quite clearly:
Figure 19.1: Self-reinforcing global audio signal on account of its state in one control cycle being the
same as in the previous one
Figure 19.2: Partly self-erasing global audio signal because of phase inversions in two subsequent
control cycles
EXAMPLE 03B10_Global_with_clear.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
f 1 0 1024 10 1 .5 .3 .1
i 1 0 20
i 2 0 20
i 3 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
For audio variables, instead of performing an addition, you can use the chnmix opcode. For clear-
ing an audio variable, the chnclear opcode can be used.
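A minimal sketch of this pattern, using a hypothetical channel name "sum", might look like this:

```csound
instr 1 ;one of possibly many sound generators
 aSnd poscil .2, 400
 chnmix aSnd, "sum" ;mix into the named channel instead of a global variable
endin

instr 100 ;output instrument, called after all generators
 aSum chnget "sum" ;read the mixed signal
 outs aSum, aSum
 chnclear "sum" ;clear the channel so signals do not accumulate
endin
```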
EXAMPLE 03B11_Chn_demo.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
endin
</CsInstruments>
<CsScore>
i 1 0 20
i 2 0 20
i 3 0 20
i 11 0 20
i 12 0 20
i 20 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
03 C. CONTROL STRUCTURES
In a way, control structures are the core of a programming language. The fundamental element in
each language is the conditional if branch. Actually, all other control structures like for-, until- or
while-loops can be traced back to if-statements.
So Csound mainly provides the if-statement, either in the usual if-then-else form, or in the older form
of an if-goto statement. These will be covered first. Though all necessary loops can be built just with
if-statements, Csound's while, until and loop facilities offer a more comfortable way of performing
loops. They will be introduced later, in the Loop and the While / Until sections of this chapter. Finally,
time loops are shown, which are particularly important in audio programming languages.
For instance, if we test whether a soundfile is mono or stereo, this is done at init-time. If we test
whether an amplitude value is below a certain threshold, this is done at performance time (k-time). If
we receive user input, for instance from a controller, this is also a k-value, so we need a k-condition.
Thus if and while, as the most used control structures, each have an i-rate and a k-rate descendant.
In the next few sections, a general introduction to the different control tools is given, followed by
examples both at i-time and at k-time for each tool.
If - then - [elseif - then -] else
if <condition> then
...
else
...
endif
If statements can also be nested. Each level must be closed with an endif. This is an example
with three levels:
if <condition1> then; first condition opened
if <condition2> then; second condition opened
if <condition3> then; third condition opened
...
else
...
endif; third condition closed
elseif <condition2a> then
...
endif; second condition closed
else
...
endif; first condition closed
i-Rate Examples
A typical problem in Csound: You have either mono or stereo files, and want to read both with a
stereo output. For the real stereo ones that means: use diskin (soundin / diskin2) with two output
arguments. For the mono ones it means: use it with one output argument, and throw it to both
output channels:1
EXAMPLE 03C01_IfThen_i.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
1 The modern way to solve this is to work with an audio array as output of diskin. Nevertheless, the example shows a
typical usage of i-rate if branching.
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
Sfile = "ClassGuit.wav"
ifilchnls filenchnls Sfile
if ifilchnls == 1 then ;mono
aL soundin Sfile
aR = aL
else ;stereo
aL, aR soundin Sfile
endif
outs aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz
k-Rate Examples
The following example establishes a moving gate between 0 and 1. If the gate is above 0.5, the
gate opens and you hear a tone. If the gate is equal to or below 0.5, the gate closes, and you hear
nothing.
EXAMPLE 03C02_IfThen_k.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
giTone ftgen 0, 0, 2^10, 10, 1, .5, .3, .1
instr 1
kGate randomi 0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq randomi 300, 800, 1; moves between 300 and 800 hz
;(1 new value per sec)
kdB randomi -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig oscil3 1, kFreq, giTone
kVol init 0
if kGate > 0.5 then; if condition is true
kVol = ampdb(kdB); open gate
else
kVol = 0; otherwise close gate
endif
kVol port kVol, .02; smooth volume curve to avoid clicks
aOut = aSig * kVol
outs aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Short Form: (a v b ? x : y)
If you need an if-statement to assign a value to an (i- or k-) variable, you can also use a traditional
short form in parentheses: (a v b ? x : y),2 where v stands for a comparison operator. It asks whether
the condition a v b is true. If it is, the variable is set to x; if not, to y. For instance, the last example
could be written in this way:
EXAMPLE 03C03_IfThen_short_form.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
giTone ftgen 0, 0, 2^10, 10, 1, .5, .3, .1
instr 1
kGate randomi 0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq randomi 300, 800, 1; moves between 300 and 800 hz
;(1 new value per sec)
kdB randomi -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig oscil3 1, kFreq, giTone
kVol init 0
kVol = (kGate > 0.5 ? ampdb(kdB) : 0); short form of condition
kVol port kVol, .02; smooth volume curve to avoid clicks
aOut = aSig * kVol
outs aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
2 Since the release of the new parser (Csound 5.14), the expression can also be written without parentheses.
</CsoundSynthesizer>
;example by joachim heintz
If - goto
An older way of performing a conditional branch, but still useful in certain cases, is an if statement
which is not followed by a then, but by a label name. The else construction follows (or doesn't
follow) in the next line. Like the if-then-else statement, if-goto works either at i-time or at k-time.
You should declare the type by using either igoto or kgoto. Usually you need an additional
igoto/kgoto statement to skip the else block if the first condition is true. This is the general
syntax:
i-time
if <condition> igoto this; same as if-then
igoto that; same as else
this: ;the label "this" ...
...
igoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
k-time
if <condition> kgoto this; same as if-then
kgoto that; same as else
this: ;the label "this" ...
...
kgoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
If a plain goto is used, it is a combination of igoto and kgoto, so the condition is tested at both the
initialization and the performance pass.
i-Rate Examples
This is the same branch depending on a mono or stereo file as shown above in if-then-else syntax.
If you just want to know whether a file is mono or stereo, you can use the pure if-igoto
statement:
EXAMPLE 03C04_IfGoto_i.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
Sfile = "ClassGuit.wav"
ifilchnls filenchnls Sfile
if ifilchnls == 1 igoto mono; condition if true
igoto stereo; else condition
mono:
prints "The file is mono!%n"
igoto continue
stereo:
prints "The file is stereo!%n"
continue:
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
But if you want to play the file, you must also use a k-rate if-kgoto, because you not only have
an event at i-time (initializing the soundin opcode) but also at k-time (producing an audio signal).
So goto must be used here, combining igoto and kgoto.
EXAMPLE 03C05_IfGoto_ik.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
Sfile = "ClassGuit.wav"
ifilchnls filenchnls Sfile
if ifilchnls == 1 goto mono
goto stereo
mono:
aL soundin Sfile
aR = aL
goto continue
stereo:
aL, aR soundin Sfile
continue:
outs aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
k-Rate Examples
This is the same moving gate between 0 and 1 as in example 03C02 above, now written in if-kgoto
syntax:
EXAMPLE 03C06_IfGoto_k.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
giTone ftgen 0, 0, 2^10, 10, 1, .5, .3, .1
instr 1
kGate randomi 0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq randomi 300, 800, 1; moves between 300 and 800 hz
;(1 new value per sec)
kdB randomi -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig oscil3 1, kFreq, giTone
kVol init 0
if kGate > 0.5 kgoto open; if condition is true
kgoto close; "else" condition
open:
kVol = ampdb(kdB)
kgoto continue
close:
kVol = 0
continue:
kVol port kVol, .02; smooth volume curve to avoid clicks
aOut = aSig * kVol
outs aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Loops
Loops can be built either at i-time or at k-time just with the if facility. The following example shows
an i-rate and a k-rate loop created using the if-i/kgoto facility:
EXAMPLE 03C07_Loops_with_if.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
But Csound offers a slightly simpler syntax for this kind of i-rate or k-rate loop. There are four vari-
ants of the loop opcode. All four refer to a label as the starting point of the loop, an index variable
as a counter, an increment or decrement, and finally a reference value (maximum or minimum) for
comparison:
ä loop_lt counts upwards and looks if the index variable is lower than the reference value;
ä loop_le also counts upwards and looks if the index is lower than or equal to the reference
value;
ä loop_gt counts downwards and looks if the index is greater than the reference value;
ä loop_ge also counts downwards and looks if the index is greater than or equal to the refer-
ence value.
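As a minimal sketch, an i-rate loop with loop_lt which counts from 0 to 9 might look like this:

```csound
instr 1
 indx = 0
loop:
 print indx ;prints the values 0, 1, 2 ... 9
 loop_lt indx, 1, 10, loop ;add 1 to indx; jump to loop as long as indx < 10
endin
```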
As always, all four opcodes can be applied either at i-time or at k-time. Here are some examples,
first for i-time loops, and then for k-time loops.
i-Rate Examples
The following .csd provides a simple example for all four loop opcodes:
EXAMPLE 03C08_Loop_opcodes_i.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
i 3 0 0
i 4 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The next example produces a random string of 10 characters and prints it out:
EXAMPLE 03C09_Random_string.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
instr 1
icount = 0
Sname = ""; starts with an empty string
loop:
ichar random 65, 90.999
Schar sprintf "%c", int(ichar); new character
Sname strcat Sname, Schar; append to Sname
loop_lt icount, 1, 10, loop; loop construction
printf_i "My name is '%s'!\n", 1, Sname; print result
endin
</CsInstruments>
<CsScore>
; call instr 1 ten times
r 10
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
You can also use an i-rate loop to fill a function table (= buffer) with any kind of values. This table
can then be read, or manipulated and then be read again. In the next example, a function table
with 20 positions (indices) is filled with random integers between 0 and 10 by instrument 1.
Nearly the same loop construction is used afterwards to read these values in instrument 2.
EXAMPLE 03C10_Random_ftable_fill.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
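A minimal sketch of the filling part, assuming an empty 20-point table created with ftgen (negative size for an exact, non-power-of-two length), might look like this:

```csound
giTable ftgen 0, 0, -20, -2, 0 ;empty table with 20 points

instr 1 ;fill the table with random integers between 0 and 10
 indx = 0
loop:
 iRnd random 0, 10.999
 tableiw int(iRnd), indx, giTable ;write one value at init-time
 loop_lt indx, 1, 20, loop
endin
```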
k-Rate Examples
The next example performs a loop at k-time. Once per second, every value of an existing function
table is changed by a random deviation of up to 10%. Though there are some vectorial opcodes
(and, since Csound 6, arrays) for this task, it can also be done by a k-rate loop like the one shown here:
EXAMPLE 03C11_Table_random_dev.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 256, 10, 1; sine wave table with 256 points
instr 1
ktiminstk timeinstk ;time in control-cycles
kcount init 1
if ktiminstk == kcount * kr then; once per second table values manipulation:
kndx = 0
loop:
krand random -.1, .1;random factor for deviations
kval table kndx, giSine; read old value
knewval = kval + (kval * krand); calculate new value
tablew knewval, kndx, giSine; write new value
loop_lt kndx, 1, 256, loop; loop construction
kcount = kcount + 1; increase counter
endif
asig poscil .2, 400, giSine
outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
185
While / Until 03 C. CONTROL STRUCTURES
While / Until
Since the release of Csound 6, it has been possible to write loops in a manner similar to that used
by many other programming languages, using the keywords while or until. The general syntax is:
while <condition> do
...
od
until <condition> do
...
od
The body of the while loop will be performed again and again as long as <condition> is true. The
body of the until loop will be performed as long as <condition> is false. This is a simple
example at i-rate:
EXAMPLE 03C12_while_until_i-rate.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
instr 1
iCounter = 0
while iCounter < 5 do
print iCounter
iCounter += 1
od
prints "\n"
endin
instr 2
iCounter = 0
until iCounter >= 5 do
print iCounter
iCounter += 1
od
endin
</CsInstruments>
<CsScore>
i 1 0 .1
i 2 .1 .1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Prints:
instr 1: iCounter = 0.000
instr 1: iCounter = 1.000
instr 1: iCounter = 2.000
instr 1: iCounter = 3.000
instr 1: iCounter = 4.000
The most important thing when using a while/until loop is to increment the variable you are using
in the loop (here: iCounter). This is done by the statement
iCounter += 1
If you miss this increment, Csound will perform an endless loop, and you will have to terminate it
via the operating system.
The next example shows a similar process at k-rate. It uses a while loop to print the values of an
array, and also to set new values. As this procedure is repeated in each control cycle, the instrument
is turned off after the third cycle.
EXAMPLE 03C13_while_until_k-rate.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
gkArray[] fillarray 1, 2, 3, 4, 5
instr 1
;count performance cycles and print it
kCycle timeinstk
printks "kCycle = %d\n", 0, kCycle
;set index to zero
kIndex = 0
;perform the loop
while kIndex < lenarray(gkArray) do
;print array value
printf " gkArray[%d] = %d\n", kIndex+1, kIndex, gkArray[kIndex]
;square array value
gkArray[kIndex] = gkArray[kIndex] * gkArray[kIndex]
;increment index
kIndex += 1
od
;stop after third control cycle
if kCycle == 3 then
turnoff
endif
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Prints:
kCycle = 1
gkArray[0] = 1
gkArray[1] = 2
gkArray[2] = 3
gkArray[3] = 4
gkArray[4] = 5
kCycle = 2
gkArray[0] = 1
gkArray[1] = 4
gkArray[2] = 9
gkArray[3] = 16
gkArray[4] = 25
kCycle = 3
gkArray[0] = 1
gkArray[1] = 16
gkArray[2] = 81
gkArray[3] = 256
gkArray[4] = 625
Time Loops
Until now, we have just discussed loops which are executed ”as fast as possible”, either at i-time or at k-time. But in an audio programming language, time loops are of particular interest and importance. A time loop means repeating an action after a certain amount of time. This amount of time can be equal to, or different from, the previous time loop. The action can be, for instance: playing a tone, triggering an instrument, or calculating a new value for the movement of an envelope.
In Csound, the usual way of performing time loops is the timout facility. The use of timout is a bit intricate, so several examples are given, starting with very simple ones and moving on to more complex ones.
Another way of performing time loops is by using a measurement of time or k-cycles. This method
is also discussed and similar examples to those used for the timout opcode are given so that both
methods can be compared.
Timout Basics
The timout opcode refers to the fact that in the traditional way of working with Csound, each note
(an i score event) has its own time. This is the duration of the note, given in the score by the
duration parameter, abbreviated as p3. A timout statement says: ”I am now jumping out of this p3
duration and establishing my own time.” This time will be repeated as long as the duration of the
note allows it.
Let’s see an example. This is a sine tone with a moving frequency, starting at 400 Hz and ending
at 600 Hz. The duration of this movement is 3 seconds for the first note, and 5 seconds for the
second note:
EXAMPLE 03C14_Timout_pre.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
kFreq expseg 400, p3, 600
aTone poscil .2, kFreq, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Now we perform a time loop with timout which is 1 second long. So, for the first note, it will be
repeated three times, and five times for the second note:
EXAMPLE 03C15_Timout_basics.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
loop:
timout 0, 1, play
reinit loop
play:
kFreq expseg 400, 1, 600
aTone poscil .2, kFreq, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
In general, the form of this time loop is:
first_label:
timout istart, idur, second_label
reinit first_label
second_label:
... <some code> ...
The first_label is an arbitrary word (followed by a colon) to mark the beginning of the time
loop section. The istart argument for timout tells Csound, when the second_label section
is to be executed. Usually istart is zero, telling Csound: execute the second_label section
immediately, without any delay. The idur argument for timout defines for how many seconds the
second_label section is to be executed before the time loop begins again. Note that the reinit first_label statement is necessary to restart the loop after idur seconds with a resetting of all the values. (See the explanations about reinitialization in the chapter Initialization and Performance Pass.)
As usual when you work with the reinit opcode, you can use a rireturn statement to constrain the reinit-pass. In this way you can have both the time loop section and the non-time loop section in the body of an instrument:
EXAMPLE 03C16_Timeloop_and_not.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
loop:
timout 0, 1, play
reinit loop
play:
kFreq1 expseg 400, 1, 600
aTone1 oscil3 .2, kFreq1, giSine
rireturn ;end of the time loop
kFreq2 expseg 400, p3, 600
aTone2 poscil .2, kFreq2, giSine
outs aTone1+aTone2, aTone1+aTone2
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Timout Applications
In a time loop, it is often important to vary the duration of the loop. This can be done either by referring to the duration of the note (p3) ...
EXAMPLE 03C17_Timout_different_durations.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
loop:
timout 0, p3/5, play
reinit loop
play:
kFreq expseg 400, p3/5, 600
aTone poscil .2, kFreq, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
... or by calculating new values for the loop duration on each reinit pass, for instance by random
values:
EXAMPLE 03C18_Timout_random_durations.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
loop:
idur random .5, 3 ;new value between 0.5 and 3 seconds each time
timout 0, idur, play
reinit loop
play:
kFreq expseg 400, idur, 600
aTone poscil .2, kFreq, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The applications discussed so far have the disadvantage that all the signals inside the time loop must be finished or interrupted when the next loop begins, so no overlapping of events is possible. To achieve overlaps, the time loop can instead be used simply to trigger an event. This can be done with schedule, event_i or scoreline_i. In the following example, the time loop in instrument 1 triggers a new instance of instrument 2 with a duration of 1 to 5 seconds, every 0.5 to 2 seconds. So in most cases, the previous instance of instrument 2 will still be playing when the new instance is triggered. Random calculations are executed in instrument 2 so that each note will have a different pitch, creating a glissando effect:
EXAMPLE 03C19_Timout_trigger_events.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
loop:
idurloop random .5, 2 ;duration of each loop
timout 0, idurloop, play
reinit loop
play:
idurins random 1, 5 ;duration of the triggered instrument
event_i "i", 2, 0, idurins ;triggers instrument 2
endin
instr 2
ifreq1 random 600, 1000 ;starting frequency
idiff random 100, 300 ;difference to final frequency
ifreq2 = ifreq1 - idiff ;final frequency
kFreq expseg ifreq1, p3, ifreq2 ;glissando
iMaxdb random -12, 0 ;peak randomly between -12 and 0 dB
kAmp transeg ampdb(iMaxdb), p3, -10, 0 ;envelope
aTone poscil kAmp, kFreq, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The last application of a time loop with the timout opcode shown here is a randomly moving envelope. If we want to create an envelope in Csound which moves between a lower and an upper limit, and takes a new random value after a certain time span (for instance, once a second), the time loop with timout is one way to achieve it. A line movement must be performed in each time loop, from a given starting value to a newly evaluated final value. Then, in the next loop, the previous final value must be set as the new starting value, and so on. Here is a possible solution:
EXAMPLE 03C20_Timout_random_envelope.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1
instr 1
iupper = 0; upper and ...
ilower = -24; ... lower limit in dB
ival1 random ilower, iupper; starting value
loop:
idurloop random .5, 2; duration of each loop
timout 0, idurloop, play
reinit loop
play:
ival2 random ilower, iupper; final value
kdb linseg ival1, idurloop, ival2
ival1 = ival2; let ival2 be ival1 for next loop
rireturn ;end reinit section
aTone poscil ampdb(kdb), 400, giSine
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note that in this case the oscillator has been put after the time loop section (which is terminated by the rireturn statement). Otherwise the oscillator would start afresh with zero phase in each time loop, thus producing clicks.
Another way of performing a time loop at k-rate is the metro opcode. It outputs a 1 at a user-defined frequency, and 0 otherwise. So we can query its output and trigger an action whenever it is 1:
EXAMPLE 03C21_Timeloop_metro.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kTrig metro 2; two trigger impulses per second
if kTrig == 1 then; if trigger impulse:
event "i", 2, 0, .3; call instr 2
endif
endin

instr 2
aEnv transeg 1, p3, -10, 0; envelope
aSig poscil .2*aEnv, 400
outs aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03C22_Metro_trigger_events.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr 1
kfreq init 1; give a start value for the trigger frequency
kTrig metro kfreq
if kTrig == 1 then ;if trigger impulse:
kdur random 1, 5; random duration for instr 2
event "i", 2, 0, kdur; call instr 2
kfreq random .5, 2; set new value for trigger frequency
endif
endin
instr 2
ifreq1 random 600, 1000; starting frequency
idiff random 100, 300; difference to final frequency
ifreq2 = ifreq1 - idiff; final frequency
kFreq expseg ifreq1, p3, ifreq2; glissando
iMaxdb random -18, -6; peak randomly between -18 and -6 dB
kAmp transeg ampdb(iMaxdb), p3, -10, 0; envelope
aTone poscil kAmp, kFreq
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note the differences in working with the metro opcode compared to the timout feature:
- As metro works at k-time, you must use the k-variants of event or scoreline to call the subinstrument. With timout you must use the i-variants event_i or scoreline_i, because it uses reinitialization for performing the time loops.
- You must select the one k-cycle where the metro opcode sends a 1. This is done with an if-statement. The rest of the instrument is not affected. If you use timout, you usually must separate the reinitialized from the non-reinitialized section by a rireturn statement.
As Csound internally calculates the relation between the sample rate and the number of samples per control cycle as the control rate kr, we can also write 1/kr rather than ksmps/sr. This is a bit shorter and more intuitive.
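This relation can be checked with a small test instrument (a sketch):

```csound
instr 1
 iDiff = kr - sr/ksmps ;zero: kr equals sr/ksmps
 print iDiff
 iCycle = 1/kr ;duration of one control cycle in seconds
 print iCycle
endin
```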
The idea of using this internal time as the measurement for time loops is this:
1. We set a variable, say kTime, to the desired duration of the time loop.
2. In each control cycle, we subtract the duration of one control cycle from this variable.
3. Once zero is reached, we perform the desired event, and reset the kTime variable to the next desired time.
The next example does exactly the same as example 03C21 did with the help of the metro opcode, but now by using the internal clock.3
EXAMPLE 03C23_Timeloop_Internal_Clock.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr TimeLoop
//set desired time for time loop
kLoopTime = 1/2
//set kTime to zero at start
kTime init 0
//trigger event when zero is reached ...
if kTime <= 0 then
event "i", "Play", 0, .3
//... and reset time
kTime = kLoopTime
endif
//subtract time for each control cycle
kTime -= 1/kr
endin
instr Play
aEnv transeg 1, p3, -10, 0
aSig poscil .2*aEnv, 400
out aSig, aSig
endin
</CsInstruments>
<CsScore>
i "TimeLoop" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
So the trigger events example, which has been shown using timout (03C19) and metro (03C22), follows here again using the internal clock approach.
EXAMPLE 03C24_Internal_clock_trigger_events.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
3 To tell the truth, metro is more precise. But this can be neglected in live situations, for which this approach is mainly meant to be used.
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr TimeLoop
kTime init 0
if kTime <= 0 then
event "i", "Play", 0, random:k(1,5)
kTime random .3, 1.5
endif
kTime -= 1/kr
endin
instr Play
ifreq1 random 600, 1000; starting frequency
idiff random 100, 300; difference to final frequency
ifreq2 = ifreq1 - idiff; final frequency
kFreq expseg ifreq1, p3, ifreq2; glissando
iMaxdb random -18, -6; peak randomly between -18 and -6 dB
kAmp transeg ampdb(iMaxdb), p3, -10, 0; envelope
aTone poscil kAmp, kFreq
out aTone, aTone
endin
</CsInstruments>
<CsScore>
i "TimeLoop" 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The time loop can also be created by an instrument which triggers its own next instance, so that a chain of self-scheduled instances results:
EXAMPLE 03C25_self_triggering.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr Play
ifreq1 random 600, 1000; starting frequency
idiff random 100, 300; difference to final frequency
ifreq2 = ifreq1 - idiff; final frequency
kFreq expseg ifreq1, p3, ifreq2; glissando
iMaxdb random -18, -6; peak randomly between -18 and -6 dB
kAmp transeg ampdb(iMaxdb), p3, -10, 0; envelope
aTone poscil kAmp, kFreq
out aTone, aTone
;trigger the next instance of this instrument
schedule("Play", random:i(.3,1.5), random:i(1,5))
endin
instr Exit
exitnow()
endin
</CsInstruments>
<CsScore>
i "Play" 0 3
i "Exit" 20 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The problem here is: how to stop? The turnoff2 opcode does not help, because at the moment we turn off the running instance, it has already triggered the next instance.
In our example, this problem has been solved in a brutal way: by exiting Csound. Much better is to introduce a break condition. This is what is called the base case in recursion. We can, for instance, pass a counter as p4, say 20. Each instance then makes the new call with p4-1 (19, 18, 17, …). When zero is reached, no self-triggering is done any more.
EXAMPLE 03C26_recursion.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr Play
ifreq1 random 600, 1000; starting frequency
idiff random 100, 300; difference to final frequency
ifreq2 = ifreq1 - idiff; final frequency
kFreq expseg ifreq1, p3, ifreq2; glissando
iMaxdb random -18, -6; peak randomly between -18 and -6 dB
kAmp transeg ampdb(iMaxdb), p3, -10, 0; envelope
aTone poscil kAmp, kFreq
out aTone, aTone
if p4 > 0 then
schedule("Play",random:i(.3,1.5),random:i(1,5), p4-1)
endif
endin
</CsInstruments>
<CsScore>
i "Play" 0 3 20
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Recursion is particularly important for User Defined Opcodes. Recursive UDOs will be explained in chapter 03 G. They follow the same principles as shown here.
03 D. FUNCTION TABLES
Note: This chapter was written before arrays had been introduced into Csound. Now the usage of
arrays is in some situations preferable to using function tables. Have a look in chapter 03 E to see
how you can use arrays.
A function table is essentially the same as what other audio programming languages might call
a buffer, a table, a list or an array. It is a place where data can be stored in an ordered way. Each
function table has a size: how much data (in Csound, just numbers) it can store. Each value in the
table can be accessed by an index, counting from 0 to size-1. For instance, if you have a function
table with a size of 7, and the numbers [1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21] in it, this is the relation of
value and index:
VALUE INDEX
1.1 0
2.2 1
3.3 2
5.5 3
8.8 4
13.13 5
21.21 6
So, if you want to retrieve the value 13.13, you must point to the value stored under index 5.
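This indexed access can be sketched with Csound's table opcode (giVals here is a hypothetical table holding the seven values from above):

```csound
giVals ftgen 0, 0, -7, -2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21

instr 1
 ival table 5, giVals ;read the value stored under index 5
 print ival           ;reads 13.13
endin
```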
The use of function tables is manifold. A function table can contain pitch values to which you
may refer using the input of a MIDI keyboard. A function table can contain a model of a waveform
which is read periodically by an oscillator. You can record live audio input in a function table, and
then play it back. There are many more applications, all using the fast access (because function
tables are stored in RAM) and flexible use of function tables.
How to Generate a Function Table
Each creation of a function table in Csound is performed by one of the GEN Routines. Each GEN
Routine generates a function table in a particular way: GEN01 transfers audio samples from a
soundfile into a table, GEN02 stores values we define explicitly one by one, GEN10 calculates
a waveform using user-defined weightings of harmonically related sinusoids, GEN20 generates
window functions typically used for granular synthesis, and so on. There is a good overview in the
Csound Manual of all existing GEN Routines. Here we will explain their general use and provide
some simple examples using commonly used GEN routines.
This is the traditional way of creating a function table: by an ”f statement” or ”f score event” (in a manner similar to the use of “i score events” to call instrument instances). The input parameters after the f are as follows: the function table number, the creation time (usually 0), the table size, the number of the GEN Routine, and then the arguments for this GEN Routine.
The example below demonstrates how the values [1.1 2.2 3.3 5.5 8.8 13.13 21.21] can be stored in a function table using an f-statement in the score. Two versions are created: an unnormalised version (table number 1) and a normalised version (table number 2). The difference in their contents will be demonstrated.
2 At least this is still the safest method to declare a non-power-of-two size for the table, although for many GEN routines positive numbers also work.
EXAMPLE 03D01_Table_norm_notNorm.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
instr 1 ;prints the values of table 1 or 2
prints "%nFunction Table %d:%n", p4
indx init 0
while indx < 7 do
ival table indx, p4
prints "Index %d = %f%n", indx, ival
indx += 1
od
endin
</CsInstruments>
<CsScore>
f 1 0 -7 -2 1.1 2.2 3.3 5.5 8.8 13.13 21.21; not normalized
f 2 0 -7 2 1.1 2.2 3.3 5.5 8.8 13.13 21.21; normalized
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Prints:
Function Table 1:
Index 0 = 1.100000
Index 1 = 2.200000
Index 2 = 3.300000
Index 3 = 5.500000
Index 4 = 8.800000
Index 5 = 13.130000
Index 6 = 21.210000
Function Table 2:
Index 0 = 0.051862
Index 1 = 0.103725
Index 2 = 0.155587
Index 3 = 0.259312
Index 4 = 0.414899
Index 5 = 0.619048
Index 6 = 1.000000
Instrument 1 simply reads and prints (to the terminal) the values of the table. Notice the difference
in values read, whether the table is normalized (positive GEN number) or not normalized (negative
GEN number).
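The normalized values are simply the raw values divided by the largest entry (21.21). This can be verified with a little arithmetic (a sketch):

```csound
instr 1
 print 1.1/21.21   ;the normalized value at index 0
 print 13.13/21.21 ;the normalized value at index 5
endin
```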
Using the ftgen opcode is a more modern way of creating a function table, which is generally
preferable to the old way of writing an f-statement in the score.3 The syntax is explained below:
gir ftgen ifn, itime, isize, igen, iarg1 [, iarg2 [, ...]]
- gir: a variable name. Each function table number is stored in an i-variable. Usually you want to have access to it from every instrument, so a gi-variable (global initialization variable) is used.
- ifn: a number for the function table. If 0 is given here, Csound will generate the number, which is mostly preferable.
3 ftgen is preferred mainly because you can refer to the function table by a variable name and need not deal with constant table numbers. This will enhance the portability of orchestras and better facilitate the combining of multiple orchestras. It can also enhance the readability of an orchestra if a function table is located in the code nearer the instrument that uses it. And, last but not least, variables can be put as arguments into ftgen: imagine for instance a size for recording tables which you generate or pass as user input.
The other parameters (size, GEN number, individual arguments) are the same as in the f-statement in the score. As this GEN call is now part of the orchestra, each argument is separated from the next by a comma (not by a space or tab as in the score).
So this is the same example as above, but now with the function tables being generated in the
orchestra header:
EXAMPLE 03D02_Table_ftgen.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
giFt1 ftgen 1, 0, -10, -2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21
giFt2 ftgen 2, 0, -10, 2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21

instr 1 ;prints the values of table 1 or 2
prints "%nFunction Table %d:%n", p4
indx init 0
while indx < 7 do
ival table indx, p4
prints "Index %d = %f%n", indx, ival
indx += 1
od
endin
</CsInstruments>
<CsScore>
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
A very frequent application is to load a soundfile into a function table using GEN01. The full ftgen call for this case is:
varname ftgen ifn, itime, isize, igen, Sfilnam, iskip, iformat, ichn
- varname, ifn, itime: These arguments have the same meaning as explained above in reference to GEN02. Note that in the example below the function table number (ifn) has been defined using a zero. This means that Csound will automatically assign a unique function table number. This number will also be held by the variable giSample which we will normally use to reference the function table anyway, so its actual value will not be important to us. If you are interested you can print the value of giSample out. If no other tables are defined, it will be 101 and subsequent tables, also using automatically assigned table numbers, will follow accordingly: 102, 103 etc.
- isize: Usually you won’t know the length of your soundfile in samples, and want to have a table length which includes exactly all the samples. This is done by setting isize to 0.
- igen: As explained in the previous subchapter, this is always the place for indicating the number of the GEN Routine which must be used. As always, a positive number means normalizing, which is often convenient for audio samples.
- Sfilnam: The name of the soundfile in double quotes. Similar to other audio programming languages, Csound recognizes just the name if your .csd and the soundfile are in the same folder. Otherwise, give the full path. (You can also include the folder via the SSDIR variable, or add the folder via the --env:SSDIR+=/path/to/sounds option.)
- iskip: The time in seconds you want to skip at the beginning of the soundfile. 0 means reading from the beginning of the file.
- iformat: The format of the amplitude samples in the soundfile, e.g. 16 bit, 24 bit etc. Usually providing 0 here is sufficient, in which case Csound will read the sample format from the soundfile header.
- ichn: 1 = read the first channel of the soundfile into the table, 2 = read the second channel, etc. 0 means that all channels are read. Note that only certain opcodes are able to properly make use of multichannel audio stored in function tables.
The following example loads a short sample into RAM via a function table and then plays it. Read-
ing the function table here is done using the poscil3 opcode, as one of many choices in Csound.
EXAMPLE 03D03_Sample_to_table.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gS_file = "fox.wav"
giSample ftgen 0, 0, 0, 1, gS_file, 0, 0, 1
instr PlayOnce
p3 filelen gS_file ;play whole length of the sound file
aSamp poscil3 .5, 1/p3, giSample
out aSamp, aSamp
endin
</CsInstruments>
<CsScore>
i "PlayOnce" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Many GEN Routines can be used to create a waveform which an oscillator then reads to produce a sound. The simplest one is GEN10. It
produces a waveform by adding sine waves which have the ”harmonic” frequency relationship 1 :
2 : 3 : 4 … After the usual arguments for function table number, start, size and gen routine number,
which are the first four arguments in ftgen for all GEN Routines, with GEN10 you must specify the
relative strengths of the harmonics. So, if you just provide one argument, you will end up with a
sine wave (1st harmonic). The next argument is the strength of the 2nd harmonic, then the 3rd,
and so on. In this way, you can build approximations of the standard harmonic waveforms by the
addition of sinusoids. This is done in the next example by instruments 1-5. Instrument 6 uses the
sine wavetable twice: for generating both the sound and the envelope.
EXAMPLE 03D04_Standard_waveforms_with_GEN10.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 0, 0, 2^10, 10, 1
giSaw ftgen 0, 0, 2^10, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9
giSquare ftgen 0, 0, 2^10, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9
giTri ftgen 0, 0, 2^10, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81
giImp ftgen 0, 0, 2^10, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1
instr Sine
aSine poscil .2, 400, giSine
aEnv linen aSine, .01, p3, .05
outs aEnv, aEnv
endin
instr Saw
aSaw poscil .2, 400, giSaw
aEnv linen aSaw, .01, p3, .05
outs aEnv, aEnv
endin
instr Square
aSqu poscil .2, 400, giSquare
aEnv linen aSqu, .01, p3, .05
outs aEnv, aEnv
endin
instr Triangle
aTri poscil .2, 400, giTri
aEnv linen aTri, .01, p3, .05
outs aEnv, aEnv
endin
instr Impulse
aImp poscil .2, 400, giImp
aEnv linen aImp, .01, p3, .05
outs aEnv, aEnv
endin
instr Sine_with_env
aEnv poscil .2, (1/p3)/2, giSine
aSine poscil aEnv, 400, giSine
outs aSine, aSine
endin
</CsInstruments>
<CsScore>
i "Sine" 0 3
i "Saw" 4 3
i "Square" 8 3
i "Triangle" 12 3
i "Impulse" 16 3
i "Sine_with_env" 20 3
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz
How to Write Values to a Function Table
To be precise, it is not actually correct to talk about an “empty table”. If Csound creates an “empty”
table, in fact it writes zeros to the indices which are not specified. Perhaps the easiest method of
creating an “empty” table for 100 values is shown below:
giEmpty ftgen 0, 0, -100, 2, 0
The simplest opcode for writing values to existing function tables during a note’s performance is tablew; its i-time equivalent is tableiw. As usual, you must distinguish whether your signal (variable) is i-rate, k-rate or a-rate. The usage is simple and differs just in the class of values you want to write to the table (i-, k- or a-variables):
tableiw isig, indx, ifn [, ixmode] [, ixoff] [, iwgmode]
tablew ksig, kndx, ifn [, ixmode] [, ixoff] [, iwgmode]
tablew asig, andx, ifn [, ixmode] [, ixoff] [, iwgmode]
- isig, ksig, asig is the value (variable) you want to write into a specified location of the table;
- indx, kndx, andx is the location (index) where you will write the value;
- ifn is the function table you want to write to;
- ixmode gives the choice to write by raw indices (counting from 0 to size-1), or by a normalized writing mode in which the start and end of each table are always referred to as 0 and 1 (regardless of the length of the table). The default is ixmode=0 which means the raw index mode. A value not equal to zero for ixmode changes to the normalized index mode.
- ixoff (default=0) gives an index offset. So, if indx=0 and ixoff=5, you will write at index 5.
- iwgmode tells what you want to do if your index is larger than the size of the table. If iwgmode=0 (default), any index larger than possible is written at the last possible index. If iwgmode=1, the indices are wrapped around. For instance, if your table size is 8, and your index is 10, in the wraparound mode the value will be written at index 2.
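As a minimal sketch of the normalized writing mode (giTab is a hypothetical table of size 8):

```csound
giTab ftgen 0, 0, -8, 2, 0

instr 1
 tableiw 10, 0.5, giTab, 1 ;normalized index 0.5 points to raw index 4
 ival table 4, giTab       ;read back with a raw index
 print ival
endin
```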
i-Rate Example
The following example calculates the first 12 values of a Fibonacci series and writes them to a
table. An empty table has first been created in the header (filled with zeros), then instrument 1
calculates the values in an i-time loop and writes them to the table using tableiw. Instrument 2
simply prints all the values in a list to the terminal.
EXAMPLE 03D05_Write_Fibo_to_table.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
giFt ftgen 0, 0, -12, -2, 0

instr 1; calculates the first 12 fibonacci values and writes them to giFt
inum1 = 0
inum2 = 1
indx = 0
while indx < 12 do
tableiw inum1, indx, giFt; write the current value
inext = inum1 + inum2; calculate the next value
inum1 = inum2
inum2 = inext
indx += 1
od
endin

instr 2; prints the values of the table as a list
prints "%nThe first 12 fibonacci numbers are:%n"
indx = 0
while indx < 12 do
ival table indx, giFt
prints "%d ", ival
indx += 1
od
prints "%n"
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
k-Rate Example
The next example writes a k-signal continuously into a table. This can be used to record any kind
of user input, for instance by MIDI or widgets. It can also be used to record random movements
of k-signals, like here:
EXAMPLE 03D06_Record_ksig_to_table.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giWave ftgen 0, 0, 2^10, 10, 1, .5, .3, .1
giFt ftgen 0, 0, -5*kr, 2, 0; size for 5 seconds of control data

instr 1; rec/play of a random frequency movement for 5 seconds
kFreq randomi 400, 1000, 1; random frequency
aSnd poscil .2, kFreq, giWave; play it
outs aSnd, aSnd
;;record the k-signal
prints "RECORDING!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx linseg 0, 5, ftlen(giFt)
;write the k-signal
tablew kFreq, kindx, giFt
endin

instr 2; read the values of the table and play it again
prints "PLAYING!%n"
kindx linseg 0, 5, ftlen(giFt)
kFreq table kindx, giFt
aSnd poscil .2, kFreq, giWave
outs aSnd, aSnd
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As you see, this typical case of writing k-values to a table requires a changing value for the index,
otherwise tablew will continually overwrite at the same table location. This changing value can
be created using the line or linseg opcodes - as was done here - or by using a phasor. A phasor
moves continuously from 0 to 1 at a user-defined frequency. For example, if you want a phasor to
move from 0 to 1 in 5 seconds, you must set the frequency to 1/5. Upon reaching 1, the phasor will
wrap around to zero and begin again. Note that phasor can also be given a negative frequency, in which case it moves in reverse from 1 to zero and then wraps around to 1. By setting the ixmode argument of tablew to 1, you can use the phasor output directly as writing pointer. Below is an alternative version of instrument 1 from the previous example, this time using phasor to generate the index values:
instr 1; rec/play of a random frequency movement for 5 seconds
kFreq randomi 400, 1000, 1; random frequency
aSnd oscil3 .2, kFreq, giWave; play it
outs aSnd, aSnd
;;record the k-signal with a phasor as index
prints "RECORDING!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx phasor 1/5
;write the k-signal
tablew kFreq, kindx, giFt, 1
endin
a-Rate Example
Recording an audio signal is quite similar to recording a control signal. You just need an a-signal
to provide input values and also an index that changes at a-rate. The next example first records a
randomly generated audio signal and then plays it back. It then records the live audio input for 5
seconds and subsequently plays it back.
EXAMPLE 03D07_Record_audio_to_table.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 5 ; record 5 seconds of generated audio to a table
i 2 6 5 ; play back the recording of generated audio
i 3 12 7 ; record 5 seconds of live audio to a table
i 4 20 5 ; play back the recording of live audio
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
How to Retrieve Values from a Function Table
The classical way of reading values from a function table is the table opcode, which reads a table at a given index at i-, k- or a-rate.
As table reading often requires interpolation between the table values - for instance if you read k- or a-values faster or slower than they have been written in the table - Csound offers two descendants of table for interpolation: tablei interpolates linearly, whilst table3 performs cubic interpolation (generally preferable, but computationally slightly more expensive). When CPU cycles are no object, tablexkt can be used for ultimate interpolating quality.4
Examples of the use of the table opcodes can be found in the earlier examples in the How to Write Values to a Function Table section.
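The difference between truncating and interpolating reads can be sketched with the seven-value table from the beginning of this chapter (giVals is a hypothetical table name):

```csound
giVals ftgen 0, 0, -7, -2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21

instr 1
 iRaw table 1.5, giVals  ;no interpolation: truncates to index 1 (2.2)
 iInt tablei 1.5, giVals ;linear interpolation between 2.2 and 3.3 (2.75)
 print iRaw, iInt
endin
```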
Oscillators
It is normal to read tables that contain a single cycle of an audio waveform using an oscillator, but you can actually read any table with an oscillator, either at a- or at k-rate. The advantage is that you needn’t create an index signal. You can simply specify the frequency of the oscillator (the opcode creates the required index internally, based on the requested frequency).
You should bear in mind that some of the oscillators in Csound work only with power-of-two table sizes. The poscil/poscil3 opcodes do not have this restriction and offer high precision, because they work with floating point indices, so in general it is recommended to use them. Below is an example that demonstrates both reading a k-rate and an a-rate signal from a buffer with poscil:
EXAMPLE 03D08_RecPlay_ak_signals.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 2; read the values of the table and play it with poscil
prints "PLAYING CONTROL SIGNAL!%n"
kFreq poscil 1, 1/5, giControl
aSnd poscil .2, kFreq, giWave; play it
outs aSnd, aSnd
endin
instr 4; read the values from the table and play it with poscil
prints "PLAYING LIVE INPUT!%n"
aSnd poscil .5, 1/5, giAudio
outs aSnd, aSnd
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
i 3 12 7
i 4 20 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Saving the Contents of a Function Table to a File
With the following example, you should end up with two textfiles in the same folder as your .csd:
“i-tim_save.txt” saves function table 1 (a sine wave) at i-time; “k-time_save.txt” saves function table
2 (a linear increment produced during the performance) at k-time.
EXAMPLE 03D09_ftsave.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
i 3 1 .1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The counterparts to ftsave/ftsavek are the ftload/ftloadk opcodes. You can use them to load the
saved files into function tables.
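A minimal sketch of loading one of the files saved above back into a function table might look like this (assuming the text file "i-tim_save.txt" from the previous example lies next to the .csd; the second argument 1 selects text format, 0 would mean binary):

```csound
instr LoadSaved
 ;the target table must exist and match the saved table's size
 iSine ftgen 1, 0, 1024, 10, 1
 ftload "i-tim_save.txt", 1, 1
endin
```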
EXAMPLE 03D10_Table_to_soundfile.csd
<CsoundSynthesizer>
<CsOptions>
-i adc
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; -- size for 5 seconds of recording audio data
giAudio ftgen 0, 0, -5*sr, 2, 0
endif
endin
</CsInstruments>
<CsScore>
i 1 0 7
i 2 7 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Reading a Text File in a Function Table
The array is scaled from 0 to 10, and the 100 data points are saved in the file drawing_data.txt.
When we open the file, we see each data point as a number on a new line: 9.67289 9.34579 9.25233
8.50468 7.85049 7.52339 … We can now import this text file and write its data points to a
function table via GEN23. The syntax, as written in the manual:
f # time size -23 "filename.txt"
We will usually set the size parameter to 0, which means that the table gets the same size as
the number of values in the file (so 100 here).
-23 tells Csound to use GEN Routine 23 without normalizing the data. (23 instead of -23 would
scale our data between 0 and 1 rather than 0 and 10.)
“filename.txt” is the text file with the numerical data to read. Not only newlines but also spaces,
tabs or commas can be used as separators between the numbers. (So we could use a numerical
spreadsheet if we export it as a .csv text file.)
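As a quick sketch, such a file could be loaded and inspected like this (assuming drawing_data.txt is located next to the .csd):

```csound
;size 0: the table gets as many points as the file contains
giDrawing ftgen 0, 0, 0, -23, "drawing_data.txt"

instr Inspect
 prints "table length: %d\n", ftlen(giDrawing)
 prints "first value: %f\n", table:i(0, giDrawing)
endin
```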
In the following example we apply simple granular synthesis and use the function table with
the drawing data in two ways:
- We interpret the drawing as the number of grains per second. So at the beginning we will have
a high grain density, and at the end a low grain density.
- We set the grain duration as the reciprocal of the grain density. In the simple form, this would mean
that for a density of 10 grains per second we have a grain duration of 1/10 seconds. To avoid
very long grains at the end, we modify it to half of the reciprocal (so that a grain density of
10 Hz results in grains of 1/20 seconds).
EXAMPLE 03D11_textfile_to_table.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr ReadTable
//index as pointer from start to end of the table over duration (p3)
kIndx = linseg:k(0,p3,100)
//values are read at k-rate with interpolation as a global variable
gkDrawVals = tablei:k(kIndx,giDrawing)
endin
instr PlayWithData
//calculate the skiptime for the sound file compared to the duration
kFoxSkip = (timeinsts:k() / p3) * filelen(gS_file)
endif
endin
instr PlayGrain
endin
</CsInstruments>
<CsScore>
i "ReadTable" 0 10
i "PlayWithData" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
There is no need to read the table values as a global variable in a separate instrument. It would be
perfectly fine to have it in the PlayWithData instrument, too.
Other GEN Routine Highlights
GEN17, GEN41 and GEN42 are used to generate histogram-type functions which may prove useful
in algorithmic composition and work with probabilities.
GEN09 and GEN19 are developments of GEN10 and are useful in additive synthesis.
GEN11 is a GEN routine version of the gbuzz opcode and as it is a fixed waveform (unlike gbuzz)
it can be a useful and efficient sound source in subtractive synthesis.
GEN08
f # time size 8 a n1 b n2 c n3 d ...
GEN08 creates a curved function that forms the smoothest possible line between a sequence of
user defined break-points. This GEN routine can be useful for the creation of window functions for
use as envelope shapes or in granular synthesis. In forming a smooth curve, GEN08 may create
apexes that extend well above or below any of the defined values. For this reason GEN08 is mostly
used with post-normalisation turned on, i.e. a minus sign is not added to the GEN number when
the function table is defined. Here are some examples of GEN08 tables:
f 1 0 1024 8 0 1 1 1023 0
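As a small sketch of the envelope use case, a GEN08 bell shape can be read exactly once over a grain's duration with poscil (the breakpoints here are a hypothetical bell curve, not taken from the figures):

```csound
instr Grain
 ;smooth rise and fall through three breakpoints
 iEnv ftgen 0, 0, 1024, 8, 0, 512, 1, 512, 0
 ;read the envelope once over the note duration p3
 aEnv poscil 1, 1/p3, iEnv
 aSig poscil .2*aEnv, 440
 out aSig, aSig
endin
schedule("Grain", 0, .05)
```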
GEN16
f # time size 16 val1 dur1 type1 val2 [dur2 type2 val3 ... typeX valN]
GEN16 allows the creation of envelope functions using a sequence of user defined breakpoints.
Additionally for each segment of the envelope we can define a curvature. The nature of the curva-
ture – concave or convex – will also depend upon the direction of the segment: rising or falling.
For example, positive curvature values will result in concave curves in rising segments and con-
vex curves in falling segments. The opposite applies if the curvature value is negative. Below are
some examples of GEN16 function tables:
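As a sketch, three envelopes rising from 0 to 1 and falling back, with different curvatures (the values here are illustrative):

```csound
f 1 0 1024 16 0 512 0 1 512 0 0   ; linear rise and fall
f 2 0 1024 16 0 512 4 1 512 4 0   ; concave rise, convex fall
f 3 0 1024 16 0 512 -4 1 512 -4 0 ; convex rise, concave fall
```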
GEN19
f # time size 19 pna stra phsa dcoa pnb strb phsb dcob ...
GEN19 follows on from GEN10 and GEN09 in terms of complexity and control options. It shares
the basic concept of generating a harmonic waveform from stacked sinusoids, but in addition to
control over the strength of each partial (GEN10) and the partial number and phase (GEN09) it
offers control over the DC offset of each partial. In addition to the creation
of waveforms for use by audio oscillators other applications might be the creation of functions
for LFOs and window functions for envelopes in granular synthesis. Below are some examples of
GEN19:
f 1 0 1024 19 1 1 0 0 20 0.1 0 0
GEN30
f # time size 30 src minh maxh [ref_sr] [interp]
GEN30 uses FFT to create a band-limited version of a source waveform that is itself not band-limited.
We can create a sawtooth waveform by drawing one explicitly using GEN07, but used as an audio
waveform this will create problems, as it contains frequencies beyond the Nyquist frequency and
will therefore cause aliasing, particularly when higher notes are played. GEN30 can analyse this
waveform and create a new one with a user-defined lowest and highest partial. If we know what
note we are going to play, we can predict what the highest partial below the Nyquist frequency will
be. For a given frequency, freq, the maximum number of harmonics that can be represented without
aliasing can be derived as sr / (2 * freq).
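This calculation can be done directly in the orchestra, so that a band-limited table is built at note initialization. A sketch (the instrument and table names are made up for illustration; the GEN30 arguments follow the syntax above):

```csound
;non-band-limited sawtooth as source
giSaw ftgen 0, 0, 1024, 7, 1, 1024, -1

instr BandLimSaw
 iFreq = p4
 ;highest partial that stays below the Nyquist frequency
 iMaxH = int(sr / (2 * iFreq))
 iBandLim ftgen 0, 0, 1024, 30, giSaw, 1, iMaxH
 aOut poscil .2, iFreq, iBandLim
 out aOut, aOut
endin
schedule("BandLimSaw", 0, 2, 440)
```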
Here are some examples of GEN30 function tables (the first table is actually a GEN07 generated
sawtooth, the second two are GEN30 band-limited versions of the first):
f 1 0 1024 7 1 1024 -1
f 2 0 1024 30 1 1 20
f 3 0 1024 30 1 2 20
03 E. ARRAYS
Arrays can be used in Csound since version 6. This chapter first describes the naming conventions
and the different possibilities to create an array. After looking more closely at the different types
of arrays, the operations on arrays will be explained. Finally, examples for the usage of arrays in
user-defined opcodes (UDOs) are given.
Naming Conventions
An array is stored in a variable. As usual in Csound, the first character of the variable name declares
the array as i (numbers, init-time), k (numbers, perf-time), a (audio vectors, perf-time) or S (strings,
init- or perf-time). (More on this below, and in chapter 03 A.)
At first occurrence, the array variable must be followed by brackets. The brackets determine the
dimensions of the array. So
kArr[] init 10
creates a one-dimensional k-array with ten elements.
After the first occurrence of the array, referring to it as a whole is done without any brackets.
Brackets are only used if an element is indexed:
kArr[] init 10 ;with brackets: first occurrence
kLen = lenarray(kArr) ;without brackets: *kArr* not *kArr[]*
kFirstEl = kArr[0] ;with brackets because of indexing
The same syntax is used for a simple copy via the = operator:
kArr1[] init 10 ;creates kArr1
kArr2[] = kArr1 ;creates kArr2[] as copy of kArr1
Creating an Array
An array can be created by different methods:
init
The most general method, which works for arrays of any number of dimensions, is to use the init
opcode. Each argument for init denotes the size of one dimension.
kArr[] init 10 ;creates a one-dimensional array with length 10
kArr[][] init 8, 10 ;creates a two-dimensional array (8 lines, 10 columns)
fillarray
With the fillarray opcode, distinct values are assigned to an array. If the array has not been created
before, it will be created as a result, with as many elements as are passed to fillarray. This …
iArr[] fillarray 1, 2, 3, 4
… creates an i-array of size=4. Note the difference in using the brackets in case the array has been
created before, and is filled afterwards:
iArr[] init 4
iArr fillarray 1, 2, 3, 4
In conjunction with a previously defined two-dimensional array, fillarray can set the elements, for
instance:
iArr[][] init 2, 3
iArr fillarray 1, 2, 3, -1, -2, -3
This results in a 2D array (matrix) with the elements 1 2 3 as first row, and -1 -2 -3 as second row.1
1 Another method to fill a matrix is to use the setrow opcode. This will be covered later in this chapter.
genarray
This opcode creates an array which is filled by a series of numbers from a start value to an (in-
cluded) end value. Here are some examples:
iArr[] genarray 1, 5 ; creates i-array with [1, 2, 3, 4, 5]
kArr[] genarray_i 1, 5 ; creates k-array at init-time with [1, 2, 3, 4, 5]
iArr[] genarray -1, 1, 0.5 ; i-array with [-1, -0.5, 0, 0.5, 1]
iArr[] genarray 1, -1, -0.5 ; [1, 0.5, 0, -0.5, -1]
iArr[] genarray -1, 1, 0.6 ; [-1, -0.4, 0.2, 0.8]
Copy with =
The = operator copies any existing array to a new variable. The example shows how a global array
is copied into a local one depending on a score p-field: If p4 is set to 1, iArr[] is set to the content
of gi_Arr_1; if p4 is 2, it gets the content of gi_Arr_2. The content of iArr[] is then sent to instr Play
in a while loop.
EXAMPLE 03E01_CopyArray.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
gi_Arr_1[] fillarray 1, 2, 3, 4, 5
gi_Arr_2[] fillarray 5, 4, 3, 2, 1
instr Select
if p4==1 then
iArr[] = gi_Arr_1
else
iArr[] = gi_Arr_2
endif
index = 0
while index < lenarray(iArr) do
schedule("Play",index/2,1,iArr[index])
index += 1
od
endin
instr Play
aImp mpulse 1, p3
iFreq = mtof:i(60 + (p4-1)*2)
aTone mode aImp,iFreq,100
out aTone, aTone
endin
</CsInstruments>
<CsScore>
i "Select" 0 4 1
i "Select" + . 2
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Other opcodes which return arrays as output are vbap, bformdec1, loscilx for audio arrays, and
directory for string arrays.
Types of Arrays
i- and k-Rate
Most arrays which are typed by the user to hold data will be either i-rate or k-rate. An i-array can
only be modified at init-time, and any operation on it is only performed once, at init-time. A k-array
can be modified during the performance, and any (k-) operation on it will be performed in every
k-cycle (!).2 Here is a simple example showing the difference:
EXAMPLE 03E02_i_k_arrays.csd
<CsoundSynthesizer>
<CsOptions>
-nm128 ;no sound and reduced messages
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4410 ;10 k-cycles per second
instr 1
iArr[] fillarray 1, 2, 3
iArr[0] = iArr[0] + 10
prints " iArr[0] = %d\n\n", iArr[0]
endin
instr 2
kArr[] fillarray 1, 2, 3
kArr[0] = kArr[0] + 10
printks " kArr[0] = %d\n", 0, kArr[0]
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 2 1 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Prints:
kArr[0] = 11
kArr[0] = 21
kArr[0] = 31
kArr[0] = 41
kArr[0] = 51
kArr[0] = 61
kArr[0] = 71
kArr[0] = 81
kArr[0] = 91
kArr[0] = 101
Although both instruments run for one second, the operation to increment the first array value by
ten is executed only once in the i-rate version of the array. But in the k-rate version, the incremen-
tation is repeated in each k-cycle - in this case every 1/10 second, but usually something around
every 1/1000 second.
Audio Arrays
An audio array is a collection of audio signals. The size (length) of the audio array denotes the
number of audio signals which are held in it. In the next example, the audio array is created for
two audio signals:
aArr[] init 2
The first audio signal in the array aArr[0] carries the output of a sine oscillator with frequency 400
Hz whereas aArr[1] gets 500 Hz:
aArr[0] poscil .2, 400
aArr[1] poscil .2, 500
A percussive envelope aEnv is generated with the transeg opcode. The last line
out aArr*aEnv
multiplies the envelope with each element of the array, and the out opcode outputs the result to
both channels of the audio output device.
EXAMPLE 03E03_Audio_array.csd
<CsoundSynthesizer>
<CsOptions>
-odac -d
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr AudioArray
aArr[] init 2
aArr[0] poscil .2, 400
aArr[1] poscil .2, 500
aEnv transeg 1, p3, -3, 0
out aArr*aEnv
endin
</CsInstruments>
<CsScore>
i "AudioArray" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As mentioned above, some opcodes create audio arrays implicitly, according to the number of
input audio signals:
arr[] diskin "7chnls.aiff", 1
This code will create an audio array of size 7 according to the seven channel input file.
Strings
Arrays of strings can be very useful in many situations, for instance while working with file paths. (You cannot currently have a mixture of numbers and strings in an array, but you can convert a string to a number with the strtod opcode.)
The array can be filled by one of the ways described above, for instance:
S_array[] fillarray "one", "two", "three"
In this case, S_array is of length 3. The elements can be accessed by indexing as usual, for instance
puts S_array[1], 1
The directory opcode looks for all files in a directory and returns an array containing the file names:
EXAMPLE 03E04_Directory.csd
<CsoundSynthesizer>
<CsOptions>
-odac -d
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Files
S_array[] directory "."
iNumFiles lenarray S_array
prints "Number of files in %s = %d\n", pwd(), iNumFiles
printarray S_array
endin
</CsInstruments>
<CsScore>
i "Files" 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Local or Global
Like any other variable in Csound, an array usually has a local scope. This means that it is only
valid in the instrument in which it has been defined. If an array is supposed to be valid across
instruments, the variable name must be prefixed with the character g (as is done with other types
of global variables in Csound). The next example demonstrates local and global arrays at both i-
and k-rate.
EXAMPLE 03E05_Local_vs_global_arrays.csd
<CsoundSynthesizer>
<CsOptions>
-nm128 ;no sound and reduced messages
</CsOptions>
<CsInstruments>
ksmps = 32
instr i_local
iArr[] array 1, 2, 3
prints " iArr[0] = %d iArr[1] = %d iArr[2] = %d\n",
iArr[0], iArr[1], iArr[2]
endin
instr i_local_diff ;same name, different content
iArr[] array 4, 5, 6
prints " iArr[0] = %d iArr[1] = %d iArr[2] = %d\n",
iArr[0], iArr[1], iArr[2]
endin
instr i_global
giArr[] array 11, 12, 13
endin
instr i_global_read ;reads giArr which has been defined in instr i_global
prints " giArr[0] = %d giArr[1] = %d giArr[2] = %d\n",
giArr[0], giArr[1], giArr[2]
endin
instr k_local
kArr[] array -1, -2, -3
printks " kArr[0] = %d kArr[1] = %d kArr[2] = %d\n",
0, kArr[0], kArr[1], kArr[2]
turnoff
endin
instr k_local_diff
kArr[] array -4, -5, -6
printks " kArr[0] = %d kArr[1] = %d kArr[2] = %d\n",
0, kArr[0], kArr[1], kArr[2]
turnoff
endin
instr k_global
gkArr[] array -11, -12, -13
turnoff
endin
instr k_global_read
printks " gkArr[0] = %d gkArr[1] = %d gkArr[2] = %d\n",
0, gkArr[0], gkArr[1], gkArr[2]
turnoff
endin
</CsInstruments>
<CsScore>
i "i_local" 0 0
i "i_local_diff" 0 0
i "i_global" 0 0
i "i_global_read" 0 0
i "k_local" 0 1
i "k_local_diff" 0 1
i "k_global" 0 1
i "k_global_read" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
For audio arrays, we must distinguish between the audio vector itself which is updated sample
by sample, and the array as container which can be updated at k-time. (Imagine an audio array
whose index switches each control cycle between 0 and 1, thus switching each k-time between
the audio vectors of both signals.)
But what to do if array type and index type do not coincide? In general, the index type will then
determine whether the array is read or written only once (at init-time) or at each k-cycle. This is
valid in particular for S-arrays (containing strings). Note, however, that reading a k-array element
at init-time with an expression like i(kArray[indx]) will return an error. For this purpose, the i()
expression gets a second argument which signifies the index:
kArray[] fillarray 1, 2, 3
iFirst = i(kArray, 0)
print iFirst
Operations on Arrays
Analyse
lenarray — Array Length
For reporting the length of multidimensional arrays, lenarray has an additional argument denoting
the dimension. The default is 1 for the first dimension.
kArr[][] init 9, 5
iLen1 lenarray kArr ; -> 9
iLen2 lenarray kArr, 2 ; -> 5
kArrr[][][] init 7, 9, 5
iLen1 lenarray kArrr, 1 ; -> 7
iLen2 lenarray kArrr, 2 ; -> 9
iLen3 lenarray kArrr, 3 ; -> 5
By using functional syntax, lenarray() will report the array length at init-time. If the array length is
being changed during performance, lenarray:k() must be used to report this.
The opcodes minarray and maxarray return the smallest or largest element of a numerical array:
iArr[] fillarray 4, -2, 3, 10, 0
print minarray:i(iArr) ; -> -2
print maxarray:i(iArr) ; -> 10
The cmp opcode offers quite extended possibilities to compare an array with numbers or with
another array. The following example first investigates whether the elements of the array [1,2,3,4,5]
are larger than or equal to 3; it then tests whether the elements are larger than 1 and smaller than
or equal to 4; and finally it performs an element-by-element comparison with the array [3,5,1,4,2],
asking for larger elements in the original array.
EXAMPLE 03E06_cmp.csd
<CsoundSynthesizer>
<CsOptions>
-m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giArray[] fillarray 1, 2, 3, 4, 5
giCmpArray[] fillarray 3, 5, 1, 4, 2
instr Compare
printarray giArray, "%d", "Array:"
printarray giCmpArray, "%d", "CmpArray:"
iResult[] cmp giArray, ">=", 3
printarray iResult, "%d", "Array >= 3?"
iResult[] cmp 1, "<", giArray, "<=", 4
printarray iResult, "%d", "1 < Array <= 4?"
iResult[] cmp giArray, ">", giCmpArray
printarray iResult, "%d", "Array > CmpArray?"
endin
</CsInstruments>
<CsScore>
i "Compare" 0 1
</CsScore>
</CsoundSynthesizer>
;example by eduardo moguillansky and joachim heintz
Content Modifications
scalearray — Scale Values
The scalearray opcode destructively changes the content of an array according to a new minimum
and maximum:
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
scalearray iArr, 1, 3
printarray iArr ; -> 1.2222 1.4444 2.1111 1.6667 1.7778 1.0000 3.0000
Optionally, a range of the array can be selected for the operation by two additional arguments
giving the first and last index, for example from index 0 to index 4:
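A sketch of such a call, assuming the optional fourth and fifth arguments of scalearray denote the first and last index of the range to be scaled:

```csound
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
scalearray iArr, 1, 3, 0, 4 ;scale only the elements at index 0..4
printarray iArr ;the last two elements stay untouched
```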
The opcodes sorta and sortd return an array in which the elements of the input array are sorted in
ascending or descending order. The input array is left untouched.
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
iAsc[] sorta iArr
iDesc[] sortd iArr
printarray iAsc, "%d", "Sorted ascending:"
printarray iDesc, "%d", "Sorted descending:"
printarray iArr, "%d", "Original array:"
Prints:
Sorted ascending:
-1 1 3 5 6 9 17
Sorted descending:
17 9 6 5 3 1 -1
Original array:
1 3 9 5 6 -1 17
The limit opcode sets a lower and an upper limit; any value outside these boundaries is restricted to them.
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
iLimit[] limit iArr, 0, 7
printarray(iLimit, "%d") ; -> 1 3 7 5 6 0 7
interleave/deinterleave
As the name suggests, the interleave opcode creates a new array by alternating the values of two
input arrays. This operation is meant for vectors (one-dimensional arrays) only.
iArr1[] genarray 1,5
iArr2[] genarray -1,-5,-1
iArr[] interleave iArr1, iArr2
printarray iArr1, "%d", "array 1:"
printarray iArr2, "%d", "array 2:"
printarray iArr, "%d", "interleaved:"
Which prints:
array 1:
1 2 3 4 5
array 2:
-1 -2 -3 -4 -5
interleaved:
1 -1 2 -2 3 -3 4 -4 5 -5
And vice versa, deinterleave returns two arrays from one input array by alternating its values:
iArr[] genarray 1,10
iArr1[], iArr2[] deinterleave iArr
printarray iArr, "%d", "input array:"
printarray iArr1, "%d", "deinterleaved 1:"
printarray iArr2, "%d", "deinterleaved 2:"
Which prints:
input array:
1 2 3 4 5 6 7 8 9 10
deinterleaved 1:
1 3 5 7 9
deinterleaved 2:
2 4 6 8 10
Size Modifications
slicearray — New Array as Slice
The slicearray opcode creates a new array from an existing one. In addition to the input array the
first and the last (included) index must be specified:
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
iSlice[] slicearray iArr, 1, 3
printarray(iSlice, "%d") ; -> 3 9 5
SArr[] fillarray "bla", "blo", "bli"
Slice[] slicearray SArr, 1, 2
printarray(Slice) ; -> "blo", "bli"
Arrays have a fixed length, but it may become necessary to shorten or lengthen an array. trim_i
works for any array type at i-rate:
iArr[] fillarray 1, 3, 9, 5, 6, -1, 17
trim_i iArr, 3
printarray(iArr, "%d") ; -> 1 3 9
kArr[] fillarray 1, 3, 9, 5, 6, -1, 17
trim_i kArr, 5
printarray(kArr, 1, "%d") ; -> 1 3 9 5 6
aArr[] diskin "fox.wav"
prints "%d\n", lenarray(aArr) ; -> 1
trim_i aArr, 2
prints "%d\n", lenarray(aArr) ; -> 2
SArr[] fillarray "a", "b", "c", "d"
trim_i SArr, 2
printarray(SArr) ; -> "a", "b"
If a length bigger than the current array size is required, the additional elements are set to zero.
This can only be used for the init-time version trim_i:
iArr[] fillarray 1, 3, 9
trim_i iArr, 5
printarray(iArr, "%d") ; -> 1 3 9 0 0
At performance time, rather than at initialization, trim can be used. This code reduces the array
size by one for each trigger signal:
instr 1
kArr[] fillarray 1, 3, 9, 5, 6, -1, 17
kTrig metro 1
if kTrig==1 then
trim kArr, lenarray:k(kArr)-1
printarray kArr,-1,"%d"
endif
endin
schedule(1,0,5)
Prints:
1 3 9 5 6 -1
1 3 9 5 6
1 3 9 5
1 3 9
1 3
Growing an array during performance is not possible in Csound, because memory will only be
allocated at initialization. This is the reason that only trim_i can be used for this purpose.
Format Interchange
copyf2array — Function Table to Array
As function tables have been the classical way of working with vectors in Csound, switching be-
tween them and the array facility introduced in Csound 6 is a basic operation. Copying data from
a function table to a vector is done by copyf2array. The following example copies a sine function
table (8 points) to an array and prints the array content:
iFtSine ftgen 0, 0, 8, 10, 1
iArr[] init 8
copyf2array iArr, iFtSine
printarray iArr
; -> 0.0000 0.7071 1.0000 0.7071 0.0000 -0.7071 -1.0000 -0.7071
The copya2ftab opcode copies an array content to a function table. In the example a function
table of size 10 is created, and an array filled with the integers from 1 to 10. The array content is
then copied into the function table, and the resulting function table is printed via a while loop.
iTable ftgen 0, 0, 10, 2, 0
iArr[] genarray 1, 10
copya2ftab iArr, iTable
index = 0
while index < ftlen(iTable) do
prints "%d ", table:i(index, iTable)
index += 1
od
The tab2array opcode is similar to copyf2array but offers more possibilities. One difference is
that the resulting array is generated by the opcode, so there is no need for the user to create the
array in advance. This code copies the content of an 8-point saw function table into an array and
prints the array:
iFtSaw ftgen 0, 0, 8, 10, 1, -1/2, 1/3, -1/4, 1/5, -1/6
iArr[] tab2array iFtSaw
printarray(iArr)
; -> 0.0000 0.4125 0.7638 1.0000 0.0000 -1.0000 -0.7638 -0.4125
This will copy the values from index 1 to index 7 (not included):
iFtSaw ftgen 0, 0, 8, 10, 1, -1/2, 1/3, -1/4, 1/5, -1/6
iArr[] tab2array iFtSaw, 1, 7
printarray(iArr)
; -> 0.4125 0.7638 1.0000 0.0000 -1.0000 -0.7638
And this will copy the whole table but only every second value:
iFtSaw ftgen 0, 0, 8, 10, 1, -1/2, 1/3, -1/4, 1/5, -1/6
iArr[] tab2array iFtSaw, 0, 0, 2
printarray(iArr)
; -> 0.0000 0.7638 0.0000 -0.7638
The data of an f-signal — containing the result of a Fast Fourier Transform — can be copied into
an array with the opcode pvs2array. The counterpart pvsfromarray copies the content of an array
to an f-signal.
kFrame pvs2array kArr, fSigIn ;from f-signal fSig to array kArr
fSigOut pvsfromarray kArr [,ihopsize, iwinsize, iwintype]
- The array kArr must be declared in advance of its usage in these opcodes, usually with init.
- The size of this array depends on the FFT size of the f-signal fSigIn. If the FFT size is N, the
f-signal will contain N/2+1 amplitude-frequency pairs. For instance, if the FFT size is 1024,
the FFT will write out 513 bins, each bin containing one value for amplitude and one value
for frequency. So to store all these values, the array must have a size of 1026. In general, the
size of kArr equals FFT size plus two.
- The indices 0, 2, 4, … of kArr will contain the amplitudes; the indices 1, 3, 5, … will contain the
frequencies of the bins of a specific frame.
- The number of this frame is reported in the kFrame output of pvs2array. By this parameter
you know when pvs2array writes new values to the array kArr.
- On the way back, the FFT size of fSigOut, which is written by pvsfromarray, depends on the
size of kArr. If the size of kArr is 1026, the FFT size will be 1024.
- The default value for ihopsize is fftsize/4; the default value for iwinsize is the fftsize;
and the default value for iwintype is 1, which means a Hanning window.
Here is an example that implements a spectral high-pass filter. The f-signal is written to an array
and the amplitudes of the first 40 bins are then zeroed.4 This is only done when a new frame writes
its values to the array so as not to waste rendering power.
EXAMPLE 03E07_pvs_to_from_array.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gifil ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1 ;sound file in a function table
instr FFT_HighPass
ifftsize = 2048 ;fft size set to pvstanal default
fsrc pvstanal 1, 1, 1, gifil ;create fsig stream from function table
kArr[] init ifftsize+2 ;create array for bin data
kflag pvs2array kArr, fsrc ;export data to array
if changed:k(kflag) == 1 then ;only when a new frame has been written
 kndx = 0
 while kndx < 80 do ;zero the amplitudes of the first 40 bins
  kArr[kndx] = 0
  kndx += 2
 od
endif
fres pvsfromarray kArr ;read modified data back to an f-signal
aOut pvsynth fres ;resynthesize
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "FFT_HighPass" 0 2.7
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
4 As sample rate is here 44100,and fftsize is 2048, each bin has a frequency range of 44100 / 2048 = 21.533 Hz. Bin 0 looks
for frequencies around 0 Hz, bin 1 for frequencies around 21.533 Hz, bin 2 around 43.066 Hz, and so on. So setting the first
40 bin amplitudes to 0 means that no frequencies will be resynthesized which are lower than bin 40 which is centered at
40 * 21.533 = 861.328 Hz.
1D - 2D Interchange
reshapearray — Change Array Dimension
With reshapearray, a one-dimensional array can be transformed into a two-dimensional one, and
vice versa. In the following example, a 1D array of 12 elements is first printed and then transformed
into a 2D array with 3 rows and 4 columns:
iArr[] genarray 1, 12
printarray iArr, "%d", "1D array:"
reshapearray iArr, 3, 4
printarray iArr, "%d", "2D array:"
The opcodes getrow and getcol return the content of a 2D array’s row or column as a 1D array:
iArr[][] init 3, 4
iArr fillarray 1,2,3,4,5,6,7,8,9,10,11,12
printarray iArr, "%d", "2D array:"
iRow1[] getrow iArr, 0
printarray iRow1, "%d", "First row:"
iCol1[] getcol iArr, 0
printarray iCol1, "%d", "First columns:"
Prints:
2D array:
0: 1 2 3 4
1: 5 6 7 8
2: 9 10 11 12
First row:
1 2 3 4
First columns:
1 5 9
The opcodes setrow and setcol assign a 1D array as row or column of a 2D array:
iArr[][] init 3, 4
printarray iArr, "%d", "2D array empty:"
iRow[] fillarray 1, 2, 3, 4
iArr setrow iRow, 0
printarray iArr, "%d", "2D array with first row:"
iCol[] fillarray -1, -2, -3
iArr setcol iCol, 3
printarray iArr, "%d", "2D array with fourth column:"
Prints:
2D array empty:
0: 0 0 0 0
1: 0 0 0 0
2: 0 0 0 0
2D array with first row:
0: 1 2 3 4
1: 0 0 0 0
2: 0 0 0 0
2D array with fourth column:
0: 1 2 3 -1
1: 0 0 0 -2
2: 0 0 0 -3
The getrowlin opcode is similar to getrow but interpolates between adjacent rows of a matrix if a
non-integer number is given.
kArr[][] init 3, 4
kArr fillarray 1,2,3,4,5,6,7,8,9,10,11,12
printarray kArr, 1, "%d", "2D array:"
kRow[] getrowlin kArr, 0.5
printarray kRow, 1, "%d", "Row 0.5:"
The 0.5th row means an interpolation between first and second row, so this is the output:
2D array:
0: 1 2 3 4
1: 5 6 7 8
2: 9 10 11 12
Row 0.5:
3 4 5 6
Functions
Arithmetic Operators
The four basic operators +, -, * and / can directly be applied to an array, either with a scalar or a
second array as argument.
All operations can be applied to the input array itself (changing its content destructively), or can
create a new array as result. This is a simple example for the scalar addition:
iArr[] fillarray 1, 2, 3
iNew[] = iArr + 10 ; -> 11 12 13 as new array
iArr += 10 ; iArr is now 11 12 13
The same works for a 2D array:
iArr[][] init 2, 3
iArr fillarray 1, 2, 3, 4, 5, 6
printarray(iArr, "%d", "original array:")
iArr += 10
printarray(iArr, "%d", "modified array:")
Which prints:
original array:
0: 1 2 3
1: 4 5 6
modified array:
0: 11 12 13
1: 14 15 16
Both possibilities — creating a new array or modifying the existing one — are also valid if a second
array is given as argument:
iArr[] fillarray 1, 2, 3
iArg[] fillarray 10, 20, 30
iNew[] = iArr + iArg ; -> 11 22 33 as new array
iArr += iArg ; iArr is now 11 22 33
Both arrays must have the same size. This also works for 2D arrays:
iArr[][] init 2, 3
iArr fillarray 1, 2, 3, 4, 5, 6
printarray(iArr, "%d", "original array:")
iArg[][] init 2, 3
iArg fillarray 3, 4, 5, 6, 7, 8
printarray(iArg, "%d", "argument array:")
iArr += iArg
printarray(iArr,"%d", "modified array:")
Which prints:
original array:
0: 1 2 3
1: 4 5 6
argument array:
0: 3 4 5
1: 6 7 8
modified array:
0: 4 6 8
1: 10 12 14
Unary Functions
A number of unary mathematical functions can be applied directly to every element of an array,
among them:
- cosinv — arccosine
- sininv — arcsine
- taninv — arctangent
- sinh — hyperbolic sine
- cosh — hyperbolic cosine
- tanh — hyperbolic tangent
- cbrt — cube root
maparray
The maparray opcode was used in early array implementations to apply a unary function to every
element of a 1D array. In case a function is not in the list above, this old solution may still work.
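For instance, applying the square root to each element at init-time (a sketch using the init-time variant maparray_i, which takes the function name as a string):

```csound
iArr[] fillarray 1, 4, 9, 16
iRes[] maparray_i iArr, "sqrt"
printarray iRes ;square roots: 1 2 3 4
```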
Binary Functions
A binary function takes two arguments; applied to arrays, the second argument can be a scalar or
another array of the same size. The pow function, for instance:
iBase[] fillarray 1.1, 2.2, 3.3
iExp[] fillarray 2, -2, 0
iBasPow2[] pow iBase, 2 ; -> 1.2100 4.8400 10.8900
iBasExp[] pow iBase, iExp ; -> 1.2100 0.2066 1.0000
Print
The printarray opcode is easy to use and offers all possibilities to print out array contents.
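A brief sketch of its main options as used in this chapter: an optional format string and an optional label for i-arrays, and an additional trigger argument for k-arrays:

```csound
instr PrintDemo
 iArr[] fillarray 1.5, 2.5, 3.5
 printarray iArr ;default format
 printarray iArr, "%.1f", "i-array with format and label:"
 kArr[] fillarray 1, 2, 3
 ;for k-arrays, the second argument is a print trigger
 printarray kArr, metro:k(1), "%d", "k-array, once per second:"
endin
schedule("PrintDemo", 0, 2)
```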
Arrays in UDOs
244
03 E. ARRAYS Arrays in UDOs
This is a simple UDO definition which returns the first element of a given 1D k-array. Note that in
the intype list it is declared as k[], whereas in the input argument list it is declared as kArr[].
opcode FirstEl, k, k[]
kArr[] xin
kOut = kArr[0]
xout kOut
endop
The output declaration is done quite similarly: abstract type declaration in the outtypes list, and
variable name in the UDO body. Here the usual naming conventions apply, as explained at the
beginning of this chapter (first occurrence with brackets, then without brackets).
This is an example which creates an i-array of N elements, recursively applying a given ratio to
each element. The output array is declared as i[] in the outtypes list, and in the body as a variable,
first as iOut[] and then only as iOut.
opcode GeoSer,i[],iii
iStart, iRatio, iSize xin
iOut[] init iSize
indx = 0
while indx < iSize do
iOut[indx] = iStart
iStart *= iRatio
indx += 1
od
xout iOut
endop
The call
instr 1
iSeries[] GeoSer 2, 3, 5
printarray(iSeries,"%d")
endin
schedule(1,0,0)
will print:
2 6 18 54 162
As an expert note it should be mentioned that UDOs pass arrays by value. This means that an
input array is copied into the UDO, and an output array is copied to the instrument. This can slow
down performance for large arrays and k-rate calls.
Overload
Usually we want to use a UDO for different types of arrays. The best method is to overload the
function: define versions for the different types under the same opcode name, and Csound will
then select the appropriate version.
The following example extends the FirstEl opcode from k-arrays to i- and S-arrays.
EXAMPLE 03E08_array_overload.csd
<CsoundSynthesizer>
<CsOptions>
-m0
</CsOptions>
<CsInstruments>
ksmps = 32

opcode FirstEl, i, i[]
 iArr[] xin
 iOut = iArr[0]
 xout iOut
endop

opcode FirstEl, k, k[]
 kArr[] xin
 kOut = kArr[0]
 xout kOut
endop

opcode FirstEl, S, S[]
 SArr[] xin
 SOut = SArr[0]
 xout SOut
endop
instr Test
iTest[] fillarray 1, 2, 3
kTest[] fillarray 4, 5, 6
STest[] fillarray "x", "y", "z"
prints "First element of i-array: %d\n", FirstEl(iTest)
printks "First element of k-array: %d\n", 0, FirstEl(kTest)
printf "First element of S-array: %s\n", 1, FirstEl(STest)
turnoff
endin
</CsInstruments>
<CsScore>
i "Test" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Assume the input array is 1 2 3 4 5 6 7, and element 4 has been selected randomly and copied
into the output array at first position. The elements 5 6 7 are then shifted one position to the left,
so that the input array changes to
1 2 3 5 6 7
This procedure is repeated again and again; the next run only looks amongst six rather than
seven elements.
As Csound has no random opcode for integers, this is first defined as a helper function: RndInt
returns a random integer between iStart and iEnd (inclusive).
EXAMPLE 03E09_Shuffle.csd
<CsoundSynthesizer>
<CsOptions>
-m0
</CsOptions>
<CsInstruments>
ksmps = 32
seed 0
opcode RndInt, i, ii
iStart, iEnd xin
iRnd random iStart, iEnd+.999
iRndInt = int(iRnd)
xout iRndInt
endop
opcode ArrShuffle, i[], i[]
 iInArr[] xin
 ;work on a copy so that the caller's array is left untouched
 iLen = lenarray(iInArr)
 iWork[] = iInArr
 iOut[] init iLen
 iEnd = iLen - 1 ;highest index to choose from
 indx = 0
 while indx < iLen do
  ;select a random index and copy its element to the output array
  iRndIndx = RndInt(0, iEnd)
  iOut[indx] = iWork[iRndIndx]
  ;shift the elements right of the selected one to the left
  while iRndIndx < iEnd do
   iWork[iRndIndx] = iWork[iRndIndx+1]
   iRndIndx += 1
  od
  iEnd -= 1
  indx += 1
 od
 xout iOut
endop
instr Test
iValues[] fillarray 1, 2, 3, 4, 5, 6, 7
indx = 0
while indx < 5 do
iOut[] ArrShuffle iValues
printarray(iOut,"%d")
indx += 1
od
endin
</CsInstruments>
<CsScore>
i "Test" 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
03 F. LIVE EVENTS
Note: This chapter is not about live coding, which should be covered in a future chapter of its own.
For now, have a look at live.csound.com and Steven Yi's related csound-live-code repository.
The basic concept of Csound from the early days of the program is still valid and useful because
it is a musically familiar one: you create a set of instruments and instruct them to play at various
times. These calls of instrument instances, and their execution, are called instrument events.
Whenever any Csound code is executed, it has to be compiled first. Since Csound6, you can
change the code of any running Csound instance, and recompile it on the fly. There are basically
two opcodes for this live coding: compileorc re-compiles any existing orc file, whereas compilestr
compiles any string. At the end of this chapter, we will present some simple examples for both
methods, followed by a description of how to re-compile code on the fly in CsoundQt.
This scheme of instruments and events can be set in motion in a number of ways. In the classical
approach you think of an orchestra with a number of musicians playing from a score, but you can
also trigger instruments using any kind of live input: from MIDI, from OSC, from the command
line, from a GUI (such as Csound’s FLTK widgets or the widgets in CsoundQt, Cabbage and Blue),
from the API. Or you can create a kind of master instrument, which is always on, and triggers other
instruments using opcodes designed for this task, perhaps under certain conditions: if the live
audio input from a singer has been detected to have a base frequency greater than 1043 Hz, then
start an instrument which plays a soundfile of broken glass …
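Such a master instrument could be sketched as follows. This is a hedged sketch, not the chapter's own code: the instrument name Glass, the use of the ptrack pitch tracker, and all parameter values are assumptions for illustration.

```csound
instr Master
 ;live audio input from channel 1
 aIn inch 1
 ;rough pitch/amplitude tracking (hop size of 512 samples)
 kCps, kAmp ptrack aIn, 512
 ;1 while the detected pitch is above the threshold, else 0
 kAbove = kCps > 1043 ? 1 : 0
 ;trigger only at the moment of an upward crossing
 if kAbove == 1 && changed(kAbove) == 1 then
  ;start a five-second instance of the (hypothetical) instrument "Glass"
  schedulek "Glass", 0, 5
 endif
endin
```

The changed() guard makes sure the event fires once per crossing rather than in every control cycle while the pitch stays above the threshold.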
Order of Execution Revisited
It is worth having a closer look at what exactly happens in time if you trigger an instrument
from inside another instrument. The first example shows the result when instrument 2 triggers
instrument 1 and instrument 3 at init-time.
EXAMPLE 03F01_OrderOfExc_event_i.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 441
instr 1
kCycle timek
prints "Instrument 1 is here at initialization.\n"
printks "Instrument 1: kCycle = %d\n", 0, kCycle
endin
instr 2
kCycle timek
prints " Instrument 2 is here at initialization.\n"
printks " Instrument 2: kCycle = %d\n", 0, kCycle
event_i "i", 3, 0, .02
event_i "i", 1, 0, .02
endin
instr 3
kCycle timek
prints " Instrument 3 is here at initialization.\n"
printks " Instrument 3: kCycle = %d\n", 0, kCycle
endin
</CsInstruments>
<CsScore>
i 2 0 .02
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Instrument 2 is the first one to initialize, because it is the only one which is called from the score.
Then instrument 3 is initialized, because it is called first by instrument 2. The last one is instrument
1. All this is done before the actual performance begins. In the performance itself, starting from
the first control cycle, all instruments are executed in their order.
Let us compare now what is happening when instrument 2 calls instrument 1 and 3 during the
performance (= at k-time):
EXAMPLE 03F02_OrderOfExc_event_k.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 441
0dbfs = 1
nchnls = 1
instr 1
kCycle timek
prints "Instrument 1 is here at initialization.\n"
printks "Instrument 1: kCycle = %d\n", 0, kCycle
endin
instr 2
kCycle timek
prints " Instrument 2 is here at initialization.\n"
printks " Instrument 2: kCycle = %d\n", 0, kCycle
if kCycle == 1 then
event "i", 3, 0, .02
event "i", 1, 0, .02
endif
printks " Instrument 2: still in kCycle = %d\n", 0, kCycle
endin
instr 3
kCycle timek
prints " Instrument 3 is here at initialization.\n"
printks " Instrument 3: kCycle = %d\n", 0, kCycle
endin
instr 4
kCycle timek
prints " Instrument 4 is here at initialization.\n"
printks " Instrument 4: kCycle = %d\n", 0, kCycle
endin
</CsInstruments>
<CsScore>
i 4 0 .02
i 2 0 .02
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Instrument 2 starts with its init-pass, and then instrument 4 is initialized. As you see, the reverse
order of the score lines has no effect; instruments which start at the same time are executed
in ascending order of their numbers.
In this first cycle, instrument 2 calls instruments 3 and 1. As we see by the output of instrument
4, the whole control cycle is finished first, before instruments 3 and 1 (in this order) are initialized.
Both instruments start their performance in cycle number two, where they find themselves
in the usual order: instrument 1 before instrument 2, and instrument 3 before instrument 4.
Usually you will not need to know all of this in such detail. But in case you experience
any timing problems, a clearer awareness of the process may help.
Instrument Events from the Score
EXAMPLE 03F03_Score_tricks.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giWav ftgen 0, 0, 2^10, 10, 1, .5, .25, .1 ;waveform for poscil (partial strengths chosen arbitrarily)
instr 1
kFadout init 1
krel release ;returns "1" if last k-cycle
if krel == 1 && p3 < 0 then ;if so, and negative p3:
xtratim .5 ;give 0.5 extra seconds
kFadout linseg 1, .5, 0 ;and make fade out
endif
kEnv linseg 0, .01, p4, abs(p3)-.1, p4, .09, 0; normal fade out
aSig poscil kEnv*kFadout, p5, giWav
outs aSig, aSig
endin
</CsInstruments>
<CsScore>
t 0 120 ;set tempo to 120 beats per minute
i 1 0 1 .2 400 ;play instr 1 for one second
i 1 2 -10 .5 500 ;play instr 1 indefinitely (negative p3)
i -1 5 0 ;turn it off (negative p1)
; -- turn on instance 1 of instr 1 one sec after the previous start
i 1.1 ^+1 -10 .2 600
Triggering an instrument with an indefinite duration by setting p3 to any negative value, and stop-
ping it by a negative p1 value, can be an important feature for live events. If you turn instruments
off in this way you may have to add a fade-out segment. One method of doing this is shown in the
instrument above with a combination of the release and the xtratim opcodes. Also note that you
can start and stop individual instances of an instrument by using a fractional number as p1.
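A short score sketch of this technique (p-fields as in the example above; amplitudes and frequencies chosen arbitrarily):

```csound
;start two independent instances of instr 1 with indefinite duration
i 1.1 0 -1 .2 400
i 1.2 1 -1 .2 500
;later, turn each instance off individually via its fractional number
i -1.1 3 0
i -1.2 4 0
```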
Using MIDI Note-On Events
EXAMPLE 03F04_Midi_triggered_events.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ;sine wave for the fm synthesis
instr 1
iFreq cpsmidi ;gets frequency of a pressed key
iAmp ampmidi 8 ;gets amplitude and scales 0-8
iRatio random .9, 1.1 ;ratio randomly between 0.9 and 1.1
aTone foscili .1, iFreq, 1, iRatio/5, iAmp+1, giSine ;fm
aEnv linenr aTone, 0, .01, .01 ; avoiding clicks at the note-end
outs aEnv, aEnv
endin
</CsInstruments>
<CsScore>
f 0 36000; play for 10 hours
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Using Widgets
If you want to trigger an instrument event in real time with a Graphical User Interface, it is usually
a button widget which will do this job. We will look at a simple example, first implemented using
Csound's FLTK widgets, and then using CsoundQt's widgets.
FLTK Button
This is a very simple example demonstrating how to trigger an instrument using an FLTK button.
A more extended example can be found here.
EXAMPLE 03F05_FLTK_triggered_events.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;a minimal FLTK panel (reconstructed; sizes and positions are arbitrary)
FLpanel "Trigger", 200, 100
;button which sends the score event "i 1 0 1" when pressed
k1, ih1 FLbutton "Play", 0, 0, 1, 150, 35, 25, 10, 0, 1, 0, 1
;button which quits Csound via instr 2 ("i 2 0 1")
k2, ih2 FLbutton "Quit", 0, 0, 1, 150, 35, 25, 55, 0, 2, 0, 1
FLpanelEnd
FLrun
instr 1
idur random .5, 3; recalculate instrument duration
p3 = idur; reset instrument duration
ioct random 8, 11; random values between 8th and 11th octave
idb random -18, -6; random values between -6 and -18 dB
aSig poscil ampdb(idb), cpsoct(ioct)
aEnv transeg 1, p3, -10, 0
outs aSig*aEnv, aSig*aEnv
endin
instr 2
exitnow
endin
</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note that in this example the duration of an instrument event is recalculated when the instrument is
initialised. This is done using the statement p3 = idur. This can be a useful technique if you want
the duration that an instrument plays for to be different each time it is called. In this example the
duration is the result of a random function. The duration defined by the FLTK button will be
overwritten by any other calculation within the instrument itself at i-time.
CsoundQt Button
In CsoundQt, a button can be created easily from the submenu in a widget panel.
In the Properties Dialog of the button widget, make sure you have selected event as Type. Insert
a Channel name, and at the bottom type in the event you want to trigger - as you would if writing a
line in the score.
In your Csound code, you need nothing more than the instrument you want to trigger.
For more information about CsoundQt, read the CsoundQt chapter in the Frontends section of this
manual.
Using A Realtime Score
If Csound runs with the option -L stdin, you can insert score lines from the terminal in real time:
EXAMPLE 03F06_Commandline_rt_events.csd
<CsoundSynthesizer>
<CsOptions>
-L stdin -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
idur random .5, 3; calculate instrument duration
p3 = idur; reset instrument duration
ioct random 8, 11; random values between 8th and 11th octave
idb random -18, -6; random values between -6 and -18 dB
aSig oscils ampdb(idb), cpsoct(ioct), 0
aEnv transeg 1, p3, -10, 0
outs aSig*aEnv, aSig*aEnv
endin
</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
If you now type the line i 1 0 1 and press return, you should hear that instrument 1 has been executed.
You can repeat this as often as you like; each call plays a note with a new random duration, pitch
and volume.
By Conditions
We have first discussed the classical method of triggering instrument events from the score sec-
tion of a .csd file, then we went on to look at different methods of triggering real-time events: using
MIDI, using widgets, and using score lines inserted live. We will now look at the Csound
orchestra itself and at some methods by which an instrument can internally trigger another instru-
ment. The pattern of triggering could be governed by conditionals, or by different kinds of loops.
As this master instrument can itself be triggered by a realtime event, you have unlimited options
available for combining the different methods.
Let’s start with conditionals. If we have a realtime input, we may want to define a threshold, and
trigger an event
In Csound, this could be implemented using an orchestra of three instruments. The first instru-
ment is the master instrument. It receives the input signal and investigates whether that signal is
crossing the threshold and if it does whether it is crossing from low to high or from high to low.
If it crosses the threshold from low to high the second instrument is triggered, if it crosses from
high to low the third instrument is triggered.
EXAMPLE 03F07_Event_by_condition.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 1000 2 ;change p4 to "1" for live input
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Using i-Rate Loops for Calculating a Pool of Instrument Events
Using this opportunity we can introduce the scoreline / scoreline_i opcodes. They are quite similar
to the event / event_i opcodes but have two major benefits:
- You can write more than one scoreline by using {{ at the beginning and }} at the end.
- You can send a string to the subinstrument (which is not possible with the event opcode).
Let’s look at a simple example for executing score events from an instrument using the scoreline
opcode:
EXAMPLE 03F08_Generate_event_pool.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 7
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
You might rightly say: "OK, that's nice, but I can also write score lines in the score itself!"
That's right, but the advantage of the scoreline_i method is that you can render the score events
in an instrument, and then send them out to one or more instruments to execute them. This can
be done with the sprintf opcode, which produces the string for scoreline in an i-time loop (see the
chapter about control structures).
EXAMPLE 03F09_Events_sprintf.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 1 ;p3 is automatically set to the total duration
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
In this example, seven events have been rendered in an i-time loop in instrument 1. The result is
stored in the string variable Slines. This string is given at i-time to scoreline_i, which then executes
the events one by one according to their starting times (p2), durations (p3) and other parameters.
Instead of collecting all score lines in a single string, you can also execute them inside the i-time
loop. In this way, too, all the single score lines are added to Csound's event pool. The next example
shows an alternative version of the previous one, adding the instrument events one by one in
the i-time loop, either with event_i (instr 1) or with scoreline_i (instr 2):
EXAMPLE 03F10_Events_collected.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 1
i 2 14 1
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Using Time Loops
In the next example, a master instrument performs a time loop (choose either instr 1 for the timout
method or instr 2 for the metro method) and triggers a subinstrument once in each loop. The
subinstrument itself (instr 10) performs
method) and triggers once in a loop a subinstrument. The subinstrument itself (instr 10) performs
an i-time loop and triggers several instances of a sub-subinstrument (instr 100). Each instance
performs a partial with an independent envelope for a bell-like additive synthesis.
EXAMPLE 03F11_Events_time_loop.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr 1; time loop with timout. events are triggered by event_i (i-rate)
loop:
idurloop random 1, 4; duration of each loop
timout 0, idurloop, play
reinit loop
play:
idurins random 1, 5; duration of the triggered instrument
event_i "i", 10, 0, idurins; triggers instrument 10
endin
instr 2; time loop with metro. events are triggered by event (k-rate)
kfreq init 1; give a start value for the trigger frequency
kTrig metro kfreq
if kTrig == 1 then ;if trigger impulse:
kdur random 1, 5; random duration for instr 10
event "i", 10, 0, kdur; call instr 10
kfreq random .25, 1; set new value for trigger frequency
endif
endin
</CsInstruments>
<CsScore>
i 1 0 300 ;try this, or the next line (or both)
;i 2 0 300
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Which Opcode Should I Use?
Let us start with the general question about i-rate and k-rate opcodes.1 In short: using event_i (the
i-rate version) will trigger an event only once, when the instrument in which this opcode works is
initialized. Using event (the k-rate version) will trigger an event potentially again and again, in each
control cycle, as long as the instrument runs. This is a very simple example:
EXAMPLE 03F12_event_i_vs_event.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
sr=44100
ksmps = 32
instr Call_i
;call another instrument at i-rate
event_i "i", "Called_i", 0, 1
endin
giInstCi init 1
giInstCk init 1

instr Call_k
;call another instrument at k-rate
event "i", "Called_k", 0, 1
endin
1 See chapter 03A about Initialization and Performance Pass for a detailed discussion.
instr Called_i
;report that instrument starts and which instance
prints "Instance #%d of Called_i is starting!\n", giInstCi
;increment number of instance for next instance
giInstCi += 1
endin
instr Called_k
;report that instrument starts and which instance
prints " Instance #%d of Called_k is starting!\n", giInstCk
;increment number of instance for next instance
giInstCk += 1
endin
</CsInstruments>
<CsScore>
;run "Call_i" for one second
i "Call_i" 0 1
;run "Call_k" for 1/100 seconds
i "Call_k" 0 0.01
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Although instrument Call_i runs for one second, the call to instrument Called_i is performed only
once, because it is done with event_i: at initialization only. But instrument Call_k calls one instance
of Called_k in each control cycle; so for the 0.01 seconds during which instrument Call_k runs,
fourteen instances of instrument Called_k are started.2 This is the output:
Instance #1 of Called_i is starting!
Instance #1 of Called_k is starting!
Instance #2 of Called_k is starting!
Instance #3 of Called_k is starting!
Instance #4 of Called_k is starting!
Instance #5 of Called_k is starting!
Instance #6 of Called_k is starting!
Instance #7 of Called_k is starting!
Instance #8 of Called_k is starting!
Instance #9 of Called_k is starting!
Instance #10 of Called_k is starting!
Instance #11 of Called_k is starting!
Instance #12 of Called_k is starting!
Instance #13 of Called_k is starting!
Instance #14 of Called_k is starting!
So the first (and probably most important) decision in asking “which opcode should I use”, is the
answer to the question: “Do I need an i-rate or a k-rate opcode?”
2 For a sample rate of 44100 Hz (sr=44100) and a control period of 32 samples (ksmps=32), we have 1378.125 control
cycles per second; 0.01 seconds therefore comprise 14 control cycles (rounded up).
There are two differences between schedule and event_i. The first is that schedule can only trigger
instruments, whereas event_i can also trigger f events (= build function tables).
The second difference is that schedule can pass strings to the called instrument, but event_i (and
event) can not. So, if you execute this code, the string "blu" is passed as p4 to instrument "bla":
schedule "bla", 0, 1, "blu"
Sending strings is also possible with scoreline_i. This opcode takes one or more lines of score
statements which follow the same conventions as if written in the score section itself.3 If you
enclose the line(s) by {{ and }}, you can include as many strings in them as you wish:
scoreline_i {{
i "bla" 0 1 "blu" "sound"
i "bla" 1 1 "brown" "earth"
}}
The advantage of schedulek over event is the possibility to pass strings as p-fields. On the other
hand, event can generate not only instrument events, but also other score events. For instrument
events, the syntax is:
event "i", kInstrNum (or "InstrName"), kStart, kDur [, kp4] [, kp5] [...]
3 This means that score parameter fields are separated by spaces, not by commas.
Usually, you will not want to trigger another instrument in each control cycle, but rather based on
certain conditions. A very common case is a "ticking" periodic signal whose ticks are used as
trigger impulses. The typical code snippet, using the metro and schedulek opcodes, would be:
kTrigger metro 1 ;"ticks" once a second
if kTrigger == 1 then ;if it ticks
schedulek "my_instr", 0, 1 ;call the instrument
endif
In other words: this code uses only one control cycle per second to call my_instr, and does
nothing in the other control cycles. The schedkwhen opcode simplifies such typical use cases,
and adds some other useful arguments. This is the syntax:
schedkwhen kTrigger, kMinTim, kMaxNum, kInstrNum (or "InstrName"),
kStart, kDur [, kp4] [, kp5] [...]
The kMinTim parameter specifies the minimum time which has to pass between two subsequent
calls of the subinstrument. This is often quite useful, as you may want to state: "Do not call the
next instance of the subinstrument unless 0.1 seconds have passed." If you set this parameter to
zero, there is no time limit for calling the subinstrument.
The kMaxNum parameter specifies the maximum number of instances which may run simultaneously.
If, say, kMaxNum = 2 and two instances of the subinstrument are indeed running, no other
instance will be initiated. If you set this parameter to zero, there is no limit for calling new
instances.
So, with schedkwhen, we can write the above code snippet in two lines instead of four:
kTrigger metro 1 ;"ticks" once a second
schedkwhen kTrigger, 0, 0, "my_instr", 0, 1
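A sketch with both limits in use, assuming the same hypothetical my_instr as above:

```csound
;a fast trigger signal: eight ticks per second
kTrigger metro 8
;call my_instr at most once every .25 seconds,
;and only if fewer than 2 instances are currently running
schedkwhen kTrigger, .25, 2, "my_instr", 0, 1
```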
However, you cannot pass strings as p-fields via schedkwhen (and event). So, quite similarly to
what was described above for the i-rate opcodes, scoreline (as well as schedulek) fills this gap.
Usually we will use it with a condition, as we did for the event opcode:
kTrigger metro 1 ;"ticks" once a second
if kTrigger == 1 then
;if it ticks, call two instruments and pass strings as p-fields
scoreline {{
i "bla" 0 1 "blu" "sound"
i "bla" 1 1 "brown" "earth"
}}
endif
Recompilation
As mentioned at the start of this chapter, since Csound6 you can re-compile any code in an
already running Csound instance. Let us first see some simple examples of the general use,
and then a more practical approach in CsoundQt.
compileorc / compilestr
The opcode compileorc refers to a definition of instruments which has been saved as an
.orc ("orchestra") file. To see how it works, save this text in plain text (ASCII) format as
"to_recompile.orc":
instr 1
iAmp = .2
iFreq = 465
aSig oscils iAmp, iFreq, 0
outs aSig, aSig
endin
EXAMPLE 03F13_compileorc.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -d -L stdin -Ma
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
ksmps = 32
0dbfs = 1
massign 0, 9999
instr 9999
ires compileorc "to_recompile.orc"
print ires ; 0 if compiled successfully
event_i "i", 1, 0, 3 ;send event
endin
</CsInstruments>
<CsScore>
i 9999 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
If you run this csd in the terminal, you should hear a three-second beep, and the output should
look like this:
SECTION 1:
new alloc for instr 9999:
instr 9999: ires = 0.000
new alloc for instr 1:
B 0.000 .. 1.000 T 1.000 TT 1.000 M: 0.20000 0.20000
B 1.000 .. 3.000 T 3.000 TT 3.000 M: 0.20000 0.20000
Score finished in csoundPerform().
inactive allocs returned to freespace
end of score. overall amps: 0.20000 0.20000
overall samples out of range: 0 0
0 errors in performance
Having understood this, it is easy to take the next step. Remove (or comment out) the score line
i 9999 0 1 so that the score is empty. If you start the csd now, Csound will run indefinitely. Now call
instr 9999 by typing i 9999 0 1 in the terminal window (if the option -L stdin works for your setup),
or by pressing any MIDI key (if you have connected a keyboard). You should hear the same beep
as before. But as the csd keeps running, you can now change instrument 1 in the file
to_recompile.orc. Try, for instance, another value for iFreq. Whenever this is done (and the file is
saved) and you call instr 9999 again, the new version of this instrument is compiled and
then called immediately.
The other possibility to recompile code by using an opcode is compilestr. It will compile any in-
strument definition which is contained in a string. As this will usually be a string with several lines,
you will mostly use the {{ delimiter for the start and }} for the end of the string. This is a basic example:
EXAMPLE 03F14_compilestr.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -d
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
 ;compile the definition of instrument 2 from a string at i-time
 ires compilestr {{
instr 2
 aSig oscils .2, 415, 0
 out aSig
endin
}}
 print ires ;0 if compiled successfully
 ;now the new instrument can be called
 schedule 2, 0, 1
endin
</CsInstruments>
<CsScore>
i1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Instrument 2 is defined inside instrument 1, and compiled via compilestr. In case you can change
this string in real time (for instance by receiving it via OSC), you can add any new instrument
definition on the fly.
The frontends offer simplified methods for recompilation. In CsoundQt, for instance, you can se-
lect any instrument, and choose Edit > Evaluate Selection.
03 G. USER DEFINED OPCODES
Opcodes are the core units of everything that Csound does. They are like little machines that do
a job, and programming is akin to connecting these little machines to perform a larger job. An
opcode usually has something which goes into it: the inputs or arguments, and usually it has
something which comes out of it: the output which is stored in one or more variables. Opcodes
are written in the programming language C (that is where the name Csound comes from). If you
want to create a new opcode in Csound, you must write it in C. How to do this is described in
the Extending Csound chapter of this manual, and is also described in the relevant chapter of the
Canonical Csound Reference Manual.
There is, however, a way of writing your own opcodes in the Csound Language itself. Opcodes
written in this way are called User Defined Opcodes or UDOs. A UDO behaves in the same
way as a standard opcode: it has input arguments, and usually one or more output variables. It
runs at i-time or at k-time. You use it as part of the Csound Language after you have defined
and loaded it.
User Defined Opcodes have many valuable properties. They make your instrument code clearer
because they allow you to create abstractions of blocks of code. Once a UDO has been defined
it can be recalled and repeated many times within an orchestra, each repetition requiring only
a single line of code. UDOs allow you to build up your own library of functions you need and
return to frequently in your work. In this way, you build your own Csound dialect within the Csound
Language. UDOs also represent a convenient format with which to share your work in Csound with
other users.
This chapter explains, initially with a very basic example, how you can build your own UDOs, and
what options they offer. Following this, the practice of loading UDOs in your .csd file is shown,
followed by some tips in regard to some unique capabilities of UDOs. Finally some examples are
shown for different User Defined Opcode definitions and applications.
If you want to write a User Defined Opcode in Csound6 which uses arrays, have a look at the end
of chapter 03E to see their usage and naming conventions.
Transforming Csound Instrument Code to a User Defined Opcode
Let us start with the following instrument code:
EXAMPLE 03G01_Pre_UDO.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aDel init 0; initialize delay signal
iFb = .7; feedback multiplier
aSnd rand .2; white noise
kdB randomi -18, -6, .4; random movement between -18 and -6
aSnd = aSnd * ampdb(kdB); applied as dB to noise
kFiltFq randomi 100, 1000, 1; random movement between 100 and 1000
aFilt reson aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt balance aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm randomi .1, .8, .2; random movement between .1 and .8 as delay time
aDel vdelayx aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt randomi -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel randomi -12, 0, 1; ... for the filtered and the delayed signal
aOut = aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
outs aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
This is filtered noise and its delay, which is fed back again into the delay line at a certain ratio
iFb. The filter moves as kFiltFq randomly between 100 and 1000 Hz. The volume of the filtered
noise moves as kdB randomly between -18 dB and -6 dB. The delay time moves between 0.1
and 0.8 seconds, and finally both signals are mixed together.
Basic Example
If this signal processing unit is to be transformed into a User Defined Opcode, the first question
concerns the extent of the code that will be encapsulated: where will the UDO code begin and
end? The first solution could be a radical, and possibly bad, approach: to transform the whole
instrument into a UDO.
EXAMPLE 03G02_All_to_UDO.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
opcode FiltFb, 0, 0
aDel init 0; initialize delay signal
iFb = .7; feedback multiplier
aSnd rand .2; white noise
kdB randomi -18, -6, .4; random movement between -18 and -6
aSnd = aSnd * ampdb(kdB); applied as dB to noise
kFiltFq randomi 100, 1000, 1; random movement between 100 and 1000
aFilt reson aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt balance aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm randomi .1, .8, .2; random movement between .1 and .8 as delay time
aDel vdelayx aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt randomi -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel randomi -12, 0, 1; ... for the filtered and the delayed signal
aOut = aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
out aOut, aOut
endop
instr 1
FiltFb
endin
</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Before we continue the discussion about the quality of this transformation, we should first have a look at the syntax. The general syntax for a User Defined Opcode is:
opcode name, outtypes, intypes
...
endop
Here, the name of the UDO is FiltFb. You are free to use any name, but it is suggested that you
begin the name with a capital letter. By doing this, you avoid duplicating the name of most of the
pre-existing opcodes which normally start with a lower case letter. As we have no input arguments
and no output arguments for this first version of FiltFb, both outtypes and intypes are set to zero.
Similar to the instr … endin block of a normal instrument definition, for a UDO the opcode … endop
keywords begin and end the UDO definition block. In the instrument, the UDO is called like a normal
opcode by using its name, and in the same line the input arguments are listed on the right and the
output arguments on the left. In the previous example, FiltFb has no input and output arguments
so it is called by just using its name:
instr 1
FiltFb
endin
Now - why is this UDO more or less useless? It achieves nothing when compared to the original non-UDO version, and in fact loses some of the advantages of the instrument-defined version.
Firstly, it is not advisable to include this line in the UDO:
out aOut, aOut
This statement writes the audio signal aOut from inside the UDO to the output device. Imagine
you want to change the output channels, or you want to add any signal modifier after the opcode.
This would be impossible with this statement. So instead of including the out opcode, we give the
FiltFb UDO an audio output:
xout aOut
The xout statement of a UDO definition works like the “outlets” in PD or Max, sending the result(s)
of an opcode back to the caller instrument.
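As a minimal sketch of this pattern, with a hypothetical opcode name, an audio signal is received via xin, processed, and returned via xout:

```
opcode HalfVolume, a, a
 aIn xin           ;receive the audio signal from the caller ("inlet")
 aOut = aIn * 0.5  ;process it: here, simply halve the amplitude
 xout aOut         ;send the result back to the caller ("outlet")
endop
```
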
Now let us consider the UDO’s input arguments, choose which processes should be carried out
within the FiltFb unit, and what aspects would offer greater flexibility if controllable from outside
the UDO. First, the aSnd parameter should not be restricted to a white noise with amplitude 0.2,
but should be an input (like a “signal inlet” in PD/Max). This is implemented using the line:
aSnd xin
Both the output and the input type must be declared in the first line of the UDO definition, whether
they are i-, k- or a-variables. So instead of opcode FiltFb, 0, 0 the statement has changed
now to opcode FiltFb, a, a, because both the input and the output are a-variables.
The UDO is now much more flexible and logical: it takes any audio input, it performs the filtered
delay and feedback processing, and returns the result as another audio signal. In the next example,
instrument 1 does exactly the same as before. Instrument 2 has live input instead.
EXAMPLE 03G03_UDO_more_flex.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
opcode FiltFb, a, a
aSnd xin
aDel init 0; initialize delay signal
iFb = .7; feedback multiplier
kdB randomi -18, -6, .4; random movement between -18 and -6
aSnd = aSnd * ampdb(kdB); applied as dB to noise
kFiltFq randomi 100, 1000, 1; random movement between 100 and 1000
aFilt reson aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt balance aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm randomi .1, .8, .2; random movement between .1 and .8 as delay time
aDel vdelayx aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt randomi -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel randomi -12, 0, 1; ... for the filtered and the delayed signal
aOut = aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
xout aOut
endop
</CsInstruments>
<CsScore>
i 1 0 60 ;change to i 2 for live audio input
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Is this now the optimal version of the FiltFb User Defined Opcode? Obviously there are other parts of the opcode definition which could be controllable from outside: the feedback multiplier iFb, the random volume movement of the input signal kdB, the random movement of the filter frequency kFiltFq, and the random movements of the output mix kdbFilt and kdbDel. Is it better to put them outside of the opcode definition, or is it better to leave them inside?
There is no general answer. It depends on the degree of abstraction you desire or prefer to relinquish. If you are working on a piece for which all of the parameter settings are already defined as required in the UDO, then control from the caller instrument may not be necessary. The advantage of minimizing the number of input and output arguments is simplicity in using the UDO. The more flexibility you require from your UDO, however, the greater the number of input arguments that will be needed. Providing more control is better for later reusability, but may be unnecessarily complicated.
Perhaps the best solution is to have one abstract definition which performs one task, and to create a derivative - also as a UDO - fine-tuned for the particular project you are working on. The final example demonstrates the definition of a general and more abstract UDO FiltFb, and its various applications: instrument 1 defines the specifications in the instrument itself; instrument 2 uses a second UDO Opus123_FiltFb for this purpose; instrument 3 sets the general FiltFb in a new context of two varying delay lines with a buzz sound as input signal.
EXAMPLE 03G04_UDO_calls_UDO.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
opcode Opus123_FiltFb, a, a
;;the udo FiltFb here in my opus 123 :)
;input = aSnd
;output = filtered and delayed aSnd in different mixtures
aSnd xin
kdB randomi -18, -6, .4; random movement between -18 and -6
aSnd = aSnd * ampdb(kdB); applied as dB to noise
kFiltFq randomi 100, 1000, 1; random movement between 100 and 1000
iQ = 5
iFb = .7; feedback multiplier
aDelTm randomi .1, .8, .2; random movement between .1 and .8 as delay time
aFilt, aDel FiltFb aSnd, iFb, kFiltFq, iQ, 1, aDelTm
kdbFilt randomi -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel randomi -12, 0, 1; ... for the noise and the delay signal
aOut = aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
xout aOut
endop
</CsInstruments>
<CsScore>
i 1 0 30
i 2 31 30
i 3 62 120
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The good thing about these different possibilities of writing a more specific or a more generalized UDO: you need not decide at the beginning of your work. Just start with any formulation you find useful in a certain situation. If you continue and see that you should have some more parameters accessible, it should be easy to rewrite the UDO. Just be careful not to confuse the different versions you create: use names like Faulty1, Faulty2 etc. instead of overwriting Faulty. Making use of extensive commenting when you initially create the UDO will make it easier to adapt the UDO at a later time. What are the inputs (including the measurement units they use, such as Hertz or seconds)? What are the outputs? How you do this is up to you and depends on your style and preference.
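Such commenting might look like the following sketch for the general FiltFb used in the example above. Its definition is not shown in this excerpt, so the body here is a reconstruction: the argument order and types are assumed from the call aFilt, aDel FiltFb aSnd, iFb, kFiltFq, iQ, 1, aDelTm.

```
opcode FiltFb, aa, aikiia
;filtered delay line with feedback
;aSnd - audio input signal
;iFb - feedback multiplier (0-1)
;kFiltFq - filter center frequency in Hz
;iQ - filter quality (bandwidth = kFiltFq/iQ)
;iMaxDel - maximum delay time in seconds
;aDelTm - delay time in seconds (audio rate)
;returns the filtered signal and the delayed signal
 aSnd, iFb, kFiltFq, iQ, iMaxDel, aDelTm xin
 aDel init 0                           ;initialize delay signal
 aFilt reson aSnd, kFiltFq, kFiltFq/iQ ;filter the input
 aFilt balance aFilt, aSnd             ;bring aFilt to the volume of aSnd
 aDel vdelayx aFilt + iFb*aDel, aDelTm, iMaxDel, 128 ;variable delay with feedback
 xout aFilt, aDel
endop
```
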
How to Use the User Defined Opcode Facility in Practice
You can load as many User Defined Opcodes into a Csound orchestra as you wish. As long as they
do not depend on each other, their order is arbitrary. If UDO Opus123_FiltFb uses the UDO FiltFb
for its definition (see the example above), you must first load FiltFb, and then Opus123_FiltFb. If
not, you will get an error like this:
orch compiler:
opcode Opus123_FiltFb a a
error: no legal opcode, line 25:
aFilt, aDel FiltFb aSnd, iFb, kFiltFq, iQ, 1, aDelTm
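The ordering rule can be illustrated with two minimal hypothetical UDOs: Half must be defined first, because Quarter calls it.

```
opcode Half, i, i
 iVal xin
 xout iVal/2
endop

opcode Quarter, i, i   ;depends on Half, so it must be defined after it
 iVal xin
 iHalf Half iVal       ;call the previously defined UDO
 xout iHalf/2
endop
```
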
1. Save your opcode definitions in a plain text file, for instance MyOpcodes.txt.
2. If this file is in the same directory as your .csd file, you can just call it by the statement:
#include "MyOpcodes.txt"
3. If MyOpcodes.txt is in a different directory, you must call it by the full path name, for instance:
#include "/Users/me/Documents/Csound/UDO/MyOpcodes.txt"
As always, make sure that the #include statement is the last one in the orchestra header, and that the logical order is respected if one opcode depends on another.
If you work with User Defined Opcodes a lot and build up a collection of them, the #include feature allows you to easily import several or all of them into your .csd file.
EXAMPLE 03G06_UDO_setksmps.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 44100 ;very high because of printing
opcode Faster, 0, 0
setksmps 4410 ;local ksmps is 1/10 of global ksmps
printks "UDO print!%n", 0
endop
instr 1
printks "Instr print!%n", 0 ;print each control period (once per second)
Faster ;print 10 times per second because of local ksmps
endin
</CsInstruments>
<CsScore>
i 1 0 2
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Default Arguments
For i-time arguments, you can use a simple feature to set default values:
- o (instead of i) defaults to 0
- p (instead of i) defaults to 1
- j (instead of i) defaults to -1
For k-time arguments, you can use these default values since Csound 5.18:
- O (instead of k) defaults to 0
- P (instead of k) defaults to 1
- V (instead of k) defaults to 0.5
- J (instead of k) defaults to -1
So you can omit these arguments - in this case the default values will be used. If you give an input
argument instead, the default value will be overwritten:
EXAMPLE 03G07_UDO_default_args.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
opcode Defaults, iii, opj
ia, ib, ic xin
xout ia, ib, ic
endop
instr 1
ia, ib, ic Defaults
print ia, ib, ic
ia, ib, ic Defaults 10
print ia, ib, ic
ia, ib, ic Defaults 10, 100
print ia, ib, ic
ia, ib, ic Defaults 10, 100, 1000
print ia, ib, ic
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Overloading
Extending this example a bit shows an important feature of UDOs. If we have different input and/or output types, we can use the same name for the UDO; Csound will choose the appropriate version depending on the context. This is a well-known practice in many programming languages, called overloading a function.
In the simple example below, the i-rate and the k-rate versions of the UDO are both called Defaults. Depending on the variable types and the number of outputs, the correct version is used by Csound.
EXAMPLE 03G08_UDO_overloading.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
instr 1
ia, ib, ic Defaults
prints "ia = %d, ib = %d, ic = %d\n", ia, ib, ic
ia, ib, ic Defaults 10
prints "ia = %d, ib = %d, ic = %d\n", ia, ib, ic
ia, ib, ic Defaults 10, 100
prints "ia = %d, ib = %d, ic = %d\n", ia, ib, ic
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Prints:
ia = 0, ib = 1, ic = -1
ia = 10, ib = 1, ic = -1
ia = 10, ib = 100, ic = -1
ia = 10, ib = 100, ic = 1000
ka = 0, kb = 1, kc = 0.5, kd = -1
ka = 2, kb = 1, kc = 0.5, kd = -1
ka = 2, kb = 4, kc = 0.5, kd = -1
ka = 2, kb = 4, kc = 6.0, kd = -1
ka = 2, kb = 4, kc = 6.0, kd = 8
EXAMPLE 03G09_Recursive_UDO.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
amix Recursion 400, 8 ;8 partials with a base frequency of 400 Hz
aout linen amix, .01, p3, .1
outs aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Examples
We will focus here on some examples which will hopefully show the wide range of User Defined Opcodes. Some of them are adaptations of examples from previous chapters about the Csound syntax.
It may be more useful to have an opcode which works for both mono and stereo files as input. This is an ideal job for a UDO. Two versions are implemented here by overloading. FilePlay either returns one audio signal (if the file is stereo, it uses just the first channel), or it returns two audio signals (if the file is mono, it duplicates it to both channels). We can use the default arguments to make this opcode behave exactly like a tolerant diskin …
EXAMPLE 03G10_UDO_FilePlay.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aMono FilePlay "fox.wav", 1
outs aMono, aMono
endin
instr 2
aL, aR FilePlay "fox.wav", 1
outs aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 4
i 2 4 4
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03G11_UDO_rand_dev.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
kTrig metro 1 ;trigger signal once per second
kPerc linseg 0, p3, 100
TabDirtk giSine, kTrig, kPerc
aSig poscil .2, 400, giSine
out aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The next example permutes a series of numbers randomly each time it is called. For this purpose, one random element of the input array is taken and written to the first position of the output array. (More precisely, the random element is taken from a copy of the input array; this copy is always created by the UDO, so the original array is left untouched, as visible in the last line of the printout.) Then all elements which are “right of” this one random element are copied one position to the left. As a result, the previously chosen element is overwritten, and the number of values to read shrinks by one. This process is repeated until each old element has been placed at a (potentially) new position in the resulting output array.
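One possible i-time implementation of the described algorithm is sketched here. This is a reconstruction, not the example code itself; the opcode name ArrPermRnd and the variable names are assumptions.

```
opcode ArrPermRnd, i[], i[]
 iInArr[] xin
 iLen = lenarray(iInArr)
 ;work on a copy so that the input array stays untouched
 iCopy[] init iLen
 iOutArr[] init iLen
 iIndx = 0
 while iIndx < iLen do
  iCopy[iIndx] = iInArr[iIndx]
  iIndx += 1
 od
 iRemain = iLen                         ;number of values still to read
 iWrite = 0
 while iRemain > 0 do
  iRnd = int(random:i(0, iRemain-.0001)) ;choose one random element
  iOutArr[iWrite] = iCopy[iRnd]
  iShift = iRnd                         ;shift all elements right of it to the left
  while iShift < iRemain-1 do
   iCopy[iShift] = iCopy[iShift+1]
   iShift += 1
  od
  iRemain -= 1
  iWrite += 1
 od
 xout iOutArr
endop
```
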
EXAMPLE 03G12_ArrPermRnd.csd
<CsoundSynthesizer>
<CsOptions>
-nm0
</CsOptions>
<CsInstruments>
ksmps = 32
seed 0
</CsInstruments>
<CsScore>
i "Permut" 0 .01
i "Permut" + .
i "Permut" + .
i "Permut" + .
i "Permut" + .
i "Print" .05 .01
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03G13_UDO_Recursive_AddSynth.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
endop
</CsInstruments>
<CsScore>
i 1 0 300
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz
EXAMPLE 03G14_UDO_zdf_svf.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
opcode zdf_svf,aaa,aKK
;; output signals
alp init 0
ahp init 0
abp init 0
;;
kindx = 0
while kindx < ksmps do
khp = (ain[kindx] - (2*kR+kG) * kz1 - kz2) / (1 + (2*kR*kG) + (kG*kG))
kbp = kG * khp + kz1
klp = kG * kbp + kz2
; z1 register update
kz1 = kG * khp + kbp
kz2 = kG * kbp + klp
alp[kindx] = klp
ahp[kindx] = khp
abp[kindx] = kbp
kindx += 1
od
endop
opcode zdf_svf,aaa,aaa
iT = 1/sr
;; output signals
alp init 0
ahp init 0
abp init 0
;;
kindx = 0
while kindx < ksmps do
kR = aR[kindx]
; z1 register update
kz1 = kG * khp + kbp
kz2 = kG * kbp + klp
alp[kindx] = klp
ahp[kindx] = khp
abp[kindx] = kbp
kindx += 1
od
endop
instr 1
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by steven yi
03 H. MACROS
Macros within Csound provide a mechanism whereby a line or a block of code can be referenced
using a macro codeword. Whenever the user-defined macro codeword for that block of code is
subsequently encountered in a Csound orchestra or score it will be replaced by the code text con-
tained within the macro. This mechanism can be extremely useful in situations where a line or a
block of code will be repeated many times - if a change is required in the code that will be repeated,
it need only be altered once in the macro definition rather than having to be edited in each of the
repetitions.
Csound utilises a subtly different mechanism for orchestra and score macros so each will be con-
sidered in turn. There are also additional features offered by the macro system such as the ability
to create a macro that accepts arguments - which can be thought of as the main macro containing
sub-macros that can be repeated multiple times within the main macro - the inclusion of a block
of text contained within a completely separate file and other macro refinements.
It is important to realise that a macro can contain any text, including carriage returns, and that
Csound will be ignorant to its use of syntax until the macro is actually used and expanded else-
where in the orchestra or score. Macro expansion is a feature of the orchestra and score prepro-
cessor and is not part of the compilation itself.
Orchestra Macros
Macros are defined using the syntax:
#define NAME # replacement text #
NAME is the user-defined name that will be used to call the macro at some point later in the or-
chestra; it must begin with a letter but can then contain any combination of numbers and letters. A
limited range of special characters can be employed in the name. Apostrophes, hash symbols and
dollar signs should be avoided. replacement text, bounded by hash symbols will be the text that will
replace the macro name when later called. Remember that the replacement text can stretch over
several lines. A macro can be defined anywhere within the <CsInstruments> … </CsInstruments>
sections of a .csd file. A macro can be redefined or overwritten by reusing the same macro name
in another macro definition. Subsequent expansions of the macro will then use the new version.
To expand the macro later in the orchestra the macro name needs to be preceded with a $ symbol
thus:
$NAME
The following example illustrates the basic syntax needed to employ macros. The name of a sound
file is referenced twice in the score so it is defined as a macro just after the header statements.
Instrument 1 derives the duration of the sound file and instructs instrument 2 to play a note for this
duration. Instrument 2 plays the sound file. The score as defined in the <CsScore> … </CsScore>
section only lasts for 0.01 seconds but the event_i statement in instrument 1 will extend this for
the required duration. The sound file is a mono file so you can replace it with any other mono file.
EXAMPLE 03H01_Macros_basic.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 1
0dbfs = 1
instr 1
; use an expansion of the macro in deriving the duration of the sound file
idur filelen $SOUNDFILE
event_i "i",2,0,idur
endin
instr 2
; use another expansion of the macro in playing the sound file
a1 diskin2 $SOUNDFILE,1
out a1
endin
</CsInstruments>
<CsScore>
i 1 0 0.01
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
In more complex situations where we require slight variations, such as different constant values
or different sound files in each reuse of the macro, we can use a macro with arguments. A macro’s
arguments are defined as a list of sub-macro names within brackets after the name of the primary
macro with each macro argument being separated using an apostrophe as shown below.
#define NAME(Arg1'Arg2'Arg3...) # replacement text #
Arguments can be any text string permitted as Csound code; they should not be likened to opcode arguments, where each must conform to a certain type such as i, k or a. Macro arguments are subsequently referenced in the macro text using their names preceded by a $ symbol. When the main macro is called later in the orchestra, its arguments are then replaced with the values or strings required. The Csound Reference Manual states that up to five arguments are permitted,
but this still refers to an earlier implementation and in fact many more are actually permitted.
In the following example a 6 partial additive synthesis engine with a percussive character is defined
within a macro. Its fundamental frequency and the ratios of its six partials to this fundamental
frequency are prescribed as macro arguments. The macro is reused within the orchestra twice
to create two different timbres, it could be reused many more times however. The fundamental
frequency argument is passed to the macro as p4 from the score.
EXAMPLE 03H02_Macro_6partials.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 1
0dbfs = 1
instr 1 ; xylophone
; expand the macro with partial ratios that reflect those of a xylophone
; the fundamental frequency macro argument (the first argument)
; is passed as p4 from the score
$ADDITIVE_TONE(p4'1'3.932'9.538'16.688'24.566'31.147)
endin
instr 2 ; vibraphone
$ADDITIVE_TONE(p4'1'3.997'9.469'15.566'20.863'29.440)
endin
</CsInstruments>
<CsScore>
i 1 0 1 200
i 1 1 2 150
i 1 2 4 100
i 2 3 7 800
i 2 4 4 700
i 2 5 7 600
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
Score Macros
Score macros employ a similar syntax. Macros in the score can be used in situations where a long string of p-fields is likely to be repeated or, as in the next example, to define a palette of score patterns that repeat but with some variation such as transposition. In this example two
riffs are defined which each employ two macro arguments: the first to define when the riff will
begin and the second to define a transposition factor in semitones. These riffs are played back
using a bass guitar-like instrument using the wgpluck2 opcode. Remember that mathematical
expressions within the Csound score must be bound within square brackets [].
EXAMPLE 03H03_Score_macro.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 1
0dbfs = 1
</CsInstruments>
<CsScore>
; p4 = pitch as a midi note number
#define RIFF_1(Start'Trans)
#
i 1 [$Start ] 1 [36+$Trans]
i 1 [$Start+1 ] 0.25 [43+$Trans]
i 1 [$Start+1.25] 0.25 [43+$Trans]
i 1 [$Start+1.75] 0.25 [41+$Trans]
i 1 [$Start+2.5 ] 1 [46+$Trans]
i 1 [$Start+3.25] 1 [48+$Trans]
#
#define RIFF_2(Start'Trans)
#
i 1 [$Start ] 1 [34+$Trans]
i 1 [$Start+1.25] 0.25 [41+$Trans]
i 1 [$Start+1.5 ] 0.25 [43+$Trans]
i 1 [$Start+1.75] 0.25 [46+$Trans]
i 1 [$Start+2.25] 0.25 [43+$Trans]
i 1 [$Start+2.75] 0.25 [41+$Trans]
i 1 [$Start+3 ] 0.5 [43+$Trans]
i 1 [$Start+3.5 ] 0.25 [46+$Trans]
#
t 0 90
$RIFF_1(0 ' 0)
$RIFF_1(4 ' 0)
$RIFF_2(8 ' 0)
$RIFF_2(12'-5)
$RIFF_1(16'-5)
$RIFF_2(20'-7)
$RIFF_2(24' 0)
$RIFF_2(28' 5)
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
Score macros can themselves contain macros so that, for example, the above example could be
further expanded so that a verse, chorus structure could be employed where verses and choruses,
defined using macros, were themselves constructed from a series of riff macros.
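Such nesting might be sketched like this, reusing the riff macros from the example above. The macro names VERSE and CHORUS, and the start times passed to them, are invented here for illustration:

```
#define VERSE(Start)
#
$RIFF_1($Start' 0)
$RIFF_1($Start+4' 0)
$RIFF_2($Start+8' 0)
#
#define CHORUS(Start)
#
$RIFF_2($Start'-5)
$RIFF_2($Start+4' 5)
#
$VERSE(0)
$CHORUS(12)
$VERSE(20)
```

Note that the arithmetic such as $Start+4 is only evaluated inside the square brackets within the riff macro bodies.
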
UDOs and macros can both be used to reduce code repetition, and there are many situations where either could be used with equal justification, but each offers its own strengths. The strength of UDOs lies in their ability to be used just like an opcode with inputs and outputs, the ease with which they can be shared - between Csound projects and between Csound users - their ability to operate at a different k-rate to the rest of the orchestra, and the way they facilitate recursion. The fact that macro arguments are merely blocks of text, however, opens up new possibilities: unlike UDOs, macros can span several instruments. Of course, UDOs have no use in the Csound score, unlike macros.
Macros can also be used to simplify the creation of complex FLTK GUI where panel sections might
be repeated with variations of output variable names and location.
Csound’s orchestra and score macro system offers many additional refinements and this chapter
serves merely as an introduction to their basic use. To learn more it is recommended to refer to
the relevant sections of the Csound Reference Manual.
03 I. FUNCTIONAL SYNTAX
Functional syntax is very common in many programming languages. It takes the form of fun(),
where fun is any function which encloses its arguments in parentheses. Even in “old” Csound,
there existed some rudiments of this functional syntax in some mathematical functions, such as
sqrt(), log(), int(), frac(). For instance, the following code
iNum = 1.234
print int(iNum)
print frac(iNum)
would print:
instr 1: #i0 = 1.000
instr 1: #i1 = 0.234
Here the integer part and the fractional part of the number 1.234 are passed directly as an argument
to the print opcode, without needing to be stored at any point as a variable.
This alternative way of formulating code can now be used with many opcodes in Csound 6. (The main restriction is that it can only be used with opcodes which have only one output, not two or more.) First we shall look at some examples.
The traditional way of applying a fade and a sliding pitch (glissando) to a tone is something like
this:
EXAMPLE 03I01_traditional_syntax.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
kFade linseg 0, p3/2, 0.2, p3/2, 0
kSlide expseg 400, p3/2, 800, p3/2, 600
aTone poscil kFade, kSlide
out aTone
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
1. We create a line signal with the opcode linseg. It starts at zero, moves to 0.2 in half of the
instrument’s duration (p3/2), and moves back to zero for the second half of the instrument’s
duration. We store this signal in the variable kFade.
2. We create an exponential signal with the opcode expseg. It starts at 400, moves to 800
in half the instrument’s duration, and moves to 600 for the second half of the instrument’s
duration. We store this signal in the variable kSlide.
3. We create a sine audio signal with the opcode poscil. We feed in the signal stored in the
variable kFade as amplitude, and the signal stored in the variable kSlide as frequency input.
We store the audio signal in the variable aTone.
4. Finally, we write the audio signal to the output with the opcode out.
Each of these four lines can be considered as a “function call”, as we call the opcodes (functions)
linseg, expseg, poscil and out with certain arguments (input parameters). If we now transform this
example to functional syntax, we will avoid storing the result of a function call in a variable. Rather
we will feed the function and its arguments directly into the appropriate slot, by means of the fun()
syntax.
If we write the first line in functional syntax, it will look like this:
linseg(0, p3/2, 0.2, p3/2, 0)
EXAMPLE 03I02_functional_syntax_1.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
aTone poscil linseg(0,p3/2,.2,p3/2,0), expseg(400,p3/2,800,p3/2,600)
out aTone
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 03I03_functional_syntax_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
out(poscil(linseg(0,p3/2,.2,p3/2,0),expseg(400,p3/2,800,p3/2,600)))
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Let us assume we want to change the highest frequency in our example from 800 to a random
value between 700 and 1400 Hz, so that we hear a different movement for each tone. In this case,
we can simply write random(700, 1400):
EXAMPLE 03I04_functional_syntax_rate_1.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
out(poscil(linseg(0,p3/2,.2,p3/2,0),
expseg(400,p3/2,random(700,1400),p3/2,600)))
endin
</CsInstruments>
<CsScore>
r 5
i 1 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Declare your color: i, k or a?
But why is the random opcode here performing at i-rate, and not at k- or a-rate? This is, so to say, pure chance: it happens because in the Csound sources the i-rate variant of this opcode is written first. If the k-rate variant came first, the above code would fail.
So it is both clearer and actually required to explicitly declare at which rate a function is to be performed. This code declares that poscil runs at a-rate, linseg and expseg run at k-rate, and random runs at i-rate:
EXAMPLE 03I05_functional_syntax_rate_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
out(poscil:a(linseg:k(0, p3/2, 1, p3/2, 0),
expseg:k(400, p3/2, random:i(700, 1400), p3/2, 600)))
endin
</CsInstruments>
<CsScore>
r 5
i 1 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Rate declaration is done by simply specifying :a, :k or :i after the function name. It is good practice to include it all the time, to be clear about what is happening.
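For example, the random opcode exists at all three rates, and the suffix makes the intended rate explicit. A small sketch with arbitrary values:

```
instr 1
 iFq = random:i(400, 800)  ;one random value per note, chosen at initialization
 aJit = random:a(-3, 3)    ;a new random value every sample, as slight frequency jitter
 out(poscil:a(.2, iFq + aJit))
endin
```
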
fun() with UDOs
EXAMPLE 03I06_functional_syntax_udo.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 1
ksmps = 32
0dbfs = 1
instr 1
kArr[] fillarray 1, 2000, 2.8, 2000, 5.2, 2000, 8.2, 2000
aImp mpulse .3, 1
out FourModes(aImp, 200, kArr)
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, based on an example of iain mccurdy
Besides the ability of functional expressions to abbreviate code, this way of writing Csound code coincides with a convention shared by many programming languages. This final example does exactly the same as the previous one, but in a way that some programmers will find clearer and more familiar:
EXAMPLE 03I07_functional_syntax_udo_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
ksmps = 32
0dbfs = 1
instr 1
kArr[] = fillarray(1, 2000, 2.8, 2000, 5.2, 2000, 8.2, 2000)
aImp = mpulse:a(.3, 1)
aOut = FourModes(aImp, randomh:k(200,195,1), kArr)
out(aOut, aOut)
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, based on an example of iain mccurdy
04 A. ADDITIVE SYNTHESIS
Jean Baptiste Joseph Fourier (1768-1830) claimed in his treatise Théorie analytique de la chaleur (1822) that any periodic function can be described perfectly as a sum of weighted sine waves. The frequencies of these harmonics are integer multiples of the fundamental frequency.
As we can easily produce sine waves of different amplitudes in digital sound synthesis, Fourier Synthesis or Additive Synthesis may seem to be the universal key for creating interesting sounds. But not all sounds are periodic: noise, a very important part of the sounding world, represents the other pole, being essentially non-periodic. And dealing with many single sine waves means dealing with a lot of data and considerable demands.
Nonetheless, additive synthesis can provide unusual and interesting sounds and the power of mod-
ern computers and their ability to manage data in a programming language offers new dimensions
of working with this old technique. As with most things in Csound there are several ways to go
about implementing additive synthesis. We shall endeavour to introduce some of them and to
allude to how they relate to different programming paradigms.
- For each sine, there will be a frequency and an amplitude with an envelope.
  - The frequency will usually be a constant value, but it can be varied, and in fact natural sounds typically exhibit slight modulations of partial frequencies.
  - The amplitude must have at least a simple envelope such as the well-known ADSR, but more complex methods of continuously altering the amplitude will result in a livelier sound.
  - The total number of sinusoids. A sound which consists of just three sinusoids will most likely sound poorer than one which employs 100.
  - The frequency ratios of the sine generators. For a classic harmonic spectrum, the multipliers of the sinusoids are 1, 2, 3, … (If your first sine is 100 Hz, the others will be 200, 300, 400, … Hz.) An inharmonic or noisy spectrum will probably have no simple integer ratios. These frequency ratios are chiefly responsible for our perception of timbre.
  - The base frequency is the frequency of the first partial. If the partials are exhibiting a harmonic ratio, this frequency (in the example given 100 Hz) is also the overall perceived pitch.
  - The amplitude ratios of the sinusoids. This is also very important in determining the resulting timbre of a sound. If the higher partials are relatively strong, the sound will be perceived as being more "brilliant"; if the higher partials are soft, then the sound will be perceived as being dark and soft.
  - The duration ratios of the sinusoids. In simple additive synthesis, all single sines have the same duration, but it will be more interesting if they differ - this will usually relate to the durations of the envelopes: if the envelopes of different partials vary, some partials will die away faster than others.
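These parameters can be made concrete with a small Python sketch (the function name and data are ours, chosen for illustration) that bundles base frequency, frequency ratios, amplitude ratios and duration ratios into one list of partials:

```python
def make_partials(base_freq, freq_ratios, amp_ratios, base_dur, dur_ratios):
    """Return one (frequency, amplitude, duration) triple per sinusoid."""
    return [(base_freq * f, a, base_dur * d)
            for f, a, d in zip(freq_ratios, amp_ratios, dur_ratios)]

# a classic harmonic spectrum: multipliers 1, 2, 3, 4 with 1/n amplitudes
harmonic = make_partials(100, [1, 2, 3, 4], [1, 1/2, 1/3, 1/4], 5, [1, 0.8, 0.6, 0.4])

# an inharmonic spectrum simply uses non-integer frequency ratios
inharmonic = make_partials(100, [1, 1.02, 4.8], [1, 0.5, 0.3], 5, [1, 1, 1])
```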
It is not always the aim of additive synthesis to imitate natural sounds, but the task of first analysing
and then attempting to imitate a sound can prove to be very useful when studying additive synthe-
sis. This is what a guitar note looks like when spectrally analysed:
Figure 27.1: Spectral analysis of a guitar tone in time (courtesy of W. Fohl, Hamburg)
Each partial possesses its own frequency movement and duration. We may or may not be able
to achieve this successfully using additive synthesis. We will begin with some simple sounds and
consider how to go about programming this in Csound. Later we will look at some more complex
sounds and the more advanced techniques required to synthesize them.
Different Methods for Additive Synthesis

The simplest approach is to run several sine oscillators in parallel inside one instrument and add their outputs together. In the following example, instrument 1 demonstrates the creation of a harmonic spectrum, and instrument 2 an inharmonic one. Both instruments share the same amplitude multipliers: 1, 1/2, 1/3, 1/4, … and receive the base frequency in Csound's pitch notation (octave.semitone) and the main amplitude in dB.
EXAMPLE 04A01_AddSynth_simple.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
; pch amp
i 1 0 5 8.00 -13
i 1 3 5 9.00 -17
i 1 5 8 9.02 -15
i 1 6 9 7.01 -15
i 1 7 10 6.00 -13
s
i 2 0 5 8.00 -13
i 2 3 5 9.00 -17
i 2 5 8 9.02 -15
i 2 6 9 7.01 -15
i 2 7 10 6.00 -13
</CsScore>
</CsoundSynthesizer>
;example by Andrés Cabrera
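The score conversions used here can be reproduced numerically. Csound's pch notation encodes octave.semitone (8.00 is middle C, 8.09 is A4 = 440 Hz), and dB values relative to full scale map exponentially to linear amplitudes. A Python sketch of both conversions (an illustration of the principle; Csound's own implementation may differ in details):

```python
def cpspch(pch):
    """Convert octave.semitone pitch-class notation to Hz.
    8.00 is middle C; 8.09 (A4) gives 440 Hz."""
    octave = int(pch)
    semitone = round((pch - octave) * 100)
    return 440.0 * 2 ** (octave + semitone / 12 - 8.75)

def ampdbfs(db):
    """Convert decibels relative to full scale (0 dB = 1.0) to linear amplitude."""
    return 10 ** (db / 20)
```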
The parameters iampfactor (the relative amplitude of a partial) and ifreqfactor (the frequency multiplier) are transferred to the score as p-fields. The next version of the previous instrument simplifies the instrument code and defines these variable values as score parameters:
EXAMPLE 04A02_AddSynth_score.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iBaseFreq = cpspch(p4)
iFreqMult = p5 ;frequency multiplier
iBaseAmp = ampdbfs(p6)
iAmpMult = p7 ;amplitude multiplier
iFreq = iBaseFreq * iFreqMult
</CsInstruments>
<CsScore>
; freq freqmult amp ampmult
i 1 0 7 8.09 1 -10 1
i . . 6 . 2 . [1/2]
i . . 5 . 3 . [1/3]
i . . 4 . 4 . [1/4]
i . . 3 . 5 . [1/5]
i . . 3 . 6 . [1/6]
i . . 3 . 7 . [1/7]
s
i 1 0 6 8.09 1.5 -10 1
i . . 4 . 3.1 . [1/3]
i . . 3 . 3.4 . [1/6]
i . . 4 . 4.2 . [1/9]
i . . 5 . 6.1 . [1/12]
i . . 6 . 6.3 . [1/15]
</CsScore>
</CsoundSynthesizer>
;example by Andrés Cabrera and Joachim Heintz
You might ask: “Okay, where is the simplification? There are even more lines than before!” This is
true, but this still represents better coding practice. The main benefit now is flexibility. Now we are
able to realise any number of partials using the same instrument, with any amplitude, frequency
and duration ratios. Using the Csound score abbreviations (for instance a dot for repeating the
previous value in the same p-field), you can make great use of copy-and-paste, and focus just on
what is changing from line to line.
Note that you are now calling one instrument multiple times in the creation of a single additive synthesis note; in fact, each instance of the instrument contributes just one partial to the additive tone. Calling multiple instances of one instrument in this way also represents good practice in Csound coding. We will discuss later how this end can be achieved in a more elegant way.
Before we continue, let us return to the first example and discuss a classic and abbreviated method
for playing a number of partials. As we mentioned at the beginning, Fourier stated that any periodic
oscillation can be described using a sum of simple sinusoids. If the single sinusoids are static
(with no individual envelopes, durations or frequency fluctuations), the resulting waveform will be
similarly static.
Above you see four sine waves, each with fixed frequency and amplitude relationships. These are
then mixed together with the resulting waveform illustrated at the bottom (Sum). This then begs
the question: why not simply calculate this composite waveform first, and then read it with just a
single oscillator?
This is what some Csound GEN routines do. They compose the resulting shape of the periodic
waveform, and store the values in a function table. GEN10 can be used for creating a waveform
consisting of harmonically related partials. Its form begins with the common GEN routine p-fields
<table number>, <creation time>, <size in points>, <GEN number>
following which you just have to define the relative strengths of the harmonics. GEN09 is more
complex and allows you to also control the frequency multiplier and the phase (0-360°) of each
partial. Thus we are able to reproduce the first example in a shorter (and computationally faster)
form:
EXAMPLE 04A03_AddSynth_GEN.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iBasFreq = cpspch(p4)
iTabFreq = p7 ;base frequency of the table
iBasFreq = iBasFreq / iTabFreq
iBaseAmp = ampdb(p5)
iFtNum = p6
aOsc poscil iBaseAmp, iBasFreq, iFtNum
aEnv linen aOsc, p3/4, p3, p3/4
outs aEnv, aEnv
endin
</CsInstruments>
<CsScore>
; pch amp table table base (Hz)
i 1 0 5 8.00 -10 1 1
i . 3 5 9.00 -14 . .
i . 5 8 9.02 -12 . .
i . 6 9 7.01 -12 . .
i . 7 10 6.00 -10 . .
s
i 1 0 5 8.00 -10 2 100
i . 3 5 9.00 -14 . .
i . 5 8 9.02 -12 . .
i . 6 9 7.01 -12 . .
i . 7 10 6.00 -10 . .
</CsScore>
</CsoundSynthesizer>
;example by Andrés Cabrera and Joachim Heintz
You may have noticed that to store a waveform in which the partials are not harmonically related, the table must be constructed in a slightly special way (see table giNois). If the frequency multipliers in our first example started with 1 and 1.02, the resulting period is actually very long. If the oscillator was playing at 100 Hz, the tone it would produce would actually contain partials at 100 Hz and 102 Hz. So you need 100 cycles from the 1.00 multiplier and 102 cycles from the 1.02 multiplier to complete one period of the composite waveform. In other words, we have to create a table which contains respectively 100 and 102 periods, instead of 1 and 1.02. Therefore the table frequencies will not be related to 1 as usual but instead to 100. This is the reason that we have to introduce a new parameter, iTabFreq, for this purpose. (N.B. In this simple example we could actually reduce the ratios to 50 and 51, as 100 and 102 share a common divisor of 2.)
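This reduction can be automated. Given rational frequency multipliers, the following Python sketch computes how many cycles of each partial one table period must contain, already reduced by their greatest common divisor:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def table_cycles(multipliers):
    """Integer cycle counts per table period for rational frequency
    multipliers, reduced by their greatest common divisor."""
    fracs = [Fraction(str(m)) for m in multipliers]
    # least common multiple of all denominators
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in fracs), 1)
    counts = [int(f * lcm) for f in fracs]
    g = reduce(gcd, counts)
    return [c // g for c in counts]

# multipliers 1 and 1.02 need 100 and 102 cycles, reducible to 50 and 51
print(table_cycles([1, 1.02]))   # [50, 51]
```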
This method of composing waveforms can also be used for generating four standard waveform shapes typically encountered in vintage synthesizers. An impulse wave can be created by adding a number of harmonics of the same strength. A sawtooth wave has the amplitude multipliers 1, 1/2, 1/3, … for the harmonics. A square wave has the same multipliers, but just for the odd harmonics. A triangle can be calculated as 1 divided by the square of the odd partials, with alternating positive and negative values. The next example creates function tables with just the first ten partials for each of these waveforms.
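The four recipes just described can be generated programmatically. This Python sketch (our own helper, for illustration) returns the (harmonic number, relative strength) pairs that would be passed to a GEN routine:

```python
def waveform_partials(shape, n=10):
    """Return (harmonic number, relative strength) pairs for the first
    n partials of the four classic waveform recipes."""
    if shape == "impulse":                 # all harmonics, equal strength
        return [(k, 1.0) for k in range(1, n + 1)]
    if shape == "saw":                     # 1, 1/2, 1/3, ...
        return [(k, 1.0 / k) for k in range(1, n + 1)]
    if shape == "square":                  # odd harmonics only, 1/n
        return [(k, 1.0 / k) for k in range(1, 2 * n, 2)]
    if shape == "triangle":                # odd harmonics, +-1/n^2
        return [(k, (-1.0) ** i / k ** 2)
                for i, k in enumerate(range(1, 2 * n, 2))]
    raise ValueError(shape)
```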
EXAMPLE 04A04_Standard_waveforms.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
asig poscil .2, 457, p4
   outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 3 1
i 1 4 3 2
i 1 8 3 3
i 1 12 3 4
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Let us return to the second instrument (04A02.csd) which had already made use of some abstrac-
tions and triggered one instrument instance for each partial. This was done in the score, but now
we will trigger one complete note in one score line, not just one partial. The first step is to assign
the desired number of partials via a score parameter. The next example triggers any number of
partials using this one value:
EXAMPLE 04A05_Flexible_number_of_partials.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
1 This term is used here in a general manner. There is also a Csound opcode subinstr, which has some more specific meanings.
</CsInstruments>
<CsScore>
; number of partials
i 1 0 3 10
i 1 3 3 20
i 1 6 3 2
</CsScore>
</CsoundSynthesizer>
;Example by joachim heintz
This instrument can easily be transformed to be played via a MIDI keyboard. In the next example, the MIDI key velocity is mapped to the number of synthesized partials, implementing a simple brightness control.
EXAMPLE 04A06_Play_it_with_Midi.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -Ma
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz
Although this instrument is rather primitive it is useful to be able to control the timbre in this way
using key velocity. Let us continue to explore some other methods of creating parameter variation
in additive synthesis.
Let us start with some random deviations in our subinstrument. The following parameters can be affected:

- The frequency of each partial can be slightly detuned. The range of this possible maximum detuning can be set in cents (100 cents = 1 semitone).
- The amplitude of each partial can be altered relative to its default value. This alteration can be measured in decibels (dB).
- The duration of each partial can be made longer or shorter than the default value. Let us define this deviation as a percentage. If the expected duration is five seconds, a maximum deviation of 100% will mean a resultant value of between half the duration (2.5 sec) and double the duration (10 sec).
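These three deviation types translate into simple numeric mappings. A Python sketch follows (the exponential duration scaling matches the half-to-double range described above; the function names are ours):

```python
import random

def detune(freq, max_cents):
    """Random detuning within +-max_cents (100 cents = 1 semitone)."""
    return freq * 2 ** (random.uniform(-max_cents, max_cents) / 1200)

def vary_amp(amp, max_db):
    """Random level change within +-max_db decibels."""
    return amp * 10 ** (random.uniform(-max_db, max_db) / 20)

def vary_dur(dur, max_percent):
    """Random duration change: 100% deviation means anything between
    half and double the original duration."""
    return dur * 2 ** (random.uniform(-max_percent, max_percent) / 100)
```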
The following example demonstrates the effect of these variations. As a base, and as a reference to its author, we take as our starting point the bell-like sound created by Jean-Claude Risset in his Sound Catalogue.2
EXAMPLE 04A07_Risset_variations.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
2 Jean-Claude Risset, Introductory Catalogue of Computer Synthesized Sounds (1969), cited after Dodge/Jerse, Computer Music, New York/London 1985, p. 94
seed 0
</CsInstruments>
<CsScore>
; frequency amplitude duration
; deviation deviation deviation
; in cent in dB in %
;;unchanged sound (twice)
r 2
i 1 0 5 0 0 0
s
</CsScore>
</CsoundSynthesizer>
In a MIDI-triggered descendant of this instrument we could, as one of many possible options, vary the amount of possible random variation according to the key velocity, so that a key pressed softly plays the bell-like sound as described by Risset, but as a key is struck with increasing force the sound produced will be increasingly altered.
EXAMPLE 04A08_Risset_played_by_Midi.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
ifqmult = giFqs[indx]
ifreq = ibasfreq * ifqmult
iampmult = giAmps[indx]
iamp = iampmult / 20 ;scale
event_i "i", 10, 0, 3, ifreq, iamp, ifqdev, iampdev, idurdev
indx += 1
od
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Whether you can play examples like this in realtime will depend on the power of your computer.
Have a look at chapter 2D (Live Audio) for tips on getting the best possible performance from your
Csound orchestra.
The next example demonstrates this by transforming the Risset bell code (04A07) to this approach. The coding style is more condensed here, so some comments are added after the code.
EXAMPLE 04A09_risset_bell_rec_udo.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
opcode AddSynth,a,i[]i[]iooo
/* iFqs[], iAmps[]: arrays with frequency ratios and amplitude multipliers
iBasFreq: base frequency (hz)
iPtlIndex: partial index (first partial = index 0)
iFreqDev, iAmpDev: maximum frequency (cent) and amplitude (db) deviation */
iFqs[], iAmps[], iBasFreq, iPtlIndx, iFreqDev, iAmpDev xin
iFreq = iBasFreq * iFqs[iPtlIndx] * cent(rnd31:i(iFreqDev,0))
iAmp = iAmps[iPtlIndx] * ampdb(rnd31:i(iAmpDev,0))
aPartial poscil iAmp, iFreq
if iPtlIndx < lenarray(iFqs)-1 then
aPartial += AddSynth(iFqs,iAmps,iBasFreq,iPtlIndx+1,iFreqDev,iAmpDev)
endif
xout aPartial
endop
instr Risset_Bell
ibasfreq = p4
iamp = ampdb(p5)
ifqdev = p6 ;maximum freq deviation in cents
iampdev = p7 ;maximum amp deviation in dB
aRisset AddSynth giFqs, giAmps, ibasfreq, 0, ifqdev, iampdev
aRisset *= transeg:a(0, .01, 0, iamp/10, p3-.01, -10, 0)
out aRisset, aRisset
endin
instr PlayTheBells
iMidiPitch random 60,70
schedule("Risset_Bell",0,random:i(2,8),mtof:i(iMidiPitch),
random:i(-30,-10),30,6)
if p4 > 0 then
schedule("PlayTheBells",random:i(1/10,1/4),1,p4-1)
endif
endin
</CsInstruments>
<CsScore>
; base db frequency amplitude
; freq deviation deviation
; in cent in dB
r 2 ;unchanged sound
i 1 0 5 400 -6 0 0
r 2 ;variations in frequency
i 1 0 5 400 -6 50 0
r 2 ;variations in amplitude
i 1 0 5 400 -6 0 10
s
</CsScore>
</CsoundSynthesizer>
Some comments:

- Lines 12-17: The main inputs are the array with the frequencies iFqs[], the array with the amplitudes iAmps[], and the base frequency iBasFreq. The partial index iPtlIndx is zero by default, as are the possible frequency and amplitude deviations of each partial.
- Lines 18-19: The appropriate frequency and amplitude multiplier is selected from the arrays as iFqs[iPtlIndx] and iAmps[iPtlIndx]. The deviations are calculated for each partial by the rnd31 opcode, a bipolar random generator which by default seeds from the system clock.
- Lines 21-23: The recursion is performed if this is not the last partial. For the Risset bell this means: partials 0, 1, 2, … are called until partial index 10. As index 10 is not smaller than the length of the frequency array (= 11) minus 1, the recursion then stops.
- Line 37: The envelope is applied to the sum of all partials (again in functional style, see chapter 03 I), as we don't use individual durations here.
- Lines 41-47: The PlayTheBells instrument also uses recursion. It starts with p4=50 and calls the next instance of itself with p4=49, which in turn will call the next instance with p4=48, until 0 has been reached. The Risset_Bell instrument is scheduled with random values for duration, pitch and volume.
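The recursive scheme of the AddSynth UDO can be mirrored in a few lines of Python (a sketch of the control flow, not of Csound's audio processing): each call produces one partial and adds the result of the next recursion level until the last array index is reached.

```python
import math

def add_synth(freq_ratios, amp_mults, base_freq, t, idx=0):
    """Recursively sum sine partials at time t, one partial per call."""
    sample = amp_mults[idx] * math.sin(2 * math.pi * base_freq * freq_ratios[idx] * t)
    if idx < len(freq_ratios) - 1:
        sample += add_synth(freq_ratios, amp_mults, base_freq, t, idx + 1)
    return sample
```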
Csound Opcodes for Additive Synthesis

In the following example a 100 Hz tone is created, in which the number of partials it contains rises from 1 to 20 across its 8 second duration. A spectrogram/sonogram displays how this manifests spectrally. A linear frequency scale is employed in the spectrogram so that harmonic partials appear equally spaced.
EXAMPLE 04A10_gbuzz.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh line 1, p3, 20 ; number of harmonics
klh = 1 ; lowest harmonic
kmul = 1 ; amplitude coefficient multiplier
asig gbuzz 1, 100, knh, klh, kmul, gicos
outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 8
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The total number of partials only reaches 19 because the line function only reaches 20 at the very
conclusion of the note.
In the next example the number of partials contained within the tone remains constant but the
partial number of the lowest partial rises from 1 to 20.
EXAMPLE 04A11_gbuzz_partials_rise.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh = 20
klh line 1, p3, 20
kmul = 1
asig gbuzz 1, 100, knh, klh, kmul, gicos
outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 8
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
In the spectrogram it can be seen how, as lowermost partials are removed, additional partials are
added at the top of the spectrum. This is because the total number of partials remains constant
at 20.
In the final gbuzz example the amplitude coefficient multiplier rises from 0 to 2. It can be heard
(and seen in the spectrogram) how, when this value is zero, emphasis is on the lowermost partial
and when this value is 2, emphasis is on the uppermost partial.
EXAMPLE 04A12_gbuzz_amp_coeff_rise.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; a cosine wave
gicos ftgen 0, 0, 2^10, 11, 1
instr 1
knh = 20
klh = 1
kmul line 0, p3, 2
asig gbuzz 1, 100, knh, klh, kmul, gicos
outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 8
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
buzz is a simplified version of gbuzz with fewer parameters – it does not provide for modulation of the lowest partial number and amplitude coefficient multiplier.
GEN11 creates a function table waveform using the same parameters as gbuzz. If a gbuzz tone is required but no performance-time modulation of its parameters is needed, GEN11 may provide a more efficient option. GEN11 also opens the possibility of using its waveforms in a variety of other opcodes. gbuzz, buzz and GEN11 may also prove useful as a source for subtractive synthesis.
hsboscil
The opcode hsboscil offers an interesting method of additive synthesis in which all partials are spaced an octave apart. Whilst this may at first seem limiting, it does offer simple means for morphing the precise make-up of its spectrum. It can be thought of as producing a sound spectrum that extends infinitely above and below the base frequency. Rather than sounding all of the resultant partials simultaneously, a window (typically a Hanning window) is placed over the spectrum, masking it so that only one or several of these partials sound at any one time. The user can shift the position of this window up or down the spectrum at k-rate, and this introduces the possibility of spectral morphing. hsboscil refers to this control as kbrite. The width of the window can be specified (but only at i-time) using its iOctCnt parameter. The entire spectrum can also be shifted up or down, independent of the location of the masking window, using the ktone parameter, which can be used to create a Risset glissando-type effect. The sense of the interval of an octave between partials tends to dominate, but this can be undermined through the use of frequency shifting or by using a waveform other than a sine wave as the source waveform for each partial.
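The masking idea can be sketched numerically. The following Python function is an illustration of the principle only, not hsboscil's actual algorithm: it weights a handful of octave-spaced partials with a Hanning-shaped window whose centre is moved by a brightness value.

```python
import math

def octave_partial_weights(brightness, oct_count=5):
    """Weights for oct_count octave-spaced partials, masked by a Hanning
    window centred 'brightness' octaves above the first partial."""
    weights = []
    for i in range(oct_count):
        # position of partial i inside the window, 0..1; centre is at 0.5
        pos = (i - brightness) / oct_count + 0.5
        if 0.0 <= pos <= 1.0:
            weights.append(0.5 - 0.5 * math.cos(2 * math.pi * pos))
        else:
            weights.append(0.0)
    return weights

# with brightness 0, the window centre sits on the first partial
print(octave_partial_weights(0)[0])   # 1.0
```

Raising the brightness value slides the emphasis smoothly up through the octaves, which is the spectral morphing described above.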
In the next example, instrument 1 demonstrates the basic sound produced by hsboscil whilst
randomly modulating the location of the masking window (kbrite) and the transposition control
(ktone). Instrument 2 introduces frequency shifting (through the use of the hilbert opcode) which
adds a frequency value to all partials thereby warping the interval between partials. Instrument 3
employs a more complex waveform (pseudo-inharmonic) as the source waveform for the partials.
EXAMPLE 04A13_hsboscil.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
</CsInstruments>
<CsScore>
i 1 0 14
i 2 15 14
i 3 30 14
</CsScore>
</CsoundSynthesizer>
;example by iain mccurdy
Additive synthesis can still be an exciting way of producing sounds. It offers the user a level
of control that other methods of synthesis simply cannot match. It also provides an essential
workbench for learning about acoustics and spectral theory as related to sound.
04 B. SUBTRACTIVE SYNTHESIS
Subtractive synthesis is, at least conceptually, the inverse of additive synthesis in that instead
of building complex sound through the addition of simple cellular materials such as sine waves,
subtractive synthesis begins with a complex sound source, such as white noise or a recorded
sample, or a rich waveform, such as a sawtooth or pulse, and proceeds to refine that sound by
removing partials or entire sections of the frequency spectrum through the use of audio filters.
The creation of dynamic spectra (an arduous task in additive synthesis) is relatively simple in subtractive synthesis, as all that is required is to modulate a few parameters pertaining to any filters being used. Working with the intricate precision that is possible with additive synthesis may not be as easy with subtractive synthesis, but sounds can be created much more instinctively than is possible with additive or modulation synthesis.
A Csound Two-Oscillator Synthesizer

Each oscillator can describe either a sawtooth, PWM waveform (i.e. square - pulse etc.) or white
noise and each oscillator can be transposed in octaves or in cents with respect to a fundamental
pitch. The two oscillators are mixed and then passed through a 4-pole / 24dB per octave resonant
lowpass filter. The opcode moogladder is chosen on account of its authentic vintage character.
The cutoff frequency of the filter is modulated using an ADSR-style (attack-decay-sustain-release)
envelope facilitating the creation of dynamic, evolving spectra. Finally the sound output of the filter
is shaped by an ADSR amplitude envelope. Waveforms such as sawtooths and square waves offer
rich sources for subtractive synthesis as they contain a lot of sound energy across a wide range of
frequencies - it could be said that white noise offers the richest sound source containing, as it does,
energy at every frequency. A sine wave would offer a very poor source for subtractive synthesis as
it contains energy at only one frequency. Other Csound opcodes that might provide rich sources
are the buzz and gbuzz opcodes and the GEN09, GEN10, GEN11 and GEN19 GEN routines.
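The ADSR-style envelope mentioned above can be sketched as a simple piecewise function. This is a linear Python illustration of the idea; the actual example uses exponential segments (via expsegr), so the exact curve differs.

```python
def adsr(t, att, dec, sus_level, rel, dur):
    """Linear ADSR value at time t for a note of length dur: rise to 1
    over att, fall to sus_level over dec, hold, then release to 0 over
    the final rel seconds."""
    if t < 0 or t >= dur:
        return 0.0
    if t < att:
        return t / att
    if t < att + dec:
        return 1.0 + (sus_level - 1.0) * (t - att) / dec
    if t < dur - rel:
        return sus_level
    return sus_level * (dur - t) / rel
```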
As this instrument is suggestive of a performance instrument controlled via MIDI, this has been
partially implemented. Through the use of Csound’s MIDI interoperability opcode, mididefault, the
instrument can be operated from the score or from a MIDI keyboard. If a MIDI note is received,
suitable default p-field values are substituted for the missing p-fields. In the next example MIDI
controller 1 will be used to control the global cutoff frequency for the filter.
EXAMPLE 04B01_Subtractive_Midi.csd
<CsoundSynthesizer>
<CsOptions>
-odac -Ma
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4
nchnls = 2
0dbfs = 1
instr 1
iNum notnum ;read in midi note number
iCF ctrl7 1,1,0.1,14 ;read in midi controller 1
;oscillator 1
;if type is sawtooth or square...
if iType1==1||iType1==2 then
;...derive vco2 'mode' from waveform type
iMode1 = (iType1=1?0:2)
aSig1 vco2 kAmp1,iCPS*kOct1*kTune1,iMode1,kPW1;VCO audio oscillator
else ;otherwise...
aSig1 noise kAmp1, 0.5 ;...generate white noise
endif
;mix oscillators
aMix sum aSig1,aSig2
;lowpass filter
kFiltEnv expsegr 0.0001,iFAtt,iCPS*iCF,iFDec,iCPS*iCF*iFSus,iFRel,0.0001
aOut moogladder aMix, kFiltEnv, kRes
;amplitude envelope
aAmpEnv expsegr 0.0001,iAAtt,1,iADec,iASus,iARel,0.0001
aOut = aOut*aAmpEnv
outs aOut,aOut
endin
</CsInstruments>
<CsScore>
;p4 = oscillator frequency
;oscillator 1
;p5 = amplitude
;p6 = type (1=sawtooth,2=square-PWM,3=noise)
;p7 = PWM (square wave only)
;p8 = octave displacement
f 0 3600
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Simulation of Timbres from a Noise Source

22 reson filters are used for the bandpass filters on account of their ability to ring and resonate as
their bandwidth narrows. Another reason for this choice is the relative CPU economy of the reson filter, a not insignificant concern as so many of them are used. The frequency ratios between the 22 parallel filters are derived from analysis of a hand bell; the data was found in the appendix of the Csound manual here. Obviously, with so much repetition of similar code, some sort of abstraction would be a good idea (perhaps through a UDO or by using a macro), but here, and for the sake of clarity, it is left unabstracted.

1 It has been shown in the chapter about additive synthesis how this quality can be applied to additive synthesis by slight random deviations.
In addition to white noise, noise impulses are also used as a sound source (via the mpulse opcode). The instrument will slowly and randomly crossfade between these two sound sources.
A lowpass and highpass filter are inserted in series before the parallel bandpass filters to shape the
frequency spectrum of the source sound. Csound’s butterworth filters butlp and buthp are chosen
for this task on account of their steep cutoff slopes and minimal ripple at the cutoff frequency.
The outputs of the reson filters are sent alternately to the left and right outputs in order to create
a broad stereo effect.
This example makes extensive use of the rspline opcode, a generator of random spline functions,
to slowly undulate the many input parameters. The orchestra is self generative in that instrument 1
repeatedly triggers note events in instrument 2 and the extensive use of random functions means
that the results will continually evolve as the orchestra is allowed to perform.
EXAMPLE 04B02_Subtractive_timbres.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1
; left and right channel mixes are created using alternate filter outputs.
; This shall create a stereo effect.
aMixL sum a1*kAmp1,a3*kAmp3,a5*kAmp5,a7*kAmp7,a9*kAmp9,a11*kAmp11,\
a13*kAmp13,a15*kAmp15,a17*kAmp17,a19*kAmp19,a21*kAmp21
aMixR sum a2*kAmp2,a4*kAmp4,a6*kAmp6,a8*kAmp8,a10*kAmp10,a12*kAmp12,\
a14*kAmp14,a16*kAmp16,a18*kAmp18,a20*kAmp20,a22*kAmp22
</CsInstruments>
<CsScore>
i 1 0 3600 ; instrument 1 (note generator) plays for 1 hour
e
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy
Vowel-Sound Emulation Using Bandpass Filtering
Reson filters are again used, but butbp and others could be equally valid choices.
The formant data is stored in GEN07 linear break-point function tables. As this data is read by k-rate line functions, we can interpolate and therefore morph between different vowel sounds during a note.
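This morphing amounts to interpolating between formant-frequency lists. In Python (a sketch using the first two bass formants from the giBF tables in the example as data):

```python
def morph_formants(vowel_a, vowel_b, mix):
    """Linearly interpolate between two formant lists (mix 0 -> a, 1 -> b)."""
    return [fa + (fb - fa) * mix for fa, fb in zip(vowel_a, vowel_b)]

# bass 'a' and 'e': first two formant frequencies from the giBF tables
bass_a = [600, 1040]
bass_e = [400, 1620]
print(morph_formants(bass_a, bass_e, 0.5))   # [500.0, 1330.0]
```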
The source sound for the filters comes from either a pink noise generator or a pulse waveform. The pink noise source could be used if the emulation is to be that of just the breath, whereas the pulse waveform provides a decent approximation of the buzzing of the human vocal cords. This instrument can, however, morph continuously between these two sources.
EXAMPLE 04B03_Subtractive_vowels.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1
;BASS
giBF1 ftgen 0, 0, -5, -2, 600, 400, 250, 400, 350
giBF2 ftgen 0, 0, -5, -2, 1040, 1620, 1750, 750, 600
giBF3 ftgen 0, 0, -5, -2, 2250, 2400, 2600, 2400, 2400
giBF4 ftgen 0, 0, -5, -2, 2450, 2800, 3050, 2600, 2675
giBF5 ftgen 0, 0, -5, -2, 2750, 3100, 3340, 2900, 2950
;TENOR
giTF1 ftgen 0, 0, -5, -2, 650, 400, 290, 400, 350
giTF2 ftgen 0, 0, -5, -2, 1080, 1700, 1870, 800, 600
giTF3 ftgen 0, 0, -5, -2, 2650, 2600, 2800, 2600, 2700
giTF4 ftgen 0, 0, -5, -2, 2900, 3200, 3250, 2800, 2900
giTF5 ftgen 0, 0, -5, -2, 3250, 3580, 3540, 3000, 3300
;COUNTER TENOR
giCTF1 ftgen 0, 0, -5, -2, 660, 440, 270, 430, 370
giCTF2 ftgen 0, 0, -5, -2, 1120, 1800, 1850, 820, 630
giCTF3 ftgen 0, 0, -5, -2, 2750, 2700, 2900, 2700, 2750
giCTF4 ftgen 0, 0, -5, -2, 3000, 3000, 3350, 3000, 3000
giCTF5 ftgen 0, 0, -5, -2, 3350, 3300, 3590, 3300, 3400
;ALTO
giAF1 ftgen 0, 0, -5, -2, 800, 400, 350, 450, 325
giAF2 ftgen 0, 0, -5, -2, 1150, 1600, 1700, 800, 700
giAF3 ftgen 0, 0, -5, -2, 2800, 2700, 2700, 2830, 2530
giAF4 ftgen 0, 0, -5, -2, 3500, 3300, 3700, 3500, 2500
giAF5 ftgen 0, 0, -5, -2, 4950, 4950, 4950, 4950, 4950
;SOPRANO
giSF1 ftgen 0, 0, -5, -2, 800, 350, 270, 450, 325
giSF2 ftgen 0, 0, -5, -2, 1150, 2000, 2140, 800, 700
giSF3 ftgen 0, 0, -5, -2, 2900, 2800, 2950, 2830, 2700
giSF4 ftgen 0, 0, -5, -2, 3900, 3600, 3900, 3800, 3800
giSF5 ftgen 0, 0, -5, -2, 4950, 4950, 4950, 4950, 4950
instr 1
kFund expon p4,p3,p5 ; fundamental
kVow line p6,p3,p7 ; vowel select
kBW line p8,p3,p9 ; bandwidth factor
iVoice = p10 ; voice select
kSrc line p11,p3,p12 ; source mix
</CsInstruments>
<CsScore>
; p4 = fundamental begin value (c.p.s.)
; p5 = fundamental end value
; p6 = vowel begin value (0 - 1 : a e i o u)
; p7 = vowel end value
; p8 = bandwidth factor begin (suggested range 0 - 2)
</CsScore>
</CsoundSynthesizer>
These examples have hopefully demonstrated the strengths of subtractive synthesis: its simplicity, its intuitive operation and its ability to create organic-sounding timbres. Further research could explore Csound's other filter opcodes, including vcomb, wguide1, wguide2, mode and the more esoteric phaser1, phaser2 and resony.
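As a starting point for such exploration, here is a minimal sketch using the mode opcode as a resonator. All values (excitation rate, resonance frequency, Q) are chosen freely for illustration:

```csound
instr ModeFilter
 ;excite the filter with short impulses, four per second
 aExc mpulse .5, .25
 ;ring the mode resonator at 440 Hz with a Q of 200
 aRes mode aExc, 440, 200
 out aRes, aRes
endin
schedule("ModeFilter", 0, 5)
```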
04 C. AMPLITUDE AND RING MODULATION
In Amplitude Modulation (AM) the amplitude of a carrier oscillator is modulated by the output of another oscillator, called the modulator. The carrier amplitude is thus the sum of a constant value, by tradition called DC Offset, and the modulator output.
If this modulation happens in the sub-audio range (less than 15 Hz), it is perceived as a periodic volume modification.1 Volume modulation above approximately 15 Hz is perceived as a change of timbre: so-called sidebands appear. This transition is shown in the following example, in which the modulation frequency starts at 2 Hz and moves over 20 seconds to 100 Hz.
EXAMPLE 04C01_Simple_AM.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
1 For classical string instruments there is a bow vibrato which resembles this effect. If the DC Offset is weak in comparison to the modulator output, the classical-music analogue is the tremolo effect. The term pulsation is also often used to describe AM with low frequencies.
instr 1
aRaise expseg 2, 20, 100
aModulator poscil 0.3, aRaise
iDCOffset = 0.3
aCarrier poscil iDCOffset+aModulator, 440
out aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
; example by Alex Hofmann and joachim heintz
Sidebands
The sidebands appear on both sides of the carrier frequency 𝑓𝑐 . The frequency of the side bands
is the sum and the difference between the carrier frequency and the modulator frequency: 𝑓𝑐 − 𝑓𝑚
and 𝑓𝑐 + 𝑓𝑚 . The amplitude of each sideband is half of the modulator’s amplitude.
So the sounding result of the following example can be calculated as this: 𝑓𝑐 = 440 Hz, 𝑓𝑚 = 40
Hz, so the result is a sound with 400, 440, and 480 Hz. The sidebands have an amplitude of 0.2.
The amplitude of the carrier frequency starts at 0.2, moves to 0.4, and finally moves to 0. Note that we use an alternative way of applying AM here, shown in the AM2 instrument: the carrier is multiplied by the sum of the modulator and the DC offset. This is equivalent to the signal flow of the AM1 instrument. It takes one more line, but now you can substitute any audio signal as carrier, not only an oscillator. So this is the bridge to using AM for the modification of sampled sound, as shown in chapter 05F.
EXAMPLE 04C02_Sidebands.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr AM1
aDC_Offset linseg 0.2, 1, 0.2, 5, 0.4, 3, 0
aModulator poscil 0.4, 40
aCarrier poscil aDC_Offset+aModulator, 440
out aCarrier, aCarrier
endin
instr AM2
aDC_Offset linseg 0.2, 1, 0.2, 5, 0.4, 3, 0
aModulator poscil 0.4, 40
aCarrier poscil 1, 440
aAM = aCarrier * (aModulator+aDC_Offset)
out aAM, aAM
endin
</CsInstruments>
<CsScore>
i "AM1" 0 10
i "AM2" 11 10
</CsScore>
</CsoundSynthesizer>
; example by Alex Hofmann and joachim heintz
At the end of this example, when the DC Offset reaches zero, we arrive at Ring Modulation (RM). Ring Modulation can thus be considered a special case of Amplitude Modulation, without any DC Offset: the carrier is simply multiplied by the modulator.2
If Ring Modulation happens in the sub-audio domain (less than 10 Hz), it will be perceived as tremolo.3 If it happens in the audio domain, we get a sound consisting only of the sidebands.
2 Here expressed as multiplication. The alternative would be to feed the modulator's output into the amplitude input of the carrier.
3 Note that the tremolo in RM is perceived as twice as fast as in AM, because every half period of the modulating sine is perceived as a period of its own.
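To make the model concrete, here is a minimal sketch of audio-range ring modulation, simply multiplying two sine oscillators. The frequencies 440 and 30 Hz are chosen freely; the result contains only the sidebands at 410 and 470 Hz:

```csound
instr SimpleRM
 aCarrier poscil .3, 440
 aModulator poscil 1, 30
 ;multiplication of both signals: RM without any DC offset
 aRM = aCarrier * aModulator
 out aRM, aRM
endin
schedule("SimpleRM", 0, 5)
```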
AM/RM of Complex Sounds
Consider a carrier signal which consists of three harmonics: 400, 800 and 1200 Hz. The ratio of these partials is 1 : 2 : 3, so our ear will perceive 400 Hz as base frequency. Ring Modulation with a frequency of 100 Hz will result in the frequencies 300, 500, 700, 900, 1100 and 1300 Hz. We now have a frequency every 200 Hz, and 400 Hz is no longer the base of the spectrum. (Instead, the result will be heard as partials 3, 5, 7, 9, 11 and 13 of a 100 Hz base frequency.) If we modulate with a frequency of 50 Hz instead, we get 350, 450, 750, 850, 1150 and 1250 Hz: again a shifted spectrum, definitely not with 400 Hz as base frequency.
EXAMPLE 04C03_RingMod.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Carrier
aPartial_1 poscil .2, 400
aPartial_2 poscil .2, 800
aPartial_3 poscil .2, 1200
gaCarrier = aPartial_1 + aPartial_2 + aPartial_3
out gaCarrier, gaCarrier
endin
instr RM
iModFreq = p4
aRM = gaCarrier * poscil:a(1,iModFreq)
out aRM, aRM
endin
</CsInstruments>
<CsScore>
i "Carrier" 0 14
i "RM" 3 3 100
i "RM" 9 3 50
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
04 D. FREQUENCY MODULATION
Basic Model
In FM synthesis, the frequency of one oscillator (called the carrier) is modulated by the signal from
another oscillator (called the modulator). The output of the modulating oscillator is added to the
frequency input of the carrier oscillator.
The amplitude of the modulator determines the amount of modulation, i.e. the frequency deviation from the fundamental carrier frequency, while the frequency of the modulator determines how often per second this deviation occurs. A modulator amplitude of 1 will alter the carrier frequency by ±1 Hz, whereas an amplitude of 10 will alter it by ±10 Hz. If the amplitude of the modulating signal is zero, there is no modulation and the output from the carrier oscillator is simply a sine wave with the frequency of the carrier. When modulation occurs, the signal from the modulating oscillator, a sine wave with frequency FM, drives the frequency of the carrier oscillator both above and below the carrier frequency FC. If the modulator runs in the sub-audio frequency range (below 20 Hz), the result of the modulation is vibrato. When the modulator's frequency rises into the audio range, we hear it as a change in the timbre of the carrier.
EXAMPLE 04D01_Frequency_modulation.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr FM_vibr
 aModulator poscil 10, 5 ;sub-audio modulator: +-10 Hz deviation, five times a second
 aCarrier poscil .3, 440+aModulator
 out aCarrier, aCarrier
endin
instr FM_timbr
 kModFreq expseg 5, p3, 500 ;modulator frequency rising into the audio range
 aModulator poscil 100, kModFreq
 aCarrier poscil .3, 440+aModulator
 out aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i "FM_vibr" 0 10
i "FM_timbr" 10 10
</CsScore>
</CsoundSynthesizer>
;example by marijana janevska
Carrier/Modulator Ratio
The position of the frequency components generated by FM depends on the relationship of the
carrier frequency to the modulating frequency FC :FM . This is called the ratio. When FC :FM is a
simple integer ratio, such as 4:1 (as in the case of two signals at 400 and 100 Hz), FM generates
harmonic spectra, that is sidebands that are integer multiples of the carrier and modulator fre-
quencies. When FC :FM is not a simple integer ratio, such as 8:2.1 (as in the case of two signals
at 800 and 210 Hz), FM generates inharmonic spectra (noninteger multiples of the carrier and
modulator).
EXAMPLE 04D02_Ratio.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Ratio
kRatio = p4
kCarFreq = 400
kModFreq = kCarFreq/kRatio
aModulator poscil 500, kModFreq
aCarrier poscil 0.3, kCarFreq + aModulator
aOut linen aCarrier, .1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Ratio" 0 5 2
i . + . 2.1
</CsScore>
</CsoundSynthesizer>
;example written by marijana janevska
Index of Modulation
FM of two sinusoids generates a series of sidebands around the carrier frequency FC . Each side-
band spreads out at a distance equal to a multiple of the modulating frequency FM .
The bandwidth of the FM spectrum (the number of sidebands) is controlled by the index of mod-
ulation 𝐼. The Index is defined mathematically according to the following relation:
I = AM / FM
where AM is the amount of frequency deviation (in Hz) from the carrier frequency. Hence, AM is a
way of expressing the depth or amount of modulation. The amplitude of each sideband depends
on the index of modulation. When there is no modulation, the index of modulation is zero and
all the signal power resides in the carrier frequency. Increasing the value of the index causes the
sidebands to acquire more power at the expense of the power of the carrier frequency. The wider
the deviation, the more widely distributed is the power among the sidebands and the greater the
number of sidebands that have significant amplitudes. The number of significant sideband pairs
(those that are more than 1/100 the amplitude of the carrier) is approximately I+1. For certain
values of the carrier and modulator frequencies and Index, extreme sidebands reflect out of the
upper and lower ends of the spectrum, causing audible side effects. When the lower sidebands
extend below 0 Hz, they reflect back into the spectrum in 180 degree phase inverted form. Negative
frequency components add richness to the lower frequency parts of the spectrum, but if negative
components overlap exactly with positive components, they can cancel each other. In simple FM,
both oscillators use sine waves as their source waveform, although any waveform can be used.
The FM can produce such rich spectra, that, when one waveform with a large number of spectral
components frequency modulates another, the resulting sound can be so dense that it sounds
harsh and undefined. Aliasing can occur easily.
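As a quick check of the I+1 rule of thumb, the significant sideband pairs can be printed at i-time. The values FC = 400 Hz, FM = 100 Hz and I = 3 are chosen freely for illustration (note that the fourth lower sideband already reaches 0 Hz and would reflect):

```csound
instr SidebandPairs
 iFC = 400
 iFM = 100
 iIndex = 3
 iN = 1
 ;print each sideband pair FC-n*FM and FC+n*FM up to I+1
 while iN <= iIndex+1 do
  prints "pair %d: %d Hz and %d Hz\n", iN, iFC-iN*iFM, iFC+iN*iFM
  iN += 1
 od
endin
schedule("SidebandPairs", 0, 0)
```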
EXAMPLE 04D03_Index.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Rising_index
kModAmp = 400
kIndex linseg 3, p3, 8
kModFreq = kModAmp/kIndex
aModulator poscil kModAmp, kModFreq
aCarrier poscil 0.3, 400 + aModulator
aOut linen aCarrier, .1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Rising_index" 0 10
</CsScore>
</CsoundSynthesizer>
;example by marijana janevska and joachim heintz
if R = C/M, then M = C/R; and
if I = D/M, then D = I · M.
EXAMPLE 04D04_Standard.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Standard
//input
iC = 400
iR = p4 ;ratio
iI = p5 ;index
prints "Ratio = %.3f, Index = %.3f\n", iR, iI
//transform
iM = iC / iR
iD = iI * iM
//sound
aModulator poscil iD, iM
aCarrier poscil .2, iC + aModulator
aOut linen aCarrier, .01, p3, .1
out aOut, aOut
endin
instr PlayMess
//input (randomly moving values)
kC randomi 200, 400, 1, 3
kR randomi 1, 2, 1, 3
kI randomi 2, 6, 1, 3
//transform
kM = kC / kR
kD = kI * kM
//sound
aModulator poscil kD, kM
aCarrier poscil .2, kC + aModulator
out aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
//changing the ratio at constant index=3
i "Standard" 0 3 1 3
i . + . 1.41 .
i . + . 1.75 .
i . + . 2.07 .
s
</CsScore>
</CsoundSynthesizer>
EXAMPLE 04D05_basic_FM_with_foscil.csd
<CsoundSynthesizer>
<CsOptions>
-odac -d
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 1024, 10, 1
instr 1 ;index randomly changing between 1 and 2, 20 times a second
kIndex randomi 1, 2, 20
aFM foscil .3, 110, 3, 1, kIndex, giSine
out aFM, aFM
endin
instr 2 ;denominator frequency also changing randomly between 100 and 120
kDenom randomi 100, 120, 20
kIndex randomi 1, 2, 20
aFM foscil .3, kDenom, 3, 1, kIndex, giSine
out aFM, aFM
endin
instr 3 ;modulator multiplier changing as well
kDenom randomi 100, 120, 20
kModMult randomi 1, 2, 20
kIndex randomi 1, 2, 20
aFM foscil .3, kDenom, 3, kModMult, kIndex, giSine
out aFM, aFM
endin
</CsInstruments>
<CsScore>
i 1 0 10
i 2 12 10
i 3 24 10
</CsScore>
</CsoundSynthesizer>
;example by Marijana Janevska
In the example above, in instr 1 the Carrier has a frequency of 330 Hz, the Modulator has a fre-
quency of 110 Hz and the value of the index changes randomly between 1 and 2, 20 times a sec-
ond. In instr 2, the value of the Denominator is not static. Its value changes randomly between 100
and 120, which makes all the other parameters’ values change (Carrier and Modulator frequencies
and Index). In instr 3 we add a changing value to the parameter, that when multiplied with the
Denominator value, gives the frequency of the Modulator, which gives even more complex spectra
because it affects the value of the Index, too.
More Complex FM Algorithms
Combining more than two oscillators (operators) is called complex FM synthesis. Operators can
be connected in different combinations: Multiple modulators FM and Multiple carriers FM.
In multiple modulator frequency modulation, more than one oscillator modulates a single carrier
oscillator. The carrier is always the last operator in the row. Changing its pitch shifts the whole
sound. All other operators are modulators, changing their pitch and especially amplitude alters
the sound-spectrum. Two basic configurations are possible: parallel and serial. In parallel MM FM, two sine waves simultaneously modulate a single carrier oscillator. The principle here is that Modulator1 and Modulator2 each modulate the carrier separately; their outputs are added together and applied to the carrier's frequency input.
EXAMPLE 04D06_Parallel_MM_FM.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr parallel_MM_FM
kAmpMod1 randomi 200, 500, 20
aModulator1 poscil kAmpMod1, 700
kAmpMod2 randomi 4, 10, 5
kFreqMod2 randomi 7, 12, 2
aModulator2 poscil kAmpMod2, kFreqMod2
kFreqCar randomi 50, 80, 1, 3
aCarrier poscil 0.2, kFreqCar+aModulator1+aModulator2
out aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i "parallel_MM_FM" 0 20
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann and Marijana Janevska
In serial MM FM, the output of the first modulator, added to a fixed frequency value, is fed to the frequency input of the second modulator, whose output in turn is applied to the frequency input of the carrier. This is much more complicated to calculate and the timbre becomes harder to predict, because Modulator1 modulated by Modulator2 already produces a complex spectrum, which then modulates the carrier.
EXAMPLE 04D07_Serial_MM_FM.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr serial_MM_FM
kAmpMod2 randomi 200, 1400, .5
aModulator2 poscil kAmpMod2, 700
kAmpMod1 linseg 400, 15, 1800
aModulator1 poscil kAmpMod1, 290+aModulator2
aCarrier poscil 0.2, 440+aModulator1
outs aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i "serial_MM_FM" 0 20
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann and Marijana Janevska
EXAMPLE 04D08_MC_FM.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr FM_two_carriers
aModulator poscil 100, randomi:k(10,15,1,3)
aCarrier1 poscil 0.3, 700 + aModulator
aCarrier2 poscil 0.1, 701 + aModulator
outs aCarrier1+aCarrier2, aCarrier1+aCarrier2
endin
</CsInstruments>
<CsScore>
i "FM_two_carriers" 0 20
</CsScore>
</CsoundSynthesizer>
;example by Marijana Janevska
EXAMPLE 04D09_Trumpet.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr simple_trumpet
kCarFreq = 440
kModFreq = 440
kIndex = 5
kIndexM = 0
kMaxDev = kIndex*kModFreq
kMinDev = kIndexM * kModFreq
kVarDev = kMaxDev-kMinDev
aEnv expseg .001, 0.2, 1, p3-0.3, 1, 0.2, 0.001
aModAmp = kMinDev+kVarDev*aEnv
aModulator poscil aModAmp, kModFreq
aCarrier poscil 0.3*aEnv, kCarFreq+aModulator
outs aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i "simple_trumpet" 0 2
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann
The following example uses the same instrument, with different settings to generate a bell-like
sound:
EXAMPLE 04D10_Bell.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr bell_like
kCarFreq = 200 ; 200/280 = 5:7 -> inharmonic spectrum
kModFreq = 280
kIndex = 12
kIndexM = 0
kMaxDev = kIndex*kModFreq
kMinDev = kIndexM * kModFreq
kVarDev = kMaxDev-kMinDev
aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aModAmp = kMinDev+kVarDev*aEnv
aModulator poscil aModAmp, kModFreq
aCarrier poscil 0.3*aEnv, kCarFreq+aModulator
outs aCarrier, aCarrier
endin
</CsInstruments>
<CsScore>
i "bell_like" 0 9
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann
For a feedback FM system, it can happen that the self-modulation comes to a zero point, which would hang the whole system. To avoid this, the carrier's table-lookup phase is modulated instead of its pitch.
The most famous FM synthesizer, the Yamaha DX7, is also based on the phase-modulation (PM) technique, because this allows feedback. The DX7 provides 6 operators and offers 32 routing combinations (algorithms) of these (cf. http://yala.freeservers.com/t2synths.htm#DX7).
To build a PM synth in Csound, the tablei opcode substitutes the FM oscillator. In order to step through the f-table, a phasor outputs the necessary phase values.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 1024, 10, 1
instr PM
kCarFreq = 200
kModFreq = 280
kModFactor = kCarFreq/kModFreq
kIndex = 12/6.28 ; 12/2pi to convert from radians to norm. table index
aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aModulator poscil kIndex*aEnv, kModFreq
aPhase phasor kCarFreq
aCarrier tablei aPhase+aModulator, giSine, 1, 0, 1
out aCarrier*aEnv, aCarrier*aEnv
endin
</CsInstruments>
<CsScore>
i "PM" 0 9
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann
In the last example we use the possibilities of self-modulation (feedback-modulation) of the oscil-
lator. So here the oscillator is both modulator and carrier. To control the amount of modulation,
an envelope scales the feedback.
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 1024, 10, 1
instr feedback_PM
kCarFreq = 200
kFeedbackAmountEnv linseg 0, 2, 0.2, 0.1, 0.3, 0.8, 0.2, 1.5, 0
aAmpEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aPhase phasor kCarFreq
aCarrier init 0 ; init for feedback
aCarrier tablei aPhase+(aCarrier*kFeedbackAmountEnv), giSine, 1, 0, 1
outs aCarrier*aAmpEnv, aCarrier*aAmpEnv
endin
</CsInstruments>
<CsScore>
i "feedback_PM" 0 9
</CsScore>
</CsoundSynthesizer>
;example by Alex Hofmann
04 E. WAVESHAPING
input value             output value
-0.5 or lower           -1
between -0.5 and 0.5    unchanged
0.5 or higher           1
Basic Implementation Model
A function table stores the transfer function. To create a table like the one above, you can use Csound's sub-routine
GEN07. This statement will create a table of 4096 points with the desired shape:
giTrnsFnc ftgen 0, 0, 4096, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
Now two problems must be solved. First, the index of the function table is not -1 to +1. Rather, it is
either 0 to 4095 in the raw index mode, or 0 to 1 in the normalized mode. The simplest solution is to
use the normalized index and scale the incoming amplitudes, so that an amplitude of -1 becomes
an index of 0, and an amplitude of 1 becomes an index of 1:
aIndx = (aAmp + 1) / 2
The other problem stems from the difference in accuracy between a sample value and a function table value. Every single sample is encoded as a 32-bit floating point number in standard audio applications, or even as a 64-bit float in Csound. A table with 4096 points, by contrast, offers only 12-bit resolution, so you will have a serious loss of accuracy (= sound quality) if you use the table values directly. Here, the solution is to use an interpolating table reader. The opcode tablei (instead of table) does this job. This opcode needs an extra point in the table for interpolating, so we give 4097 as the table size instead of 4096.
This is the code for simple waveshaping using our transfer function which has been discussed
previously:
EXAMPLE 04E01_Simple_waveshaping.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giTrnsFnc ftgen 0, 0, 4097, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
giSine ftgen 0, 0, 1024, 10, 1
instr 1
aAmp poscil 1, 400, giSine
aIndx = (aAmp + 1) / 2
aWavShp tablei aIndx, giTrnsFnc, 1
out aWavShp, aWavShp
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Powershape
The powershape opcode performs waveshaping by raising all samples to the power of a user-given exponent. Its main innovation is that the polarity of samples in the negative domain is retained: it performs the power function on absolute values and then reinstates the minus sign where required. It also normalises the input signal to between -1 and 1 before shaping, and rescales the output by the inverse of whatever multiple was required to normalise the input. This ensures useful results, but requires that the user states the maximum amplitude value expected in the opcode declaration and thereafter abides by that limit. The exponent, which the opcode refers to as shape amount, can be varied at k-rate, thereby facilitating the creation of dynamic spectra from a constant-spectrum input.
If we consider the simplest possible input - again a sine wave - a shape amount of 1 will produce
no change (raising any value to the power of 1 leaves that value unchanged).
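The polarity-preserving power function can be verified at i-time with a small sketch. The exponent 2.5 and the input value -0.5 are chosen freely: |-0.5| raised to 2.5 is about 0.177, and the minus sign is reinstated afterwards.

```csound
instr PolarityCheck
 iShape = 2.5
 iIn = -0.5
 ;power function on the absolute value, sign reinstated for negative input
 iOut = (iIn < 0 ? -(abs(iIn) ^ iShape) : iIn ^ iShape)
 prints "input %f -> output %f\n", iIn, iOut
endin
schedule("PolarityCheck", 0, 0)
```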
A shaping amount of 2.5 will visibly “squeeze” the waveform as values less than 1 become increas-
ingly biased towards the zero axis.
Much higher values will narrow the positive and negative peaks further. Below is the waveform
resulting from a shaping amount of 50.
Shape amounts less than 1 (but greater than zero) will give the opposite effect of drawing values
closer to -1 or 1. The waveform resulting from a shaping amount of 0.5 shown below is noticeably
more rounded than the sine wave input.
Reducing shape amount even closer to zero will start to show squaring of the waveform. The
result of a shape amount of 0.1 is shown below.
The sonograms of the five examples shown above are as shown below:
EXAMPLE 04E02_Powershape.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Powershape
iAmp = 0.2
iFreq = 300
aIn poscil iAmp, iFreq
ifullscale = iAmp
kShapeAmount linseg 1, 1.5, 1, .5, p4, 1.5, p4, .5, p5
aOut powershape aIn, kShapeAmount, ifullscale
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Powershape" 0 6 2.5 50
i "Powershape" 7 6 0.5 0.1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As power (shape amount) is increased from 1 through 2.5 to 50, it can be observed how harmonic
partials are added. It is worth noting also that when the power exponent is 50 the strength of
the fundamental has waned somewhat. What is not clear from the sonogram is that the partials
present are only the odd numbered ones. As the power exponent is reduced below 1 through
0.5 and finally 0.1, odd numbered harmonic partials again appear but this time the strength of the
fundamental remains constant. It can also be observed that aliasing is becoming a problem as ev-
idenced by the vertical artifacts in the sonograms for 0.5 and in particular 0.1. This is a significant
concern when using waveshaping techniques. Raising the sampling rate can provide additional
headroom before aliasing manifests but ultimately subtlety in waveshaping’s use is paramount.
Distort
The distort opcode, authored by Csound’s original creator Barry Vercoe, was originally part of the
Extended Csound project but was introduced into Canonical Csound in version 5. It waveshapes
an input signal according to a transfer function provided by the user using a function table. At
first glance this may seem to offer little more than what we have already demonstrated from first
principles, but it offers a number of additional features that enhance its usability. The input signal
first has soft-knee compression applied before being mapped through the transfer function. Input
gain is also provided via the distortion amount input argument and this provides dynamic control
of the waveshaping transformation. The result of using compression means that spectrally the
results are better behaved than is typical with waveshaping. A common transfer function would
be the hyperbolic tangent (tanh) function. Csound possesses a GEN routine, GENtanh, for the creation of tanh functions:
GENtanh
f # time size "tanh" start end rescale
By adjusting the start and end values we can modify the shape of the 𝑡𝑎𝑛ℎ transfer function and
therefore the aggressiveness of the waveshaping (start and end values should be the same ab-
solute values and negative and positive respectively if we want the function to pass through the
origin from the lower left quadrant to the upper right quadrant).
Start and end values of -1 and 1 will produce a gentle “s” curve.
This represents only a very slight deviation from a straight line function from (-1,-1) to (1,1) - which
would produce no distortion - therefore the effects of the above used as a transfer function will be
extremely subtle.
Start and end points of -5 and 5 will produce a much more dramatic curve and more dramatic
waveshaping:
f 1 0 1024 "tanh" -5 5 0
Note that the GEN routine's argument p7 for rescaling is set to zero, ensuring that the function only ever extends from -1 to 1. The values provided for start and end only alter the shape.
In the following test example a sine wave at 200 hz is waveshaped using distort and the tanh
function shown above.
EXAMPLE 04E03_Distort_1.csd
<CsoundSynthesizer>
<CsOptions>
-dm0 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 4096, 10, 1 ;sine wave
giTanh ftgen 0, 0, 1024, "tanh", -5, 5, 0 ;tanh transfer function
instr 1
aSig poscil 1, 200, giSine ; a sine wave
kAmt line 0, p3, 1 ; rising distortion amount
aDst distort aSig, kAmt, giTanh ; distort the sine tone
out aDst*0.1
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
As the distort amount is raised from zero to 1, it can be seen from the sonogram how upper partials emerge and gain in strength. Only the odd numbered partials are produced; therefore above the fundamental at 200 Hz, partials are present at 600, 1000, 1400 Hz and so on. If we want to restore the even numbered partials, we can simultaneously waveshape a sine at 400 Hz, one octave above the fundamental, as in the next example:
EXAMPLE 04E04_Distort_2.csd
<CsoundSynthesizer>
<CsOptions>
-dm0 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 4096, 10, 1 ;sine wave
giTanh ftgen 0, 0, 1024, "tanh", -5, 5, 0 ;tanh transfer function
instr 1
kAmt line 0, p3, 1 ; rising distortion amount
aSig poscil 1, 200, giSine ; a sine
aSig2 poscil kAmt*0.8,400,giSine ; a sine an octave above
aDst distort aSig+aSig2, kAmt, giTanh ; distort a mixture of the two sines
out aDst*0.1
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The higher of the two sines is faded in using the distortion amount control so that when distortion
amount is zero we will be left with only the fundamental. The sonogram looks like this:
What we hear this time is something close to a sawtooth waveform with a rising low-pass filter. The higher of the two input sines at 400 Hz will produce overtones at 1200, 2000, 2800 Hz … thereby filling in the missing partials.
04 F. GRANULAR SYNTHESIS
Concept Behind Granular Synthesis
In his Computer Music Tutorial, Curtis Roads gives an interesting introductory model for granular
synthesis. A sine as source waveform is modified by a repeating envelope. Each envelope period
creates one grain.
Figure 32.1: After Curtis Roads, Computer Music Tutorial, Fig. 5.11
In our introductory example, we will start with 1 Hz as frequency for the envelope oscillator, then rise to 10 Hz, then to 20, 50, 100 and finally 300 Hz. The grain durations are therefore 1 second, then 1/10 second, then 1/20, 1/50, 1/100 and 1/300 second. In a second run, we will use the same values, but add a random value to the frequency of the envelope generator, thus avoiding regularities.
EXAMPLE 04F01_GranSynthIntro.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giEnv ftgen 0, 0, 8192, 20, 2, 1 ;Hanning window as grain envelope
instr EnvFreq
printf " Envelope frequency rising from %d to %d Hz\n", 1, p4, p5
gkEnvFreq expseg p4, 3, p4, 2, p5
endin
instr GrainGenSync
puts "\nSYNCHRONOUS GRANULAR SYNTHESIS", 1
aEnv poscil .2, gkEnvFreq, giEnv
aOsc poscil aEnv, 400
aOut linen aOsc, .1, p3, .5
out aOut, aOut
endin
instr GrainGenAsync
puts "\nA-SYNCHRONOUS GRANULAR SYNTHESIS", 1
aEnv poscil .2, gkEnvFreq+randomi:k(0,gkEnvFreq,gkEnvFreq), giEnv
aOsc poscil aEnv, 400
aOut linen aOsc, .1, p3, .5
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "GrainGenSync" 0 30
i "EnvFreq" 0 5 1 10
i . + . 10 20
i . + . 20 50
i . + . 50 100
i . + . 100 300
b 31
i "GrainGenAsync" 0 30
i "EnvFreq" 0 5 1 10
i . + . 10 20
i . + . 20 50
i . + . 50 100
i . + . 100 300
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
We hear different characteristics, due to the regular or irregular sequence of the grains. To understand what happens, we will go deeper into this matter and advance to a more flexible model for grain generation.
Granular Synthesis Demonstrated Using First Principles
If we repeat a fragment of sound with regularity, there are two principal attributes that we are most concerned with. Firstly the duration of each sound grain is significant: if the grain duration is very small, typically less than 0.02 seconds, then less of the characteristics of the source sound will be evident. If the grain duration is greater than 0.02 seconds then more of the character of the source sound
or waveform will be evident. Secondly the rate at which grains are generated will be significant: if
grain generation is below 20 Hertz, i.e. less than 20 grains per second, then the stream of grains
will be perceived as a rhythmic pulsation; if rate of grain generation increases beyond 20 Hz then
individual grains will be harder to distinguish and instead we will begin to perceive a buzzing tone,
the fundamental of which will correspond to the frequency of grain generation. Any pitch contained
within the source material is not normally perceived as the fundamental of the tone whenever grain
generation is periodic, instead the pitch of the source material or waveform will be perceived as
a resonance peak (sometimes referred to as a formant); therefore transposition of the source
material will result in the shifting of this resonance peak.
It should also be noted how the amplitude of each grain is enveloped in instrument 2. If grains were
left unenveloped they would likely produce clicks on account of discontinuities in the waveform
produced at the beginning and ending of each grain.
Granular synthesis in which grain generation occurs with perceivable periodicity is referred to as
synchronous granular synthesis. Granular synthesis in which this periodicity is not evident is re-
ferred to as asynchronous granular synthesis.
EXAMPLE 04F02_GranSynth_basic.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 4096, 10, 1
instr 1
kRate expon p4,p3,p5 ; rate of grain generation
kTrig metro kRate ; a trigger to generate grains
kDur expon p6,p3,p7 ; grain duration
kForm expon p8,p3,p9 ; formant (spectral centroid)
; p1 p2 p3 p4
schedkwhen kTrig,0,0,2, 0, kDur,kForm ;trigger a note(grain) in instr 2
;print data to terminal every 1/2 second
printks "Rate:%5.2F Dur:%5.2F Formant:%5.2F%n", 0.5, kRate , kDur, kForm
endin
instr 2
iForm = p4
aEnv linseg 0,0.005,0.2,p3-0.01,0.2,0.005,0
aSig poscil aEnv, iForm, giSine
out aSig
endin
</CsInstruments>
<CsScore>
;p4 = rate begin
;p5 = rate end
;p6 = duration begin
;p7 = duration end
;p8 = formant begin
;p9 = formant end
; p1 p2 p3 p4 p5 p6 p7 p8 p9
i 1 0 30 1 100 0.02 0.02 400 400 ;demo of grain generation rate
i 1 31 10 10 10 0.4 0.01 400 400 ;demo of grain size
i 1 42 20 50 50 0.02 0.02 100 5000 ;demo of changing formant
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Csound has a number of opcodes that make working with FOF synthesis easier. We will be using
fof.
Information regarding frequency, bandwidth and intensity values that will produce various vowel
sounds for different voice types can be found in the appendix of the Csound manual here. These
values are stored in function tables in the FOF synthesis example. GEN07, which produces linear
break point envelopes, is chosen as we will then be able to morph continuously between vowels.
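Why linear break-point tables allow morphing can be sketched in a few lines of Python (an illustration, not part of the manual's Csound examples): a formant value is read from a list of per-vowel breakpoints with a normalized index, just as the instrument reads its GEN07 tables. The tenor first-formant values are taken from the example below:

```python
def read_breakpoints(pos, points):
    """Linear interpolation over a list of breakpoints, indexed by
    pos in 0..1 - a minimal model of reading a GEN07 table."""
    n = len(points) - 1
    x = pos * n
    i = min(int(x), n - 1)
    frac = x - i
    return points[i] + frac * (points[i + 1] - points[i])

# first formant (Hz) of the tenor voice for the vowels a e i o u,
# as stored in table giTF1 in the example below
tenor_f1 = [650, 400, 290, 400, 350]
```

Sliding pos continuously between 0 and 1 glides smoothly through the vowel sequence, which is exactly what the kVow line signal does in the instrument.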
EXAMPLE 04F03_Fof_vowels.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1
;TENOR
giTF1 ftgen 0, 0, -5, -2, 650, 400, 290, 400, 350
giTF2 ftgen 0, 0, -5, -2, 1080, 1700, 1870, 800, 600
giTF3 ftgen 0, 0, -5, -2, 2650, 2600, 2800, 2600, 2700
giTF4 ftgen 0, 0, -5, -2, 2900, 3200, 3250, 2800, 2900
giTF5 ftgen 0, 0, -5, -2, 3250, 3580, 3540, 3000, 3300
;COUNTER TENOR
giCTF1 ftgen 0, 0, -5, -2, 660, 440, 270, 430, 370
giCTF2 ftgen 0, 0, -5, -2, 1120, 1800, 1850, 820, 630
giCTF3 ftgen 0, 0, -5, -2, 2750, 2700, 2900, 2700, 2750
giCTF4 ftgen 0, 0, -5, -2, 3000, 3000, 3350, 3000, 3000
giCTF5 ftgen 0, 0, -5, -2, 3350, 3300, 3590, 3300, 3400
;ALTO
giAF1 ftgen 0, 0, -5, -2, 800, 400, 350, 450, 325
giAF2 ftgen 0, 0, -5, -2, 1150, 1600, 1700, 800, 700
giAF3 ftgen 0, 0, -5, -2, 2800, 2700, 2700, 2830, 2530
giAF4 ftgen 0, 0, -5, -2, 3500, 3300, 3700, 3500, 2500
giAF5 ftgen 0, 0, -5, -2, 4950, 4950, 4950, 4950, 4950
;SOPRANO
giSF1 ftgen 0, 0, -5, -2, 800, 350, 270, 450, 325
giSF2 ftgen 0, 0, -5, -2, 1150, 2000, 2140, 800, 700
giSF3 ftgen 0, 0, -5, -2, 2900, 2800, 2950, 2830, 2700
giSF4 ftgen 0, 0, -5, -2, 3900, 3600, 3900, 3800, 3800
giSF5 ftgen 0, 0, -5, -2, 4950, 4950, 4950, 4950, 4950
instr 1
kFund expon p4,p3,p5 ; fundamental
kVow line p6,p3,p7 ; vowel select
kBW line p8,p3,p9 ; bandwidth factor
iVoice = p10 ; voice select
endin
</CsInstruments>
<CsScore>
; p4 = fundamental begin value (c.p.s.)
; p5 = fundamental end value
; p6 = vowel begin value (0 - 1 : a e i o u)
; p7 = vowel end value
; p8 = bandwidth factor begin (suggested range 0 - 2)
; p9 = bandwidth factor end
; p10 = voice (0=bass; 1=tenor; 2=counter_tenor; 3=alto; 4=soprano)
; p1 p2 p3 p4 p5 p6 p7 p8 p9 p10
i 1 0 10 50 100 0 1 2 0 0
i 1 8 . 78 77 1 0 1 0 1
i 1 16 . 150 118 0 1 1 0 2
i 1 24 . 200 220 1 0 0.2 0 3
i 1 32 . 400 800 0 1 0.2 0 4
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The next example is based on the design of example 04F01.csd. Two streams of grains are generated.
The first stream begins as a synchronous stream, but as the note progresses the periodicity
of grain generation is eroded through the addition of an increasing degree of Gaussian noise. It will
be heard how the tone metamorphoses from one characterized by steady purity to one of fuzzy
airiness. The second stream applies a similar process of increasing indeterminacy to the formant
parameter (the frequency of the material within each grain).
Other parameters of granular synthesis such as the amplitude of each grain, grain duration, spa-
tial location etc. can be similarly modulated with random functions to offset the psychoacoustic
effects of synchronicity when using constant values.
EXAMPLE 04F04_Asynchronous_GS.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 1
0dbfs = 1
</CsInstruments>
<CsScore>
;p4 = rate
;p5 = duration
;p6 = formant
; p1 p2 p3 p4 p5 p6
i 1 0 12 200 0.02 400
i 2 12.5 12 200 0.02 400
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Several parameters are modulated slowly using Csound's random spline generator rspline. These
parameters are formant frequency, grain duration and grain density (the rate of grain generation). The
waveform used in generating the content for each grain is chosen randomly using a slow sample-and-hold
random function - a new waveform will be selected every 10 seconds. Five waveforms
are provided: a sawtooth, a square wave, a triangle wave, a pulse wave and a band-limited buzz-like
waveform. Some of these waveforms, particularly the sawtooth, square and pulse waveforms,
can generate very high overtones; for this reason a high sample rate is recommended to reduce
the risk of aliasing (see chapter 01A).
Current values for formant (cps), grain duration, density and waveform are printed to the terminal
every second. The key for waveforms is: 1:sawtooth; 2:square; 3:triangle; 4:pulse; 5:buzz.
EXAMPLE 04F05_grain3.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 96000
ksmps = 16
nchnls = 1
0dbfs = 1
instr 1
;random spline generates formant values in oct format
kOct rspline 4,8,0.1,0.5
;oct format values converted to cps format
kCPS = cpsoct(kOct)
;phase location is left at 0 (the beginning of the waveform)
kPhs = 0
;frequency (formant) randomization and phase randomization are not used
kFmd = 0
kPmd = 0
;grain duration and density (rate of grain generation)
kGDur rspline 0.01,0.2,0.05,0.2
kDens rspline 10,200,0.05,0.5
;maximum number of grain overlaps allowed. This is used as a CPU brake
iMaxOvr = 1000
;function table for source waveform for content of the grain
;a different waveform chosen once every 10 seconds
kFn randomh 1,5.99,0.1
;print info. to the terminal
printks "CPS:%5.2F%TDur:%5.2F%TDensity:%5.2F%TWaveform:%1.0F%n",
1, kCPS, kGDur, kDens, kFn
aSig grain3 kCPS, kPhs, kFmd, kPmd, kGDur, kDens, iMaxOvr, kFn, giWFn, 0, 0
out aSig*0.06
endin
</CsInstruments>
<CsScore>
i 1 0 300
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The final example introduces grain3’s two built-in randomizing functions for phase and pitch.
Phase refers to the location in the source waveform from which a grain will be read, pitch refers to
the pitch of the material within grains. In this example a long note is played, initially no randomiza-
tion is employed but gradually phase randomization is increased and then reduced back to zero.
The same process is applied to the pitch randomization amount parameter. This time the grain size
is relatively large: 0.8 seconds, and the density correspondingly low: 20 Hz.
EXAMPLE 04F06_grain3_random.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 1
0dbfs = 1
instr 1
kCPS = 100
kPhs = 0
kFmd transeg 0,21,0,0, 10,4,15, 10,-4,0
kPmd transeg 0,1,0,0, 10,4,1, 10,-4,0
kGDur = 0.8
kDens = 20
iMaxOvr = 1000
kFn = 1
;print info. to the terminal
endin
</CsInstruments>
<CsScore>
i 1 0 51
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
This chapter has introduced some of the concepts behind the synthesis of new sounds based on
simple waveforms by using granular synthesis techniques. Only two of Csound’s built-in opcodes
for granular synthesis, fof and grain3, have been used; it is beyond the scope of this work to cover
all of the many opcodes for granulation that Csound provides. This chapter has focused mainly
on synchronous granular synthesis; chapter 05G, which introduces granulation of recorded sound
files, makes greater use of asynchronous granular synthesis for time-stretching and pitch-shifting.
That chapter will also introduce some of Csound's other opcodes for granular synthesis.
04 G. PHYSICAL MODELLING
With physical modelling we employ a completely different approach to synthesis than we do with
all other standard techniques. Unusually, the focus is not primarily on producing a sound but on
modelling a physical process; if this process exhibits certain features, such as periodic oscillation
within a frequency range of 20 to 20000 Hz, it will produce sound.
Physical modelling synthesis techniques do not build sound using wave tables, oscillators and
audio signal generators, instead they attempt to establish a model, as a system in itself, which
can then produce sound because of how the system varies with time. A physical model usually
derives from the real physical world, but could be any time-varying system. Physical modelling is
an exciting area for the production of new sounds.
Compared with the complexity of a real-world physically dynamic system, a physical model will
most likely represent a brutal simplification. Nevertheless, using this technique will demand a lot
of formulae, because physical models are described in terms of mathematics. Although designing
a model may require considerable work, once established the results commonly exhibit a
lively tone with time-varying partials and a “natural” difference between attack and release by their
very design - features that other synthesis techniques demand much more effort from the end user
to establish.
Csound already contains many ready-made physical models as opcodes, but you can still build
your own from scratch. This chapter will look at how to implement two classical models from first
principles and then introduce a number of Csound's ready-made physical modelling opcodes.
Many oscillating processes in nature can be modelled as connections of masses and springs.
Imagine one mass-spring unit which has been set into motion. This system can be described as
a sequence of states, where every new state results from the two preceding ones. Assume the
first state a0 is 0 and the second state a1 is 0.5. Without the restricting force of the spring, the
mass would continue moving unimpeded at a constant velocity:
As the velocity between the first two states can be described as 𝑎1 − 𝑎0, the value of the third state
𝑎2 will be:

𝑎2 = 𝑎1 + (𝑎1 − 𝑎0) = 0.5 + 0.5 = 1

But the spring pulls the mass back with a force which increases the further the mass moves away
from the point of equilibrium. Therefore the mass's movement can be described as the product
of a constant factor 𝑐 and the last position 𝑎1. This damps the continuous movement of the mass,
so that for a factor c = 0.4 the next position will be:

𝑎2 = 𝑎1 + (𝑎1 − 𝑎0) − 𝑐 · 𝑎1 = 0.5 + 0.5 − 0.2 = 0.8
Csound can easily calculate the values by simply applying the formulae. For the first k-cycle (see
chapter 03A for more information about Csound's performance loops), they are set via the init
opcode. After calculating the new state, a1 becomes a0 and a2 becomes a1 for the next k-cycle.
In the next csd the new values will be printed five times per second (the states are named here as
k0/k1/k2 instead of a0/a1/a2, because k-rate values are needed for printing instead of audio samples).
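The recursion can also be checked in a few lines of Python (an illustration; the Csound version follows below):

```python
def mass_spring(a0=0.0, a1=0.5, c=0.4, steps=20):
    """Iterate the recursion from the text: every new state results
    from the two preceding ones, damped by the spring factor c."""
    states = [a0, a1]
    for _ in range(steps):
        a2 = a1 + (a1 - a0) - c * a1
        states.append(a2)
        a0, a1 = a1, a2   # the new state becomes the old one
    return states

states = mass_spring()
```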
EXAMPLE 04G01_Mass_spring_sine.csd
<CsoundSynthesizer>
<CsOptions>
-n ;no sound
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8820 ;5 steps per second
instr PrintVals
;initial values
kstep init 0
k0 init 0
k1 init 0.5
kc init 0.4
;calculation of the next value
k2 = k1 + (k1 - k0) - kc * k1
printks "Sample=%d: k0 = %.3f, k1 = %.3f, k2 = %.3f\n", 0, kstep, k0, k1, k2
;update values for the next step
kstep = kstep+1
k0 = k1
k1 = k2
endin
</CsInstruments>
<CsScore>
i "PrintVals" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
So, a sine wave has been created, without the use of any of Csound’s oscillators…
EXAMPLE 04G02_MS_sine_audible.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1
instr MassSpring
;initial values
a0 init 0
a1 init 0.05
ic = 0.01 ;spring constant
;calculation of the next value
a2 = a1+(a1-a0) - ic*a1
outs a0, a0
;update values for the next step
a0 = a1
a1 = a2
endin
</CsInstruments>
<CsScore>
i "MassSpring" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, after martin neukom
As the next sample is calculated in the next control cycle, either ksmps has to be set to 1, or a
setksmps statement with the same effect must be used in the instrument. The resulting frequency
depends on the spring constant: the higher the constant, the higher the frequency. The resulting
amplitude depends on both the starting value and the spring constant.
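The dependence of frequency on the spring constant can be verified numerically. The following Python fragment (an illustration, not part of the manual's Csound examples) runs the same recursion at audio rate for two spring constants and estimates the resulting frequencies from zero crossings:

```python
def est_freq(c, sr=44100):
    """Run a2 = a1 + (a1 - a0) - c*a1 for one second at sample rate sr
    and estimate the frequency from the number of zero crossings."""
    a0, a1 = 0.0, 0.05
    crossings = 0
    for _ in range(sr):
        a2 = a1 + (a1 - a0) - c * a1
        if (a1 <= 0.0 < a2) or (a2 <= 0.0 < a1):
            crossings += 1
        a0, a1 = a1, a2
    return crossings / 2.0   # two zero crossings per period

low = est_freq(0.01)   # the spring constant of the example above
high = est_freq(0.04)  # a stiffer spring gives a higher frequency
```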
This simple model shows the basic principle of a physical modelling synthesis: creating a system
which produces sound because it varies in time. Certainly it is not the goal of physical modelling
synthesis to reinvent the wheel of a sine wave. But modulating the parameters of a model may
lead to interesting results. The next example varies the spring constant, which is now no longer a
constant:
EXAMPLE 04G03_MS_variable_constant.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr MassSpring
;set ksmps=1 in this instrument
setksmps 1
;initial values
a0 init 0
a1 init 0.05
kc randomi .001, .05, 8, 3
;calculation of the next value
a2 = a1+(a1-a0) - kc*a1
outs a0, a0
;update values for the next step
a0 = a1
a1 = a2
endin
</CsInstruments>
<CsScore>
i "MassSpring" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, after martin neukom
Working with physical modelling demands thinking in more physical or mathematical terms: you
might, for example, change the formula once a certain value of 𝑐 has been reached, or combine
more than one spring.
We get for 𝑇 = 1
𝑣𝑡 = 𝑥𝑡 − 𝑥𝑡−1
𝑎𝑡 = 𝑣𝑡 − 𝑣𝑡−1
If we know the position and velocity of a point at time 𝑡−1 and are able to calculate its acceleration
at time 𝑡 we can calculate the velocity 𝑣𝑡 and the position 𝑥𝑡 at time 𝑡:
𝑣𝑡 = 𝑣𝑡−1 + 𝑎𝑡 and
𝑥𝑡 = 𝑥𝑡−1 + 𝑣𝑡
1. init x and v
2. calculate a
3. v += a ; v = v + a
4. x += v ; x = x + v
Example 1: The acceleration of gravity is constant (g = −9.81 m/s²). For a mass with initial position
x = 300 m (above ground) and velocity v = 70 m/s (upwards) we get the following trajectory (path):
g = -9.81; x = 300; v = 70; Table[v += g; x += v, {16}];
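The same two-line update can be transcribed directly to Python (an illustrative transcription of the snippet above); after 16 steps the mass has risen, turned around, and nearly fallen back to the ground:

```python
g, x, v = -9.81, 300.0, 70.0   # acceleration (m/s^2), position (m), velocity (m/s)
trajectory = []
for _ in range(16):
    v += g   # velocity changes by the constant acceleration
    x += v   # position changes by the current velocity
    trajectory.append(x)
```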
Example 2: The acceleration a of a mass on a spring is proportional (with factor –c) to its position
(deflection) x.
x = 0; v = 1; c = .3; Table[a = -c*x; v += a; x += v, {22}];
Introducing damping
Since damping is proportional to the velocity we reduce velocity at every time step by a certain
amount d:
v *= (1 - d)
Introducing excitation
In examples 2 and 3 the systems oscillate because of their initial velocity v = 1. The resultant
oscillation is the impulse response of the systems. We can excite the systems continuously by
adding a value exc to the velocity at every time step.
v += exc;
Example 4: Damped spring with random excitation (resonator with noise as input)
d = .01; s = 0; v = 0;
Table[a = -.3*s; v += a; v += RandomReal[{-1, 1}];
v *= (1 - d); s += v, {61}];
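Example 4 can also be transcribed to Python (an illustrative sketch: the random excitation makes the exact values differ from run to run, but the behavior - a bounded, oscillating resonance driven by noise - is the same):

```python
import random

rng = random.Random(42)
d, c = 0.01, 0.3      # damping amount and spring factor from the snippet above
s, v = 0.0, 0.0       # position and velocity
out = []
for _ in range(2000):
    a = -c * s                  # spring acceleration
    v += a
    v += rng.uniform(-1, 1)     # continuous random excitation
    v *= (1 - d)                # damping
    s += v
    out.append(s)
```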
EXAMPLE 04G04_lin_reson.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
aexc rand p4
aout lin_reson aexc,p5,p6
out aout
endin
</CsInstruments>
<CsScore>
; p4 p5 p6
; excitation freq damping
i1 0 5 .0001 440 .0001
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
The following trajectory shows that the frequency decreases with increasing amplitude and that
the pendulum can turn around.
d = .003; s = 0; v = 0;
Table[a = f[s]; v += a; v += RandomReal[{-.09, .1}]; v *= (1 - d);
s += v, {400}];
We can implement systems with accelerations that are arbitrary functions of position x.
EXAMPLE 04G05_nonlin_reson.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
kenv oscil p4,.5,1
aexc rand kenv
aout nonlin_reson aexc,p5,p6,p7
out aout
endin
</CsInstruments>
<CsScore>
f1 0 1024 10 1
f2 0 1024 7 -1 510 .15 4 -.15 510 1
f3 0 1024 7 -1 350 .1 100 -.3 100 .2 100 -.1 354 1
; p4 p5 p6 p7
; excitation c1 damping ifn
i1 0 20 .0001 .01 .00001 3
;i1 0 20 .0001 .01 .00001 2
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
The equation describes a linear oscillator d²x/dt² = −ω²·x with an additional nonlinear term
μ·(1 − x²)·dx/dt. When |x| > 1, the nonlinear term results in damping, but when |x| < 1, negative
damping results, which means that energy is introduced into the system.
Such oscillators compensating for energy loss by an inner energy source are called self-sustained
oscillators.
v = 0; x = .001; ω = 0.1; μ = 0.25;
snd = Table[v += (-ω^2*x + μ*(1 - x^2)*v); x += v, {200}];
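A Python transcription of this Van der Pol recursion (an illustration; it is run somewhat longer than the 200 steps above so that the limit cycle is clearly reached) shows the two defining properties of a self-sustained oscillator: the oscillation grows out of an almost silent initial state and then remains bounded:

```python
v, x = 0.0, 0.001
omega, mu = 0.1, 0.25
snd = []
for _ in range(2000):
    v += -omega**2 * x + mu * (1 - x**2) * v   # linear spring + nonlinear term
    x += v
    snd.append(x)
```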
The constant ω is the angular frequency of the linear oscillator (μ = 0). For a simulation with
sampling rate sr we calculate the frequency f in Hz as
𝑓 = 𝜔 · 𝑠𝑟/2𝜋
Since the simulation is only an approximation of the oscillation, this formula gives good results
only for low frequencies. The exact frequency of the simulation is given by

𝜔² = 2 − 2·cos(𝑓 · 2𝜋/𝑠𝑟), i.e. 𝑓 = (𝑠𝑟/2𝜋) · arccos(1 − 𝜔²/2)
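This relationship can be checked numerically. The sketch below (Python, an illustration) simulates the linear oscillator (μ = 0) for one second, counts zero crossings of x, and compares the measured frequency with both formulas; solving 2 − 2·cos(f·2π/sr) = ω² for f gives f = (sr/2π)·arccos(1 − ω²/2):

```python
import math

def simulated_freq(omega, sr=44100):
    """Simulate v += -omega^2 * x; x += v for one second and
    estimate the frequency from zero crossings of x."""
    x, v = 0.05, 0.0
    crossings = 0
    for _ in range(sr):
        v += -omega**2 * x
        prev = x
        x += v
        if (prev <= 0.0 < x) or (x <= 0.0 < prev):
            crossings += 1
    return crossings / 2.0   # two zero crossings per period

omega = 1.0
f_naive = omega * 44100 / (2 * math.pi)                        # ~7019 Hz
f_exact = 44100 / (2 * math.pi) * math.acos(1 - omega**2 / 2)  # ~7350 Hz
```

For ω = 0.1 the two formulas agree to within a fraction of a hertz, but at ω = 1.0 the naive formula is already about 5% low, which is what the text means by "good results only for low frequencies".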
With increasing μ the oscillation's nonlinearity becomes stronger and more overtones arise (and
at the same time the frequency becomes lower). The following figure shows the spectrum of the
oscillation for various values of μ.
Certain oscillators can be synchronized either by an external force or by mutual influence. Examples
of synchronization by an external force are the control of cardiac activity by a pacemaker and
the adjusting of a clock by radio signals. An example of the mutual synchronization of oscillating
systems is the coordinated clapping of an audience. These systems have in common that they
are not linear and that they oscillate without external excitation (self-sustained oscillators).
The UDO v_d_p represents a Van der Pol oscillator with a natural frequency kfr and a nonlinearity
factor kmu. It can be excited by a sine wave of frequency kfex and amplitude kaex. The range of
frequency within which the oscillator is synchronized to the exciting frequency increases as kmu
and kaex increase.
EXAMPLE 04G06_van_der_pol.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
kaex = .001
kfex = 830
kamp = .15
kf = 455
kmu linseg 0,p3,.7
a1 poscil kaex,kfex
aout v_d_p a1,kf,kmu
out kamp*aout,a1*100
endin
</CsInstruments>
<CsScore>
i1 0 20
</CsScore>
</CsoundSynthesizer>
;example by martin neukom, adapted by joachim heintz
The variation of the phase difference between excitation and oscillation, as well as the transitions
between synchronous, beating and asynchronous behaviors, can be visualized by showing the
sum of the excitation and the oscillation signals in a phase diagram. The following figures show
to the upper left the waveform of the Van der Pol oscillator, to the lower left that of the excitation
(normalized) and to the right the phase diagram of their sum. For these figures, the same values
were always used for kfr, kmu and kaex. Comparing the first two figures, one sees that the oscilla-
tor adopts the exciting frequency kfex within a large frequency range. When the frequency is low
(figure a), the phases of the two waves are nearly the same. Hence there is a large deflection along
the x-axis in the phase diagram showing the sum of the waveforms. When the frequency is high,
the phases are nearly inverted (figure b) and the phase diagram shows only a small deflection.
Figure c shows the transition to asynchronous behavior. If the ratio between the natural
frequency of the oscillator kfr and the excitation frequency kfex is approximately simple (kfex/kfr
≅ m/n), then within a certain range the frequency of the Van der Pol oscillator is synchronized so
that kfex/kfr = m/n. Here one speaks of higher-order synchronization (figure d).
This is what happens for a buffer of five values, for the first five steps:
initial state 1 -1 1 1 -1
step 1 0 1 -1 1 1
step 2 1 0 1 -1 1
step 3 0 1 0 1 -1
step 4 0 0 1 0 1
step 5 0.5 0 0 1 0
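The table can be reproduced with a few lines of Python (an illustration of the same buffer arithmetic, not part of the Csound example): at every step the last two values are averaged, the buffer shifts one position to the right, and the mean is written at the beginning.

```python
def ks_step(buf):
    """One step of the toy algorithm: average the last two values,
    shift everything one position to the right, and write the mean
    at the beginning of the buffer."""
    mean = (buf[-1] + buf[-2]) / 2
    return [mean] + buf[:-1]

state = [1, -1, 1, 1, -1]
history = []
for _ in range(5):
    state = ks_step(state)
    history.append(state)
```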
The next Csound example represents the content of the buffer in a function table, implements and
executes the algorithm, and prints the result after every five steps, which here is referred to as one
cycle:
EXAMPLE 04G07_KarplusStrong.csd
<CsoundSynthesizer>
<CsOptions>
-n
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
opcode KS, 0, ii
;performs the karplus-strong algorithm
iTab, iTbSiz xin
;calculate the mean of the last two values
iUlt tab_i iTbSiz-1, iTab
iPenUlt tab_i iTbSiz-2, iTab
iNewVal = (iUlt + iPenUlt) / 2
;shift values one position to the right
indx = iTbSiz-2
loop:
iVal tab_i indx, iTab
tabw_i iVal, indx+1, iTab
loop_ge indx, 1, 0, loop
;fill the new value at the beginning of the table
tabw_i iNewVal, 0, iTab
endop
instr ShowBuffer
;fill the function table
iTab ftgen 0, 0, -5, -2, 1, -1, 1, 1, -1
iTbLen tableng iTab
endin
</CsInstruments>
<CsScore>
i "ShowBuffer" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
It can be seen clearly that the values get smoothed more and more from cycle to cycle. As the
buffer size is very small here, the values tend toward a constant level; in this case 0.333. But
for larger buffer sizes, after some cycles the buffer content has the effect of a period which is
repeated with a slight loss of amplitude. This is how it sounds if the buffer size is 1/100 of a second
(441 samples at sr = 44100):
EXAMPLE 04G08_Plucked.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1
instr 1
;delay time
iDelTm = 0.01
;fill the delay line with either -1 or 1 randomly
kDur timeinsts
if kDur < iDelTm then
aFill rand 1, 2, 1, 1 ;values 0-2
aFill = floor(aFill)*2 - 1 ;just -1 or +1
else
aFill = 0
endif
;delay and feedback
aUlt init 0 ;last sample in the delay line
aUlt1 init 0 ;delayed by one sample
aMean = (aUlt+aUlt1)/2 ;mean of these two
aUlt delay aFill+aMean, iDelTm
aUlt1 delay1 aUlt
outs aUlt, aUlt
endin
</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, after martin neukom
This sound resembles a plucked string: at the beginning the sound is noisy, but after a short period
of time it exhibits periodicity. As can be heard, unlike a natural string, the steady state is virtually
endless, so for practical use it needs some fade-out. The frequency the listener perceives is related
to the length of the delay line: if the delay line is 1/100 of a second, the perceived frequency is 100
Hz. Compared with a sine wave of similar frequency, the inherent periodicity can be seen, and also
the rich overtone structure.
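The perceived pitch can be verified with a Python sketch of the same delay-and-average recursion (an illustration; the random initialization means the waveform differs from run to run, but the period is fixed by the delay length). With a 441-sample delay line the output settles into a waveform whose period is approximately 441 samples, i.e. roughly 100 Hz at sr = 44100:

```python
import random
from collections import deque

sr, N = 44100, 441                  # sample rate and delay line length
rng = random.Random(7)
# fill the delay line with random -1/+1 values (the initial noise burst)
line = deque((rng.choice([-1.0, 1.0]) for _ in range(N)), maxlen=N)
out = []
for _ in range(sr):                 # one second of output
    new = 0.5 * (line[-1] + line[-2])   # mean of the two oldest values
    out.append(new)
    line.appendleft(new)            # maxlen drops the oldest value
```

The exact period differs from N by half a sample, depending on which two delayed samples are averaged; at this buffer size the difference is inaudible.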
are based on waveguides. A waveguide, in its broadest sense, is some sort of mechanism that
limits the extent of oscillations, such as a vibrating string fixed at both ends or a pipe. In these
sorts of physical model a delay is used to emulate these limits. One of these, wgbow, implements
an emulation of a bowed string. Perhaps the most interesting aspect of many physical models
is not specifically whether they accurately emulate the target instrument played in a conventional
way, but the facilities they provide for extending the physical limits of the instrument and how
it is played - there are already vast sample libraries and software samplers for emulating conventional
instruments played conventionally. wgbow offers several interesting options for experimentation,
including the ability to modulate the bow pressure and the bowing position at k-rate. Varying
bow pressure will change the tone of the sound produced by changing the harmonic emphasis.
As bow pressure reduces, the fundamental of the tone becomes weaker and overtones become
more prominent. If the bow pressure is reduced further, the ability of the system to produce a
resonance at all collapses. This boundary between tone production and the inability to produce a
tone can provide some interesting new sound effects. The following example explores this sound
area by modulating the bow pressure parameter around this threshold. Some additional features
of the example are that seven different notes are played simultaneously, that the bow pressure
modulations in the right channel are delayed by a varying amount with respect to the left channel
in order to create a stereo effect, and that a reverb has been added.
EXAMPLE 04G09_wgbow.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
gaSendL,gaSendR init 0
instr 2 ; reverb
aRvbL,aRvbR reverbsc gaSendL,gaSendR,0.9,7000
outs aRvbL,aRvbR
clear gaSendL,gaSendR
endin
</CsInstruments>
<CsScore>
; instr. 1
; p4 = pitch (hz.)
; p5 = minimum bow pressure
; p6 = maximum bow pressure
; 7 notes played by the wgbow instrument
i 1 0 480 70 0.03 0.1
i 1 0 480 85 0.03 0.1
i 1 0 480 100 0.03 0.09
i 1 0 480 135 0.03 0.09
i 1 0 480 170 0.02 0.09
i 1 0 480 202 0.04 0.1
i 1 0 480 233 0.05 0.11
; reverb instrument
i 2 0 480
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
This time a stack of eight sustaining notes, each separated by an octave, varies its bowing position
randomly and independently. You will hear how different bowing positions accentuate and
attenuate different partials of the bowing tone. To enhance the sound produced, some filtering
with tone and pareq is employed and some reverb is added.
EXAMPLE 04G10_wgbow_enhanced.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
gaSend init 0
instr 2 ; reverb
aRvbL,aRvbR reverbsc gaSend,gaSend,0.9,7000
outs aRvbL,aRvbR
clear gaSend
endin
</CsInstruments>
<CsScore>
; instr. 1 (wgbow instrument)
; p4 = pitch (hertz)
; wgbow instrument
i 1 0 480 20
i 1 0 480 40
i 1 0 480 80
i 1 0 480 160
i 1 0 480 320
i 1 0 480 640
i 1 0 480 1280
i 1 0 480 2460
; reverb instrument
i 2 0 480
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
All of the wg- family of opcodes are worth exploring, and often the approach taken here - exploring
each input parameter in isolation whilst the others retain constant values - sets the path to
understanding the model better. Tone production with wgbrass is very much dependent upon the
relationship between intended pitch and lip tension; random experimentation with this opcode is
as likely to result in silence as in sound, and in this way it is perhaps a reflection of the experience
of learning a brass instrument, where the student spends most time pushing air silently through the
instrument. With patience it is capable of some interesting sounds, however. In its case, I would
recommend building a realtime GUI and exploring the interaction of its input arguments that way.
wgbowedbar, like a number of physical modelling algorithms, is rather unstable. This is not necessarily
a design flaw in the algorithm but instead perhaps an indication that the algorithm has been
left quite open for our experimentation - or abuse. In these situations caution is advised in order to
protect ears and loudspeakers. Positive feedback within the model can result in signals of enormous
amplitude very quickly. Employing the clip opcode as a means of some protection is
recommended when experimenting in realtime.
evidently have a more dramatic effect on the sound produced than others, and again it is recommended
to create a realtime GUI for exploration. Nonetheless, a fixed example is provided below
that should offer some insight into the kinds of sounds possible.
Probably the most important parameter for us is the stiffness of the bar. This actually provides
us with our pitch control, but it is not specified in cycles per second, so some experimentation will
be required to find a desired pitch. There is a relationship between stiffness and the parameter
used to define the width of the strike: when the stiffness coefficient is higher, a wider strike may
be required in order for the note to sound. Strike width also impacts upon the tone produced,
narrower strikes generating emphasis upon upper partials (provided a tone is still produced)
whilst wider strikes tend to emphasize the fundamental.
The parameter for strike position also has some impact upon the spectral balance. This effect may
be more subtle and may be dependent upon some other parameter settings, for example, when
strike width is particularly wide, its effect may be imperceptible. A general rule of thumb here is
that in order to achieve the greatest effect from strike position, strike width should be as low as
will still produce a tone. This kind of interdependency between input parameters is the essence of
working with a physical model that can be both intriguing and frustrating.
An important parameter that will vary the impression of the bar from metal to wood is
An interesting feature incorporated into the model is the ability to modulate the point along the bar
at which vibrations are read. This could also be described as the pick-up position. Moving this scanning
location results in tonal and amplitude variations. We only have control over the frequency at
which the scanning location is modulated.
EXAMPLE 04G11_barmodel.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; boundary conditions 1=fixed 2=pivot 3=free
kbcL = 1
kbcR = 1
; stiffness
iK = p4
; high freq. loss (damping)
ib = p5
; scanning frequency
kscan rspline p6,p7,0.2,0.8
; time to reach 30db decay
iT30 = p3
; strike position
ipos random 0,1
; strike velocity
ivel = 1000
; width of strike
iwid = 0.1156
aSig barmodel kbcL,kbcR,iK,ib,kscan,iT30,ipos,ivel,iwid
outs aSig, aSig
endin
</CsInstruments>
<CsScore>
;t 0 90 1 30 2 60 5 90 7 30
; p4 = stiffness (pitch)
#define gliss(dur'Kstrt'Kend'b'scan1'scan2)
#
i 1 0 20 $Kstrt $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur > $b $scan1 $scan2
i 1 ^+0.05 $dur $Kend $b $scan1 $scan2
#
$gliss(15'40'400'0.0755'0.1'2)
b 5
$gliss(2'80'800'0.755'0'0.1)
b 10
$gliss(3'10'100'0.1'0'0)
b 15
$gliss(40'40'433'0'0.2'5)
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
EXAMPLE 04G12_PhiSEM.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1 ; tambourine
iAmp = p4
iDettack = 0.01
iNum = p5
iDamp = p6
iMaxShake = 0
iFreq = p7
iFreq1 = p8
iFreq2 = p9
aSig tambourine iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out aSig
endin
instr 2 ; bamboo
iAmp = p4
iDettack = 0.01
iNum = p5
iDamp = p6
iMaxShake = 0
iFreq = p7
iFreq1 = p8
iFreq2 = p9
aSig bamboo iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out aSig
endin
instr 3 ; sleighbells
iAmp = p4
iDettack = 0.01
iNum = p5
iDamp = p6
iMaxShake = 0
iFreq = p7
iFreq1 = p8
iFreq2 = p9
aSig sleighbells iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out aSig
endin
</CsInstruments>
<CsScore>
; p4 = amp.
; p5 = number of timbrels
; p6 = damping
; p7 = freq (main)
; p8 = freq 1
; p9 = freq 2
; tambourine
i 1 0 1 0.1 32 0.47 2300 5600 8100
i 1 + 1 0.1 32 0.47 2300 5600 8100
b 10
; bamboo
i 2 0 1 0.4 1.25 0.0 2800 2240 3360
i 2 + 1 0.4 1.25 0.0 2800 2240 3360
i 2 + 2 0.4 1.25 0.05 2800 2240 3360
i 2 + 2 0.2 10 0.05 2800 2240 3360
i 2 + 1 0.3 16 0.01 2000 4000 8000
i 2 + 1 0.3 16 0.01 1000 2000 3000
i 2 8 2 0.1 1 0.05 1257 2653 6245
i 2 8 2 0.1 1 0.05 1073 3256 8102
i 2 8 2 0.1 1 0.05 514 6629 9756
b 20
; sleighbells
i 3 0 1 0.7 1.25 0.17 2500 5300 6500
i 3 + 1 0.7 1.25 0.17 2500 5300 6500
i 3 + 2 0.7 1.25 0.3 2500 5300 6500
i 3 + 2 0.4 10 0.3 2500 5300 6500
i 3 + 1 0.5 16 0.2 2000 4000 8000
i 3 + 1 0.5 16 0.2 1000 2000 3000
i 3 8 2 0.3 1 0.3 1257 2653 6245
i 3 8 2 0.3 1 0.3 1073 3256 8102
i 3 8 2 0.3 1 0.3 514 6629 9756
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
Physical modelling can produce rich, spectrally dynamic sounds with user manipulation usually
abstracted to a small number of descriptive parameters. Csound offers a wealth of other opcodes
for physical modelling which cannot all be introduced here so the user is encouraged to explore
based on the approaches exemplified here. You can find lists in the chapters Models and Emula-
tions, Scanned Synthesis and Waveguide Physical Modeling of the Csound Manual.
04 H. SCANNED SYNTHESIS
Scanned Synthesis is a relatively new synthesis technique invented by Max Mathews, Rob Shaw
and Bill Verplank at Interval Research in 2000. This algorithm uses a combination of a table-lookup
oscillator and Isaac Newton's mechanical model (equation) of a mass and spring system
to dynamically change the values stored in an f-table. The sonic result is a timbral spectrum that
changes with time.
Csound has a couple of opcodes dedicated to scanned synthesis, and these opcodes can be used
not only to make sounds, but also to generate dynamic f-tables for use with other Csound opcodes.
A Quick Scanned Synth
The simplest scanned synthesis opcode is scantable:
aout scantable kamp, kfrq, ipos, imass, istiff, idamp, ivel
The arguments kamp and kfrq should be familiar, amplitude and frequency respectively. The other
arguments are f-table numbers containing data known in the scanned synthesis world as profiles.
Profiles
Profiles refer to variables in the mass and spring equation. Newton’s model describes a string as
a finite series of marbles connected to each other with springs.
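Outside Csound, the mechanics can be sketched in a few lines of Python. This is an illustrative toy, not Csound's actual implementation; the time step, constants, and function names are assumptions:

```python
# One synchronous update step of a mass-spring "string": each point is
# pulled by springs towards its two neighbours (circular connection),
# and its velocity is scaled by a damping factor.
def step(pos, vel, mass, stiff, damp, dt=0.01):
    n = len(pos)
    new_pos, new_vel = [], []
    for i in range(n):
        left, right = pos[(i - 1) % n], pos[(i + 1) % n]
        force = stiff[i] * (left - 2 * pos[i] + right)
        v = (vel[i] + dt * force / mass[i]) * damp[i]
        new_vel.append(v)
        new_pos.append(pos[i] + dt * v)
    return new_pos, new_vel

# a small 8-point "string", plucked in the middle
pos = [0, 0, 0, 1.0, 0, 0, 0, 0]
vel = [0.0] * 8
for _ in range(10):
    pos, vel = step(pos, vel, [1.0] * 8, [100.0] * 8, [0.999] * 8)
```

After a few steps the displacement of the plucked point spreads to its neighbours, which is exactly the evolving table content that scanned synthesis turns into sound.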
In this example we will use 128 marbles in our system. To the Csound user, profiles are a series
of f-tables that set up the scantable opcode. To the opcode, these f-tables influence the dynamic
behavior of the table read by a table-lookup oscillator.
gipos ftgen 1, 0, 128, 10, 1 ;Position Initial Shape: Sine wave range -1 to 1
gimass ftgen 2, 0, 128, -7, 1, 128, 1 ;Masses: Constant value 1
gistiff ftgen 3, 0, 128, -7, 0, 64, 100, 64, 0 ;Stiffness: triangle
gidamp ftgen 4, 0, 128, -7, 1, 128, 1 ;Damping: Constant value 1
givel ftgen 5, 0, 128, -2, 0 ;Velocity: Initially constant value 0
All these tables need to be the same size; otherwise Csound will return an error.
Run the following .csd. Notice that the sound starts off sounding like our initial shape (a sine wave)
but evolves as if there are filters, distortions or LFOs.
EXAMPLE 04H01_scantable_1.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 32
0dbfs = 1
gipos ftgen 1, 0, 128, 10, 1 ;Position Initial Shape: Sine wave range -1 to 1
gimass ftgen 2, 0, 128, -7, 1, 128, 1 ;Masses: Constant value 1
gistiff ftgen 3, 0, 128, -7, 0, 64, 100, 64, 0 ;Stiffness: triangle
gidamp ftgen 4, 0, 128, -7, 1, 128, 1 ;Damping: Constant value 1
givel ftgen 5, 0, 128, -2, 0 ;Velocity: Initially constant value 0
instr 1
iamp = .2
ifrq = 440
aScan scantable iamp, ifrq, gipos, gimass, gistiff, gidamp, givel
aOut linen aScan, 1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 19
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders and joachim heintz
What happens in the scantable synthesis, is a constant change in the position (table gipos) and
the velocity (table givel) of the mass particles. Here are three snapshots of these tables in the
examples above:
402
04 H. SCANNED SYNTHESIS A Quick Scanned Synth
The audio output of scantable is the result of oscillating through the gipos table. So we will achieve
the same audible result with this code:
EXAMPLE 04H02_scantable_2.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 32
0dbfs = 1
gipos ftgen 1, 0, 128, 10, 1 ;Position Initial Shape: Sine wave range -1 to 1
gimass ftgen 2, 0, 128, -7, 1, 128, 1 ;Masses: Constant value 1
gistiff ftgen 3, 0, 128, -7, 0, 64, 100, 64, 0 ;Stiffness: triangle
gidamp ftgen 4, 0, 128, -7, 1, 128, 1 ;Damping: Constant value 1
givel ftgen 5, 0, 128, -2, 0 ;Velocity: Initially constant value 0
instr 1
iamp = .2
ifrq = 440
a0 scantable 0, 0, gipos, gimass, gistiff, gidamp, givel
aScan poscil iamp, ifrq, gipos
aOut linen aScan, 1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 19
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders and joachim heintz
Dynamic Tables
We can use the table which is dynamically changed by scantable in any context. Below is an example
of using the values of an f-table generated by scantable to modify the amplitudes of an fsig, a signal
type in Csound which represents a spectral signal.
EXAMPLE 04H03_Scantable_pvsmaska.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 32
0dbfs = 1
gipos ftgen 1, 0, 128, 7, 0, 4, 1, 4, 0, 120, 0 ;Positions: impulse-like wave form
gimass ftgen 2, 0, 128, -5, 0.0001, 128, 0.1 ;Masses: exponential rise 1/10000 to 1/10
gistiff ftgen 3, 0, 128, -7, 0, 64, 100, 64, 0 ;Stiffness: triangle
gidamp ftgen 4, 0, 128, -7, 1, 128, 1 ;Damping: Constant value 1
givel ftgen 5, 0, 128, -2, 0 ;Velocity: Initially constant value 0
gisin ftgen 6, 0, 8192, 10, 1 ;sine wave for the buzz opcode
instr 1
iamp = .2
kfrq = 110
aBuzz buzz iamp, kfrq, 32, gisin
aBuzz linen aBuzz, .1, p3, 1
out aBuzz, aBuzz
endin
instr 2
iamp = .4
kfrq = 110
a0 scantable 0, 0, gipos, gimass, gistiff, gidamp, givel
ifftsize = 128
ioverlap = ifftsize / 4
iwinsize = ifftsize
iwinshape = 1; von-Hann window
aBuzz buzz iamp, kfrq, 32, gisin
fBuzz pvsanal aBuzz, ifftsize, ioverlap, iwinsize, iwinshape ;fft
fMask pvsmaska fBuzz, gipos, 1
aOut pvsynth fMask; resynthesize
aOut linen aOut, .1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 2 4 20
</CsScore>
</CsoundSynthesizer>
;Example by Christopher Saunders and joachim heintz
In this .csd, the score plays instrument 1, a normal buzz sound, and then the score plays instrument
2 — the same buzz sound re-synthesized with amplitudes of each of the 128 frequency bands,
controlled by a dynamic function table which is generated by scantable. Compared to the first
example, two tables have been changed. The initial positions are an impulse-like wave form, and
the masses are between 1/10000 and 1/10 in exponential rise.
A More Flexible Scanned Synth
The opcodes scans and scanu by Paris Smaragdis give the Csound user one of the most robust
and flexible scanned synthesis environments. These opcodes work in tandem to first set up the
dynamic wavetable, and then to scan the dynamic table in ways a table-lookup oscillator cannot.
For a detailed description of what each argument does, see the Csound Reference Manual; I will
discuss the various types of arguments in the opcode.
The first set of arguments - ipos, ifnvel, ifnmass, ifnstiff, ifncenter, and ifndamp - are f-tables de-
scribing the profiles, similar to the profile arguments for scantable. Like for scantable, the same
size is required for each of these tables.
An exception to this size requirement is the ifnstiff table. This table is the size of the other profiles
squared. If the other f-tables are size 128, then ifnstiff should be of size 16384 (or 128*128). To
discuss what this table does, I must first introduce the concept of a scanned matrix.
Going back to our discussion on Newton’s mechanical model, the mass and spring model de-
scribes the behavior of a string as a finite number of masses connected by springs. As you can
imagine, the masses are connected sequentially, one to another, like beads on a string. Mass
#1 is connected to #2, #2 connected to #3 and so on. However, the pioneers of scanned syn-
thesis had the idea to connect the masses in a non-linear way. It’s hard to imagine, because as
musicians, we have experience with piano or violin strings (one dimensional strings), but not with
multi-dimensional strings. Fortunately, the computer has no problem working with this idea, and
the flexibility of Newton’s equation allows us to use the CPU to model mass #1 being connected
with springs not only to #2 but also to #3 and any other mass in the model.
The most direct and useful implementation of this concept is to connect mass #1 to mass #2 and
mass #128 – forming a string without endpoints, a circular string, like tying our string with beads to
make a necklace. The pioneers of scanned synthesis discovered that this circular string model is
more useful than a conventional one-dimensional string model with endpoints. In fact, scantable
uses a circular string.
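As a sketch of the idea (the 0/1 row-major layout below is an assumption for illustration; the file read by GEN23 is simply a list of numbers), a circular-string connection matrix could be generated like this:

```python
# Illustrative generator for a circular-string connection matrix:
# each of the n masses is connected to its two neighbours, and the
# first and last masses are connected to each other (the "necklace").
def circular_matrix(n):
    rows = []
    for i in range(n):
        row = ["0"] * n
        row[(i - 1) % n] = "1"  # left neighbour (wraps around)
        row[(i + 1) % n] = "1"  # right neighbour (wraps around)
        rows.append(" ".join(row))
    return "\n".join(rows)

matrix_text = circular_matrix(128)  # could be written to a file for GEN23
```

Connecting further entries of each row turns the necklace into a multi-dimensional string, which is where the more exotic matrices come from.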
The matrix is described in a simple ASCII file, imported into Csound via a GEN23 generated f-table.
f3 0 16384 -23 "string-128"
This text file must be located in the same directory as your .csd, or Csound will give you this error:
ftable 3: error opening ASCII file
You can construct your own matrix using Steven Yi's Scanned Matrix editor included in the Blue
frontend for Csound.
To swap out matrices, simply type the name of a different matrix file into the double quotes, i.e.:
f3 0 16384 -23 "circularstring_2-128"
Different matrices have unique effects on the behavior of the system. Some matrices can make
the synth extremely loud, others extremely quiet. Experiment with using different matrices.
Now would be a good time to point out that Csound has other scanned synthesis opcodes, prefixed
with an x (xscans, xscanu), that use a different matrix format than the one used by scans, scanu,
and Steven Yi's Scanned Matrix Editor. The Csound Reference Manual has more information on
this.
The Hammer
The initial shape, an f-table specified by the ipos argument, determines the contents of our dynamic
table at the start of the note. But what if we want to "reset" or "pluck" the table, perhaps with the
shape of a square wave instead of a sine wave, while the instrument is playing?
With scantable, there is an easy way to do this: send a score event changing the contents of the
dynamic f-table. You can do this with the Csound score by adjusting the start time of the f-events
in the score.
EXAMPLE 04H04_Hammer.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
ipos ftgen 1, 0, 128, 10, 1 ; Initial Shape, sine
imass ftgen 2, 0, 128, -7, 1, 128, 1 ;Masses(adj.), constant value 1
istiff ftgen 3, 0, 128, -7, 0, 64, 100, 64, 0 ;Stiffness triangle
idamp ftgen 4, 0, 128, -7, 1, 128, 1; ;Damping; constant value 1
ivel ftgen 5, 0, 128, -7, 0, 128, 0 ;Initial Velocity 0
iamp = 0.2
a1 scantable iamp, 60, ipos, imass, istiff, idamp, ivel
outs a1, a1
endin
</CsInstruments>
<CsScore>
i 1 0 14
f 1 1 128 10 1 1 1 1 1 1 1 1 1 1 1
f 1 2 128 10 1 1 0 0 0 0 0 0 0 1 1
f 1 3 128 10 1 1 1 1 1
f 1 4 128 10 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
f 1 5 128 10 1 1
f 1 6 128 13 1 1 0 0 0 -.1 0 .3 0 -.5 0 .7 0 -.9 0 1 0 -1 0
f 1 7 128 21 6 5.745
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders
With each of these f-events the sound changes abruptly while the note is playing, which means this method of hammering the string is working. In fact you could use this method
to explore and hammer every possible GEN routine in Csound. GEN10 (sines), GEN 21 (noise) and
GEN 27 (breakpoint functions) could keep you occupied for a while.
Unipolar waves have a different sound but a loss in volume can occur. There is a way to do this
with scanu, but I do not use this feature and just use these values instead:
ileft = 0
iright = 1
kpos = 0
kstrngth = 0
More on Profiles
One of the biggest challenges in understanding scanned synthesis is the concept of profiles.
Setting up the opcode scanu requires 3 profiles - Centering, Mass and Damping. The pioneers of
scanned synthesis discovered early on that the resultant timbre is far more interesting if marble
#1 had a different centering force than mass #64.
The farther our model gets away from a physical real-world string that we know and pluck on
our guitars and pianos, the more interesting the sounds for synthesis. Therefore, instead of one
mass, damping, and centering value for all 128 of the marbles, each marble can have its own
conditions. How the centering, mass, and damping profiles make the system behave is up to the
user to discover through experimentation (more on how to experiment safely later in this chapter).
Control Rate Profile Scalars
Scanu gives us 4 k-rate arguments kmass, kstif, kcentr, kdamp, to scale these forces. One could
scale mass to volume, or have an envelope controlling centering.
Caution! These parameters can make the scanned system unstable in ways that could make
extremely loud sounds come out of your computer. It is best to experiment with small changes
in range and keep your headphones off. A good place to start experimenting is with different
values for kcentr while keeping kmass, kstiff, and kdamp constant. You could also scale mass and
stiffness to MIDI velocity.
Audio Injection
Instead of using the hammer method to move the marbles around, we could use audio to add
motion to the mass and spring model. Scanu lets us do this with a simple audio rate argument.
Be careful with the amplitude again.
Connecting to Scans
The argument id is an arbitrary integer label that tells the scans opcode which scanu to read. By
making the value of id negative, the arbitrary numerical label becomes the number of an f-table
that can be used by any other opcode in Csound, like we did with scantable earlier in this chapter.
We could then use poscil to perform a table lookup algorithm to make sound out of scanu (as long
as id is negative), but scanu has a companion opcode, scans, which has one more argument than
a plain table-lookup oscillator. This argument is the number of an f-table containing the scan trajectory.
Scan Trajectories
One thing we have taken for granted so far with poscil is that the wave table is read front to back.
If you regard poscil as a phasor and table pair, the first index of the table is always read first and
the last index is always read last as in the example below:
EXAMPLE 04H05_Scan_trajectories.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
andx phasor 440
a1 table andx*8192, 1
outs a1*.2, a1*.2
endin
</CsInstruments>
<CsScore>
f1 0 8192 10 1
i 1 0 4
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders
But what if we wanted to read the table indices back to front, or even “out of order”? Well we could
do something like this:
EXAMPLE 04H06_Scan_trajectories2.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr=44100
ksmps=32
nchnls=2
0dbfs=1
instr 1
andx phasor 440
andx table andx*8192, 2 ; read the table out of order!
aOut table andx*8192, 1
outs aOut*.2, aOut*.2
endin
</CsInstruments>
<CsScore>
f1 0 8192 10 1
f2 0 8192 -5 .001 8192 1;
i 1 0 4
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders
We are still dealing with 1-dimensional arrays, or f-tables as we know them. But if we remember
back to our conversation about the scanned matrix, matrices are multi-dimensional.
The opcode scans gives us the flexibility of specifying a scan trajectory, analogous to telling the
phasor/table combination to read values non-consecutively. We could read these values, not left
to right, but in a spiral order, by specifying a table as the ifntraj argument of scans:
a3 scans iamp, kpch, ifntraj, id [, iorder]
An f-table for the spiral method can be generated by reading the ASCII file spiral-8,16,128,2,1over2
with GEN23:
f2 0 128 -23 "spiral-8,16,128,2,1over2"
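Scan trajectories are just permutations of the table indices. As a sketch (the one-index-per-entry list format is assumed for illustration), two simple trajectories for a 128-point table could be computed like this:

```python
# Two hand-made scan trajectories for a 128-point table: straight
# back-to-front reading, and an order that alternates between the two
# ends of the table, meeting in the middle.
n = 128
back_to_front = list(range(n - 1, -1, -1))

ends_to_middle = []
for i in range(n // 2):
    ends_to_middle += [i, n - 1 - i]
```

Each trajectory visits every index exactly once; only the order, and hence the waveform the oscillator traces out, changes.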
The following .csd requires that the files circularstring-128 and spiral-8,16,128,2,1over2 be located
in the same directory as the .csd.
EXAMPLE 04H07_Scan_matrices.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 32
0dbfs = 1
instr 1
ipos ftgen 1, 0, 128, 10, 1
irate = .005
ifnvel ftgen 6, 0, 128, -7, 0, 128, 0
ifnmass ftgen 2, 0, 128, -7, 1, 128, 1
ifnstif ftgen 3, 0, 16384,-23,"circularstring-128"
ifncentr ftgen 4, 0, 128, -7, 0, 128, 2
ifndamp ftgen 5, 0, 128, -7, 1, 128, 1
imass = 2
istif = 1.1
icentr = .1
idamp = -0.01
ileft = 0.
iright = .5
ipos = 0.
istrngth = 0.
ain = 0
idisp = 0
id = 8
scanu 1, irate, ifnvel, ifnmass, ifnstif, ifncentr, ifndamp, imass, istif, icentr, idamp, ileft, iright, ipos, istrngth, ain, idisp, id
iamp = .2
ifreq = 200
a1 scans iamp, ifreq, 7, id
outs a1, a1
endin
</CsInstruments>
<CsScore>
f7 0 128 -7 0 128 128
i 1 0 5
f7 5 128 -23 "spiral-8,16,128,2,1over2"
i 1 5 5
f7 10 128 -7 127 64 0 64 127
i 1 10 5
</CsScore>
</CsoundSynthesizer>
;example by Christopher Saunders
Notice that the scan trajectory has an FM-like effect on the sound. Three different f7 trajectory
tables are started in the score: a rising linear ramp, the spiral read from the ASCII file, and a
falling-rising shape.
Table Size and Interpolation
One can use larger or smaller tables, but their sizes must agree in the way described above (the
stiffness table being the square of the profile table size) or Csound will give you an error. Larger
tables, of course, significantly increase CPU usage and slow down real-time performance.
When using smaller size tables it may be necessary to use interpolation to avoid the artifacts of a
small table. scans gives us this option as a fifth optional argument, iorder, detailed in the reference
manual and worth experimenting with.
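The effect of interpolation on a small table can be sketched numerically (illustrative Python, not the scans implementation):

```python
# Raw (truncating) vs. linearly interpolating table reads. With a tiny
# two-point table, the raw read produces a stair-step, while the
# interpolated read fills in intermediate values.
def table_raw(ndx, tab):
    return tab[int(ndx) % len(tab)]

def table_interp(ndx, tab):
    i = int(ndx) % len(tab)
    frac = ndx - int(ndx)
    return tab[i] + frac * (tab[(i + 1) % len(tab)] - tab[i])

tab = [0.0, 1.0]
raw_mid = table_raw(0.5, tab)        # truncates to index 0
smooth_mid = table_interp(0.5, tab)  # halfway between the two samples
```

With higher interpolation orders the same idea is extended over more neighbouring points, which is what the iorder argument selects.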
Using the opcodes scanu and scans requires that we fill in 22 arguments and create at least 7 f-
tables, including at least one external ASCII file (because no one wants to fill in 16,384 arguments
to an f-statement). This is a very challenging pair of opcodes. The beauty of scanned synthesis is
that there is no scanned synthesis "sound".
Using Balance to Tame Amplitudes
Warning: the following .csd is hot; it produces massively loud amplitude values. Be very cautious
about rendering this .csd, and consider rendering to a file instead of real-time. Only uncomment
the out line when you know what you are doing!
EXAMPLE 04H08_Scan_extreme_amplitude.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 32
0dbfs = 1
;NOTE THIS CSD WILL NOT RUN UNLESS
;IT IS IN THE SAME FOLDER AS THE FILE "STRING-128"
instr 1
ipos ftgen 1, 0, 128 , 10, 1
irate = .007
ifnvel ftgen 6, 0, 128 , -7, 0, 128, 0.1
ifnmass ftgen 2, 0, 128 , -7, 1, 128, 1
ifnstif ftgen 3, 0, 16384, -23, "string-128"
ifncentr ftgen 4, 0, 128 , -7, 1, 128, 2
ifndamp ftgen 5, 0, 128 , -7, 1, 128, 1
kmass = 1
kstif = 0.1
kcentr = .01
kdamp = .1
ileft = 0
iright = 1
kpos = 0
kstrngth = 0.
ain = 0
idisp = 1
id = 22
scanu ipos, irate, ifnvel, ifnmass, \
ifnstif, ifncentr, ifndamp, kmass, \
kstif, kcentr, kdamp, ileft, iright,\
kpos, kstrngth, ain, idisp, id
kamp = 0dbfs*.2
kfreq = 200
ifn ftgen 7, 0, 128, -5, .001, 128, 128.
a1 scans kamp, kfreq, ifn, id
a1 dcblock2 a1
iatt = .005
idec = 1
islev = 1
irel = 2
aenv adsr iatt, idec, islev, irel
;out a1*aenv, a1*aenv ;uncomment only when you know what you are doing!
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
The extreme volume of this .csd comes from a value given to scanu:
kdamp = .1
0.1 is not exactly a safe value for this argument; in fact, any value above 0 for this argument can
cause chaos. It would take a skilled mathematician to map out safe possible ranges for all the
arguments of scanu. I figured out these values through a mix of trial and error and studying other
.csd files.
We can use the opcode balance to listen to a sine wave (a signal with consistent, safe amplitude)
and squash down our extremely loud scanned synth output (which is loud only because of our
intentional carelessness).
EXAMPLE 04H09_Scan_balanced_amplitudes.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
nchnls = 2
sr = 44100
ksmps = 256
0dbfs = 1
;NOTE THIS CSD WILL NOT RUN UNLESS
;IT IS IN THE SAME FOLDER AS THE FILE "STRING-128"
instr 1
ipos ftgen 1, 0, 128 , 10, 1
irate = .007
ifnvel ftgen 6, 0, 128 , -7, 0, 128, 0.1
ifnmass ftgen 2, 0, 128 , -7, 1, 128, 1
ifnstif ftgen 3, 0, 16384, -23, "string-128"
ifncentr ftgen 4, 0, 128 , -7, 1, 128, 2
ifndamp ftgen 5, 0, 128 , -7, 1, 128, 1
kmass = 1
kstif = 0.1
kcentr = .01
kdamp = -0.01
ileft = 0
iright = 1
kpos = 0
kstrngth = 0.
ain = 0
idisp = 1
id = 22
scanu ipos, irate, ifnvel, ifnmass, \
ifnstif, ifncentr, ifndamp, kmass, \
kstif, kcentr, kdamp, ileft, iright, \
kpos, kstrngth, ain, idisp, id
kamp = 0dbfs*.2
kfreq = 200
ifn ftgen 7, 0, 128, -5, .001, 128, 128.
aScan scans kamp, kfreq, ifn, id
aSine poscil kamp, kfreq
aOut balance aScan, aSine
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
It must be emphasized that this is merely a safeguard. We still get samples out of range when we
run this .csd, but many fewer than if we had not used balance. It is recommended to use balance
if you are doing real-time mapping of the k-rate profile scalar arguments of scanu: mass, stiffness,
damping, and centering.
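Conceptually, balance rescales one signal so that its RMS power matches that of a comparator signal. A rough per-block sketch in Python (illustrative only; the opcode actually tracks power with a low-pass filter rather than per-block averages):

```python
import math

# Rescale a block of samples so its RMS matches that of a comparator.
def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

def balance_block(asig, acomp):
    r = rms(asig)
    if r == 0:
        return asig
    gain = rms(acomp) / r
    return [x * gain for x in asig]

runaway = [10.0, -10.0, 10.0, -10.0]   # dangerously loud scanned output
reference = [0.2, -0.2, 0.2, -0.2]     # safe comparator, e.g. a sine
tamed = balance_block(runaway, reference)
```

However loud the scanned system gets, the output power is pulled back to the level of the comparator, which is why a plain sine oscillator makes a good reference.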
05 A. ENVELOPES
Envelopes are used to define how a value evolves over time. In early synthesisers, envelopes were
used to define the changes in amplitude of a sound across its duration, thereby imbuing sounds
with characteristics such as percussive or sustaining. Envelopes are also commonly used to modulate
filter cutoff frequencies and the frequencies of oscillators, but in reality we are only limited by our
imaginations in regard to what they can be used for.
Csound offers a wide array of opcodes for generating envelopes including ones which emulate
the classic ADSR (attack-decay-sustain-release) envelopes found on hardware and commercial
software synthesizers. A selection of these opcode types shall be introduced here.
line
The simplest opcode for defining an envelope is line. It describes a single envelope segment as a
straight line between a start value ia and an end value ib which has a given duration idur.
ares line ia, idur, ib
kres line ia, idur, ib
In the following example line is used to create a simple envelope which is then used as the ampli-
tude control of a poscil oscillator. This envelope starts with a value of 0.5 then over the course of
2 seconds descends in linear fashion to zero.
EXAMPLE 05A01_line.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aEnv line 0.5, 2, 0 ; amplitude envelope
aSig poscil aEnv, 500 ; audio oscillator
out aSig, aSig ; audio sent to output
endin
</CsInstruments>
<CsScore>
i 1 0 2 ; instrument 1 plays a note for 2 seconds
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The envelope in the above example assumes that all notes played by this instrument will be 2
seconds long. In practice it is often beneficial to relate the duration of the envelope to the duration
of the note (p3) in some way. In the next example the duration of the envelope is replaced with
the value of p3 retrieved from the score, whatever that may be. The envelope will be stretched or
contracted accordingly.
EXAMPLE 05A02_line_p3.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; A single segment envelope. Time value defined by note duration.
aEnv line 0.5, p3, 0
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
; p1 p2 p3
i 1 0 1
i 1 2 0.2
i 1 3 4
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
It may not be disastrous if an envelope's duration does not match p3 and indeed there are many
occasions when we want an envelope duration to be independent of p3 but we need to remain
aware that if p3 is shorter than an envelope’s duration then that envelope will be truncated before
it is allowed to complete and if p3 is longer than an envelope’s duration then the envelope will
complete before the note ends (the consequences of this latter situation will be looked at in more
detail later on in this section).
line (and most of Csound’s envelope generators) can output either k or a-rate variables. k-rate
envelopes are computationally cheaper than a-rate envelopes but in envelopes with fast moving
segments quantisation can occur if they output a k-rate variable, particularly when the control rate
is low, which in the case of amplitude envelopes can lead to clicking artefacts or distortion.
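The quantisation can be illustrated numerically. In this sketch (Python, with the assumed values sr=44100 and ksmps=32), a 1 ms fade rendered at k-rate contains only two distinct values, where the a-rate version has one per sample:

```python
sr, ksmps = 44100, 32
dur = 0.001                    # a very fast 1 ms segment
nsamps = int(sr * dur)         # 44 samples

# a-rate: a new envelope value on every sample
a_rate = [1 - i / nsamps for i in range(nsamps)]
# k-rate: the value is only updated once per control block of ksmps samples
k_rate = [1 - (i // ksmps) * ksmps / nsamps for i in range(nsamps)]

a_steps = len(set(a_rate))     # smooth ramp: one value per sample
k_steps = len(set(k_rate))     # stair-step: only a couple of values
```

The jump between the few k-rate steps is what produces the audible click; raising the control rate (smaller ksmps) or using an a-rate envelope removes it.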
linseg
linseg is an elaboration of line and allows us to add an arbitrary number of segments by adding
further pairs of time durations followed by envelope values. Provided we always end with a value
and not a duration, we can make this envelope as long as we like.
ares linseg ia, idur1, ib [, idur2] [, ic] [...]
kres linseg ia, idur1, ib [, idur2] [, ic] [...]
In the next example a more complex amplitude envelope is employed by using the linseg opcode.
This envelope is also note duration (p3) dependent but in a more elaborate way. An attack-decay
stage is defined using explicitly declared time durations. A release stage is also defined with an
explicitly declared duration. The sustain stage is the p3 dependent stage but to ensure that the
duration of the entire envelope still adds up to p3, the explicitly defined durations of the attack,
decay and release stages are subtracted from the p3 dependent sustain stage duration. For this
envelope to function correctly it is important that p3 is not less than the sum of all explicitly defined
envelope segment durations. If necessary, additional code could be employed to circumvent this
from happening.
EXAMPLE 05A03_linseg.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; a more complex amplitude envelope:
; |-attack-|-decay--|---sustain---|-release-|
aEnv linseg 0, 0.01, 1, 0.1, 0.1, p3-0.21, 0.1, 0.1, 0
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 1 2 5
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
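The subtraction logic above, together with a simple guard against too-short notes, can be sketched like this (Python for illustration; the scaling strategy in the guard is just one possible choice):

```python
# Compute the p3-dependent sustain duration for an
# attack-decay-sustain-release linseg envelope. If the note is shorter
# than the fixed segments, scale the fixed segments down to fit.
def segment_durations(p3, attack=0.01, decay=0.1, release=0.1):
    fixed = attack + decay + release
    if p3 <= fixed:
        scale = p3 / fixed
        return attack * scale, decay * scale, 0.0, release * scale
    return attack, decay, p3 - fixed, release

a, d, s, r = segment_durations(1.0)   # sustain = 1.0 - 0.21 = 0.79
```

Whatever p3 arrives from the score, the four durations always sum to p3, so the envelope is never truncated mid-segment.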
The next example illustrates an approach that can be taken whenever it is required that more than
one envelope segment duration be p3 dependent. This time each segment is a fraction of p3. The
sum of all segments still adds up to p3 so the envelope will complete across the duration of each
note regardless of duration.
EXAMPLE 05A04_linseg_p3_fractions.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aEnv linseg 0, p3*0.5, .2, p3*0.5, 0 ; rising then falling envelope
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
; 3 notes of different durations are played
i 1 0 1
i 1 2 0.1
i 1 3 5
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Different behaviour in linear continuation
When a note continues beyond the end of the final value of a linseg defined envelope the final value
of that envelope is held. A line defined envelope behaves differently in that instead of holding its
final value it continues in the trajectory defined by its one and only segment.
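The two continuation behaviours can be stated as formulas. In this sketch (Python, single-segment case with illustrative values), line keeps extrapolating while linseg clamps to its final value:

```python
def line_env(t, ia, idur, ib):
    # line keeps following its one trajectory past idur
    return ia + (ib - ia) * t / idur

def linseg_env(t, ia, idur, ib):
    # linseg holds its final value once idur has passed
    if t >= idur:
        return ib
    return ia + (ib - ia) * t / idur

# an envelope from 0.2 to 0 over 2 seconds, read at the end of a
# 4-second note:
held = linseg_env(4, 0.2, 2, 0)        # stays at 0
extrapolated = line_env(4, 0.2, 2, 0)  # continues down to -0.2
```

Within the segment both functions agree; only beyond idur do they diverge.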
This difference is illustrated in the following example. The linseg and line envelopes of instruments
1 and 2 appear to be the same but the difference in their behaviour as described above when they
continue beyond the end of their final segment is clear. The linseg envelope stays at zero, whilst
the line envelope continues through zero into the negative range, thus ending at -0.2. (Negative
values for the envelope have the same loudness; only the phase of the signal is inverted.)
EXAMPLE 05A05_line_vs_linseg.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1 ; linseg envelope
aEnv linseg 0.2, 2, 0 ; holds zero after two seconds
aSig poscil aEnv, 500
out aSig, aSig
endin
instr 2 ; line envelope
aEnv line 0.2, 2, 0 ; continues into the negative range
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 4 ; linseg envelope
i 2 5 4 ; line envelope
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy and joachim heintz
expon and expseg
expon and expseg are the exponential counterparts of line and linseg: they move between their
values along exponential curves instead of straight lines.
The following example illustrates the difference between line and expon when applied as amplitude
envelopes.
EXAMPLE 05A06_line_vs_expon.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1 ; line envelope
aEnv line 1, p3, 0.0001 ; linear fade to (near) zero
aSig poscil aEnv, 500
out aSig, aSig
endin
instr 2 ; expon envelope
aEnv expon 1, p3, 0.0001 ; exponential fade; cannot reach zero
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 2 ; line envelope
i 2 2 2 ; expon envelope
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The nearer our near-zero values are to zero the quicker the curve will appear to reach zero. In the
next example smaller and smaller envelope end values are passed to the expon opcode using p4
values in the score. The percussive ping sounds are perceived to be increasingly short.
EXAMPLE 05A07_expon_pings.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aEnv expon .2, p3, p4 ; p4 = near-zero end value
aSig poscil aEnv, 500
out aSig, aSig
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4
i 1 0 1 0.001
i 1 1 1 0.000001
i 1 2 1 0.000000000000001
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
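The perceived shortening can be quantified: the time an expon segment needs to fall below a fixed audibility threshold shrinks as the end value shrinks. A sketch (Python; the 0.01 threshold is an assumption for illustration):

```python
import math

def expon_env(t, ia, idur, ib):
    # exponential interpolation from ia to ib over idur seconds
    return ia * (ib / ia) ** (t / idur)

def time_below(threshold, ia, idur, ib):
    # solve ia * (ib/ia)**(t/idur) == threshold for t
    return idur * math.log(threshold / ia) / math.log(ib / ia)

# with an end value of 0.001, the envelope spends 2/3 of the note
# above a 0.01 threshold; with 0.000001, only 1/3:
t_long = time_below(0.01, 1, 1, 0.001)
t_short = time_below(0.01, 1, 1, 0.000001)
```

Even though every note lasts one second, the audible portion halves as the end value moves from 0.001 to 0.000001, which matches the increasingly short pings we hear.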
Note that expseg does not behave like linseg in that it will not hold its final value if p3 exceeds the
envelope's entire duration; instead it continues its curving trajectory in a manner similar to line (and
expon). This could have dangerous results if used as an amplitude envelope.
Envelopes with release segment
For envelopes that should respond to a note being released, Csound offers release-sensing variants
such as linsegr and expsegr, which append a final release segment that is only performed when the
note is released or turned off.
The typical situation for using one of these opcodes is when using a midi keyboard. When a key is
released, a fade-out must be applied to avoid clicks. Another typical situation is shown in the next
example: a running instrument is turned off by another instrument. Similar to the midi keyboard
usage, in this case the instrument must add a release segment to avoid clicks. The example
shows both: how it sounds without release, and how it sounds with release. This is done via the
last parameter of the turnoff2 opcode. If krelease is set to zero, it will not allow the instrument
it terminates to perform its release segment. This results in audible clicks on the first run. The
second run (after creating a new cluster of random notes in the middle range) instead allows the
release segment (by setting krelease to 1), so each of the notes gets a soft fade-out. (This example
can be changed to be used via midi keyboard; follow the comments in the code in this case.)
EXAMPLE 05A08_linsegr.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
seed 0
instr Generate
//create and start 10 instances of instr Play
indx = 0
while indx < 10 do
schedule "Play", 0, 15
indx += 1
od
endin
instr Play
iMidiNote random 60, 72
//use the following line instead when using midi input
;iMidiNote notnum
; attack-|sustain-|-release
aEnv linsegr 0, 0.01, 0.1, 0.5, 0 ; envelope that senses note releases
aSig poscil aEnv, mtof:i(iMidiNote); audio oscillator
out aSig, aSig ; audio sent to output
endin
instr TurnOff_noRelease
//turn off the ten instances from instr Play starting from the oldest one
//and do not allow the release segment to be performed (result: clicks)
kTrigFreq init 1
kTrig metro kTrigFreq
if kTrig == 1 then
kRelease = 0 ;no release allowed
turnoff2 "Play", 1, kRelease
endif
endin
instr TurnOff_withRelease
//turn off the ten instances from instr Play starting from the oldest one
//and do allow the release segment to be performed (no clicks any more)
kTrigFreq init 1
kTrig metro kTrigFreq
if kTrig == 1 then
kRelease = 1 ;release allowed
turnoff2 "Play", 1, kRelease
endif
endin
</CsInstruments>
<CsScore>
//for real-time midi input, comment out all score lines
i "Generate" 0 0
i "TurnOff_noRelease" 1 11
i "Generate" 15 0
i "TurnOff_withRelease" 16 11
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Envelopes in Function Tables
The following example generates an amplitude envelope which uses the shape of the first half of
a sine wave.
EXAMPLE 05A09_sine_env.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giEnv ftgen 0, 0, 4096, 9, 0.5, 1, 0 ; first half of a sine wave as envelope shape
instr 1
; read the envelope once during the note's duration:
aEnv poscil .5, 1/p3, giEnv
aSig poscil aEnv, 500 ; audio oscillator
out aSig, aSig ; audio sent to output
endin
</CsInstruments>
<CsScore>
; 7 notes, increasingly short
i 1 0 2
i 1 2 1
i 1 3 0.5
i 1 4 0.25
i 1 5 0.125
i 1 6 0.0625
i 1 7 0.03125
e 7.1
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Comparison of the Standard Envelope Opcodes
If we consider a basic envelope that ramps up across ¼ of the duration of a note, then sustains
for ½ of the duration of the note, and finally ramps down across the remaining ¼ of the note's duration, we
can implement this envelope using linseg thus:
kEnv linseg 0, p3/4, 0.9, p3/2, 0.9, p3/4, 0
When employed as an amplitude control, the resulting sound may seem to build rather too quickly,
then crescendo in a slightly mechanical fashion and finally arrive at its sustain portion with an abrupt
halt to the crescendo. Similar criticism could be levelled at the latter part of the envelope going
from sustain to ramping down.
The expseg opcode, introduced sometime after linseg, attempted to address the issue of dynamic
response when mapping an envelope to amplitude. Two caveats exist in regard to the use of
expseg: firstly a single expseg definition cannot cross from the positive domain to the negative
domain (and vice versa), and secondly it cannot reach zero. This second caveat means that an
amplitude envelope created using expseg cannot express silence unless we remove the offset
away from zero that the envelope employs. An envelope with similar input values to the linseg
envelope above but created with expseg could use the following code:
kEnv expseg 0.001, p3/4, 0.901, p3/2, 0.901, p3/4, 0.001
kEnv = kEnv - 0.001
In this example the offset above zero has been removed. This time we can see that the sound will
build in a rather more natural and expressive way, however the change from crescendo to sustain
is even more abrupt this time. Adding some lowpass filtering to the envelope signal can smooth
these abrupt changes in direction. This could be done with, for example, the port opcode given a
half-time value of 0.05.
kEnv port kEnv, 0.05
The changes to and from the sustain portion have clearly been improved but close examination
of the end of the envelope reveals that the use of port has prevented the envelope from reaching
zero. Extending the duration of the note or overlaying a second anti-click envelope should obviate
this issue.
xtratim 0.1
extends the performance time of the note by 0.1 seconds; multiplying the previously created envelope with an additional release envelope (created, for example, with linsegr) will then provide a quick ramp down at the note conclusion.
A more recently introduced alternative is the cosseg opcode which applies a cosine transfer func-
tion to each segment of the envelope. Using the following code:
kEnv cosseg 0, p3/4, 0.9, p3/2, 0.9, p3/4, 0
It can be observed that this envelope provides a smooth gradual building up from silence and a
gradual arrival at the sustain level. This opcode has no restrictions relating to changing polarity or
passing through zero.
Another alternative that offers enhanced user control and that might in many situations provide
more natural results is the transeg opcode. transeg allows us to specify the curvature of each
segment but it should be noted that the curvature is dependent upon whether the segment is rising
or falling. For example a positive curvature will result in a concave segment in a rising segment
but a convex segment in a falling segment. The following code:
kEnv transeg 0, p3/4, -4, 0.9, p3/2, 0, 0.9, p3/4, -4, 0
This looks perhaps rather lopsided but in emulating acoustic instruments can actually produce
more natural results. Considering an instrument such as a clarinet, it is in reality very difficult to
fade a note in smoothly from silence. It is more likely that a note will start slightly abruptly in
spite of the player’s efforts. This aspect is well represented by the attack portion of the envelope
above. When the note is stopped, its amplitude will decay quickly and exponentially as reflected
in the envelope also. Similar attack and release characteristics can be observed in the slight pitch
envelopes expressed by wind instruments.
lpshold, loopseg and looptseg - A Csound TB303
These opcodes generate envelopes which are looped at a rate corresponding to a defined frequency. What they each do could also be accomplished using the envelope-from-table technique
outlined in an earlier example but these opcodes provide the added convenience of encapsulating
all the required code in one line without the need for phasors, tables and ftgens. Furthermore all
of the input arguments for these opcodes can be modulated at k-rate.
lpshold generates an envelope in which each break point is held constant until a new break point
is encountered. The resulting envelope will contain horizontal line segments. In our example this
opcode will be used to generate the notes (as MIDI note numbers) for a looping bassline in the
fashion of a Roland TB303. Because the duration of the entire envelope is wholly dependent upon
the frequency with which the envelope repeats - in fact it is the reciprocal of the frequency - values
for the durations of individual envelope segments do not define times in seconds but instead
represent proportions of the entire envelope duration. The values given for all these segments do
not need to add up to any specific value as Csound rescales the proportionality according to the
sum of all segment durations. You might find it convenient to contrive to have them all add up to
1, or to 100 – either is equally valid. The other looping envelope opcodes discussed here use the
same method for defining segment durations.
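As a minimal sketch of this principle (not part of the TB303 example; the note numbers and the equal segment proportions are chosen arbitrarily), a looping four-note pattern repeating once per second could be written like this:

```csound
instr LoopSketch
  ;hold each of four MIDI note numbers for 1/4 of the one-second loop
  kNote lpshold 1, 0, 0, 48,1, 55,1, 58,1, 60,1
  aSig poscil 0.2, mtof:k(kNote)
  out aSig, aSig
endin
```

Since the four durations are equal, writing 1,1,1,1 or 25,25,25,25 would give the same result.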
loopseg allows us to define a looping envelope with linear segments. In this example it is used
to define the amplitude envelope for each individual note. Take note that whereas the lpshold
envelope used to define the pitches of the melody repeats once per phrase, the amplitude envelope
repeats once for each note of the melody, therefore its frequency is 16 times that of the melody
envelope (there are 16 notes in our melodic phrase).
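Such an amplitude envelope could be sketched as follows; kPhraseFreq is an assumed variable here holding the repetition rate of the melody:

```csound
;fast linear attack (5% of the note), long linear decay (95%),
;repeating 16 times per phrase
kAmpEnv loopseg kPhraseFreq*16, 0, 0, 0,0.05, 1,0.95, 0
```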
looptseg is an elaboration of loopseg in that it allows us to define the shape of each segment
individually, whether that be convex, linear or concave. This aspect is defined using the type parameter. A type value of 0 denotes a linear segment, a positive value denotes a convex segment
with higher positive values resulting in increasingly convex curves. Negative values denote concave segments with increasingly negative values resulting in increasingly concave curves. In this
example looptseg is used to define a filter envelope which, like the amplitude envelope, repeats for
every note. The addition of the type parameter allows us to modulate the sharpness of the decay
of the filter envelope. This is a crucial element of the TB303 design.
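A filter envelope of this kind might be sketched with looptseg as below; kNoteFreq and kSharp are assumed variables for the note repetition rate and the decay type (a negative value gives a concave, i.e. sharper, decay):

```csound
;decay from 1 to 0 over each note, with modulable curvature
kFiltEnv looptseg kNoteFreq, 0, 0, 1, kSharp, 1, 0
```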
Other crucial features of this instrument, such as note on/off and hold for each step, are also
implemented using lpshold.
A number of the input parameters of this example are modulated automatically using the randomi
opcode in order to keep it interesting. It is suggested that these modulations could be replaced by
linkages to other controls such as CsoundQt/Cabbage/Blue widgets, FLTK widgets or MIDI con-
trollers. Suggested ranges for each of these values are given in the .csd.
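As a sketch of such an automatic modulation (kDecay is a hypothetical parameter name here):

```csound
;glide to a new random value between 0.1 and 0.9 twice per second
kDecay randomi 0.1, 0.9, 2
```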
EXAMPLE 05A10_lpshold_loopseg.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 4
nchnls = 2
0dbfs = 1
; filter envelope
kCfOct looptseg kBtFreq,0,0,kCfBase+kCfEnv+kOct,kDecay,1,kCfBase+kOct
; if hold is off, use filter envelope, otherwise use steady state value:
kCfOct = (kHold=0?kCfOct:kCfBase+kOct)
kCfOct limit kCfOct, 4, 14 ; limit the cutoff frequency (oct format)
aSig vco2 0.4, kCps, i(kWaveform)*2, 0.5 ; VCO-style oscillator
aFilt lpf18 aSig, cpsoct(kCfOct), kRes, (kDist^2)*10 ; filter audio
aSig balance aFilt,aSig ; balance levels
kOn port kOn, 0.006 ; smooth on/off switching
; audio sent to output, apply amp. envelope,
; volume control and note On/Off status
aAmpEnv interp kAmpEnv*kOn*kVol
aOut = aSig * aAmpEnv
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 3600 ; instr 1 plays for 1 hour
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Hopefully this final example has provided some idea as to the extent of the parameters that can be
controlled using envelopes, and also an allusion to their importance in the generation of musical
gesture.
05 B. PANNING AND SPATIALIZATION
This is shown in a very simple example. First we hear a percussive sound from both speakers; we
will not recognize any pattern. Then we hear one beat from the left speaker followed by three beats
from the right speaker. We will recognize this as a 3/4 beat, with the first beat on the left speaker.
Finally we hear a random sequence of left and right channels. We will hear this as something like a
dialogue between two players.
EXAMPLE 05B01_routing.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
seed 1
instr Equal
kTrig metro 80/60
schedkwhen kTrig, 0, 0, "Perc", 0, 1, .4, 1
schedkwhen kTrig, 0, 0, "Perc", 0, 1, .4, 2
endin
instr Beat
kRoutArr[] fillarray 1, 2, 2
kIndex init 0
if metro:k(80/60) == 1 then
event "i", "Perc", 0, 1, .6, kRoutArr[kIndex]
kIndex = (kIndex+1) % 3
endif
endin
instr Dialog
if metro:k(80/60) == 1 then
event "i", "Perc", 0, 1, .6, int(random:k(1,2.999))
endif
endin
instr Perc
iAmp = p4
iChannel = p5
aBeat pluck iAmp, 100, 100, 0, 3, .5
aOut mode aBeat, 300, 5 ;resonate at 300 Hz with Q = 5 (mode filter)
outch iChannel, aOut
endin
</CsInstruments>
<CsScore>
i "Equal" 0 9.5
i "Beat" 11 9.5
i "Dialog" 22 9.5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz and philipp henkel
The spatialization technique used in this example is called routing. In routing we connect an audio
signal directly to one speaker. This is a somewhat brutal method which knows only black or white,
only right or left. Usually we want a more refined way to locate sounds, with different
positions between pure left and pure right. This is often compared to a panorama - a sound horizon
on which certain sounds have a location between left and right. So we look first into this panning
for a stereo setup. Then we will discuss the extension of panning to a multi-channel setup. The
last part of this chapter is dedicated to the Ambisonics technique which offers a different way to
locate sound sources.
Simple Stereo Panning
The simplest method that is typically encountered is to multiply one channel of audio (aSig) by a
panning variable (kPan) and to multiply the other side by 1 minus the same variable, like this:
aSigL = aSig * (1 - kPan)
aSigR = aSig * kPan
outs aSigL, aSigR
kPan should be a value within the range zero and one. If kPan is 0 all of the signal will be in the
left channel, if it is 1, all of the signal will be in the right channel and if it is 0.5 there will be signal
of equal amplitude in both the left and the right channels. This way the signal can be continuously
panned between the left and right channels.
The problem with this method is that the overall power drops as the sound is panned to the middle.1
One possible solution to this problem is to take the square root of the panning variable for each
channel before multiplying it to the audio signal like this:
aSigL = aSig * sqrt(1 - kPan)
aSigR = aSig * sqrt(kPan)
outs aSigL, aSigR
By doing this, the straight line function of the input panning variable becomes a convex curve, so
that less power is lost as the sound is panned centrally.
Using 90º sections of a sine wave for the mapping produces a more convex curve and a less imme-
diate drop in power as the sound is panned away from the extremities. This can be implemented
using the code shown below.
aSigL = aSig * cos(kPan*$M_PI_2)
aSigR = aSig * sin(kPan*$M_PI_2)
outs aSigL, aSigR
(Note that $M_PI_2 is one of Csound's built-in macros and is equivalent to 𝜋/2.)
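At the centre position (kPan = 0.5) both gains become cos(𝜋/4) ≈ 0.707, so the summed power 0.707² + 0.707² remains 1. This can be verified with a tiny test instrument (a sketch, printing the two gains once at initialization):

```csound
instr PrintGains
  iPan = 0.5
  prints "L: %f, R: %f\n", cos(iPan*$M_PI_2), sin(iPan*$M_PI_2)
endin
```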
A fourth method, devised by Michael Gogins, places the point of maximum power for each channel
slightly before the panning variable reaches its extremity. The result of this is that when the sound
is panned dynamically it appears to move beyond the point of the speaker it is addressing. This
method is an elaboration of the previous one and makes use of a different 90 degree section of a
sine wave. It is implemented using the following code:
aSigL = aSig * cos((kPan + 0.5) * $M_PI_2)
aSigR = aSig * sin((kPan + 0.5) * $M_PI_2)
outs aSigL, aSigR
The following example demonstrates all these methods one after the other for comparison. Pan-
ning movement is controlled by a slow moving LFO. The input sound is filtered pink noise.
EXAMPLE 05B02_Pan_stereo.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
imethod = p4 ; read panning method variable from score (p4)
1 The reason has been touched upon in chapter 01C: the sound intensity is not proportional to the amplitude but to the squared amplitude.
;---------------- generate a source sound and a panning function ----------------
aSig pinkish 0.3 ; pink noise
aSig reson aSig, 500, 30, 1 ; bandpass filtered
aPan lfo 0.5, 0.25, 1 ; slow triangle LFO for the panning
aPan = aPan + 0.5 ; offset shifted to the range 0 - 1
;---------------------------------------------------------------------------------
if imethod=1 then
;------------------------ method 1 --------------------------
aPanL = 1 - aPan
aPanR = aPan
;------------------------------------------------------------
endif
if imethod=2 then
;------------------------ method 2 --------------------------
aPanL = sqrt(1 - aPan)
aPanR = sqrt(aPan)
;------------------------------------------------------------
endif
if imethod=3 then
;------------------------ method 3 --------------------------
aPanL = cos(aPan*$M_PI_2)
aPanR = sin(aPan*$M_PI_2)
;------------------------------------------------------------
endif
if imethod=4 then
;------------------------ method 4 --------------------------
aPanL = cos((aPan + 0.5) * $M_PI_2)
aPanR = sin((aPan + 0.5) * $M_PI_2)
;------------------------------------------------------------
endif
;---------------- apply panning and send audio to the output ----------------
aSigL = aSig * aPanL
aSigR = aSig * aPanR
outs aSigL, aSigR
endin
</CsInstruments>
<CsScore>
; 4 notes one after the other to demonstrate 4 different methods of panning
; p1 p2 p3 p4(method)
i 1 0 4.5 1
i 1 5 4.5 2
i 1 10 4.5 3
i 1 15 4.5 4
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The opcode pan2 makes it easier for us to implement various methods of panning. The following
example demonstrates the three methods that this opcode offers one after the other. The first is
the equal power method, the second square root and the third is simple linear.
EXAMPLE 05B03_pan2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
imethod = p4 ; read panning method variable from score (p4)
;----------------------- generate a source sound ------------------------
aSig pinkish 0.1 ; pink noise
aSig reson aSig, 500, 30, 2 ; bandpass filtered
;------------------------------------------------------------------------
aPan lfo 0.5, 1, 1 ; panning controlled by a triangle lfo
aPan = aPan + 0.5 ; offset shifted to the range 0 - 1
aSigL, aSigR pan2 aSig, aPan, imethod ; pan2 creates the stereo panned output
outs aSigL, aSigR ; audio sent to output
endin
</CsInstruments>
<CsScore>
; 3 notes one after the other to demonstrate 3 methods used by pan2
;p1 p2 p3 p4
i 1 0 4.5 0 ; equal power (harmonic)
i 1 5 4.5 1 ; square root method
i 1 10 4.5 2 ; linear
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
3D Binaural Encoding
3D binaural encoding is available through a number of opcodes that make use of spectral data
files that provide information about the filtering and inter-aural delay effects of the human head.
The oldest one of these is hrtfer. Newer ones are hrtfmove, hrtfmove2 and hrtfstat. The main pa-
rameters for control of the opcodes are azimuth (the horizontal direction of the source expressed
as an angle formed from the direction in which we are facing) and elevation (the angle by which
the sound deviates from this horizontal plane, either above or below). Both these parameters are
defined in degrees. Binaural implies that the stereo output of this opcode should be listened to using
headphones so that no mixing in the air of the two channels occurs before they reach our ears
(although a degree of the effect is still audible through speakers).
The following example takes a monophonic source sound of noise impulses and processes it using
the hrtfmove2 opcode. First of all the sound is rotated around us in the horizontal plane, then it is
raised above our head, then dropped below us, and finally returned to be level and directly in front
of us. This example uses the files hrtf-44100-left.dat and hrtf-44100-right.dat. In case they are not
found, they can be downloaded from the Csound sources.
EXAMPLE 05B04_hrtfmove.csd
<CsoundSynthesizer>
<CsOptions>
--env:SADIR+=../SourceMaterials
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gS_HRTF_left = "hrtf-44100-left.dat"
gS_HRTF_right = "hrtf-44100-right.dat"
giLFOShape ftgen 0, 0, 131072, 19, 0.5, 1, 180, 1 ; shape for the impulse-rate LFO (U-shaped parabola)
instr 1
; create an audio signal (noise impulses)
krate oscil 30,0.2,giLFOShape ; rate of impulses
; amplitude envelope: a repeating pulse
kEnv loopseg krate+3,0, 0,1, 0.05,0, 0.95,0,0
aSig pinkish kEnv ; noise pulses
; 3D binaural processing: one circle, then up, down, and back to the front
kAz linseg 0, 8, 360 ; azimuth: one complete circle
kElev linseg 0, 8, 0, 4, 90, 8, -40, 4, 0 ; elevation envelope
aLeft, aRight hrtfmove2 aSig, kAz, kElev, gS_HRTF_left, gS_HRTF_right
outs aLeft, aRight ; audio sent to output (listen with headphones)
endin
</CsInstruments>
<CsScore>
i 1 0 24 ; instr 1 plays a note for 24 seconds
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Going Multichannel
So far we have only considered working in 2 channels (stereo), but Csound is extremely flexible
at working in more than 2 channels. By changing nchnls in the orchestra header we can specify
any number of channels, but we also need to ensure that we choose an audio hardware device,
using the -odac option, that can handle multichannel audio. Audio channels sent from Csound
that do not address hardware channels will simply not be reproduced. There may be some need
to make adjustments to the software settings of your soundcard using its own software or the
operating system's software, but due to the variety of sound hardware options available, it would
be impossible to offer further specific advice here.
If you do not use the real-time option -o dac but render to a file with -o myfilename.wav, there are no
restrictions though. Csound will render any multi-channel file independently of your sound card.
So out can replace the opcodes outs, outq, outh and outo which were designed for exactly 2, 4,
6 and 8 output channels. out can also be used to work with odd channel numbers like 3, 5 or 7,
although many soundcards work much better when a channel count of 2, 4 or 8 is used.
The only limitation of out is that it always counts from channel number 1. Imagine you have a
soundcard with 8 analog outputs (counting 1-8) and 8 digital outputs (counting 9-16), and you
want to use only the digital outputs. Here and in similar situations outch is the opcode of choice.
It allows us to direct audio to a specific channel or list of channels and takes the form:
outch kchan1, asig1 [, kchan2] [, asig2] [...]
So we would write here nchnls=16 to open the channels on the sound card, and then
outch 9,a1, 10,a2, 11,a3, 12,a4, 13,a5, 14,a6, 15,a7, 16,a8
Note that for outch channel numbers can be changed at k-rate thereby opening the possibility of
changing the speaker configuration dynamically during performance. Channel numbers do not
need to be sequential and unrequired channels can be left out completely. This can make life
much easier when working with complex systems employing many channels.
If we work with a 4-channel setup, we will write this IO Setup at the top of our program:
nchnls = 4
giOutChn_1 = 1
giOutChn_2 = 2
giOutChn_3 = 3
giOutChn_4 = 4
And in the output section of our program we will use the variable names instead of the numbers,
for instance:
outch giOutChn_1,a1, giOutChn_2,a2, giOutChn_3,a3, giOutChn_4,a4
If at some point we can only work with a stereo soundcard, all we have to do is change the
IO Setup like this:
nchnls = 2
giOutChn_1 = 1
giOutChn_2 = 2
giOutChn_3 = 2
giOutChn_4 = 1
The output section will work as before, so it is a matter of seconds to adapt to another hardware setup.
VBAP
Vector Base Amplitude Panning2 can be described as a method which extends stereo panning to
more than two speakers. The number of speakers is, in general, arbitrary. Standard layouts such
as quadraphonic, octophonic or 5.1 configurations can be used, but in fact any number of speakers
can be positioned, even at irregular distances from each other. Speakers arranged at different
heights can just as well be part of a VBAP loudspeaker array.
VBAP is robust and simple, and has proven its flexibility and reliability. Csound offers different opcodes which have evolved from the original implementation to flexible setups using audio arrays.
The introduction here will explain the usage from the first steps on.
Basic Steps
At first the VBAP system needs to know where the loudspeakers are positioned. This job is done
with the opcode vbaplsinit. Let us assume we have seven speakers at the positions and with the
numberings outlined below (M = middle/centre):
2 First described by Ville Pulkki in 1997: Ville Pulkki, Virtual Source Positioning Using Vector Base Amplitude Panning, in: Journal of the Audio Engineering Society, 45(6), 456-466.
The vbaplsinit opcode, which is usually placed in the header of a Csound orchestra, defines these
positions as follows:
vbaplsinit 2, 7, -40, 40, 70, 140, 180, -110, -70
The first number determines the number of dimensions (here 2). The second number states the
overall number of speakers, followed by their positions in degrees (clockwise).
All that is required now is to provide vbap with a monophonic sound source to be distributed
amongst the speakers according to information given about the position. Horizontal position (az-
imuth) is expressed in degrees clockwise just as the initial locations of the speakers were. The
following would be the Csound code to play the sound file ClassGuit.wav once while moving it
counterclockwise:
EXAMPLE 05B05_VBAP_circle.csd
<CsoundSynthesizer>
<CsOptions>
-odac
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 8 ;only channels 1-7 used
vbaplsinit 2, 7, -40, 40, 70, 140, 180, -110, -70 ;speaker positions as described above
instr 1
Sfile = "ClassGuit.wav"
p3 filelen Sfile
aSnd[] diskin Sfile
kAzim line 0, p3, -360 ;counterclockwise
aVbap[] vbap aSnd[0], kAzim
out aVbap ;7 channel output via array
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
- -odac enables real-time output. Choose -o 05B04_out.wav if you don't have a multichannel audio card, and Csound will render the output to the file 05B04_out.wav.
- --env:SSDIR+=../SourceMaterials adds the folder SourceMaterials, which is placed in the top directory, to Csound's search path for this file.
- nchnls = 8 sets the number of channels to 8. nchnls = 7 would be more consistent, as we only use 7 channels; I chose 8 channels because some sound cards have problems opening 7 channels.
- p3 filelen Sfile sets the duration of the instrument (p3) to the length of the sound file Sfile, which in turn has been set to the "ClassGuit.wav" sample (you can use any other file here).
- aSnd[] diskin Sfile: the opcode diskin reads the sound file Sfile and creates an audio array. The first channel of the file will be found in aSnd[0], the second (if any) in aSnd[1], and so on.
- kAzim line 0, p3, -360 creates an azimuth signal which starts at the centre (0°) and moves counterclockwise during the whole duration of the instrument call (p3) back to the centre (-360° is also in front).
- aVbap[] vbap aSnd[0], kAzim: the opcode vbap creates an audio array which contains as many audio signals as were set with the vbaplsinit statement, in this case seven. These seven signals represent the seven loudspeakers. On the right-hand side, vbap receives two inputs: the first channel of the aSnd array, and the kAzim signal which contains the location of the sound.
- out aVbap: note that aVbap is an audio array which contains seven audio signals. The whole array is written to channels 1-7 of the output, either in real time or to an audio file.
In two dimensions VBAP involves at most two adjacent speakers at any time, so a sound positioned exactly at a speaker location comes from this one speaker alone, giving it a harder, more direct quality than positions between speakers. To alleviate this tendency, Ville Pulkki introduced an additional parameter, called spread, which
has a range of zero to hundred percent.3 The “ascetic” form of VBAP we have seen in the previous
example means: no spread (0%). A spread of 100% means that all speakers are active, and the
information about where the sound comes from is nearly lost.
The kspread input parameter is the second of three optional parameters of the vbap opcode:
aOutArr[] vbap asig, kazim [, kelev] [, kspread] [, ilayout]
So to set kspread, we first have to provide kelev, the first optional parameter. It defines the elevation of the sound and is always zero for two dimensions, as in the speaker configuration of our example. The next
example adds a spread movement to the previous one. The spread starts at zero percent, then
increases to hundred percent, and then decreases back down to zero.
3 Ville Pulkki, Uniform Spreading of Amplitude Panned Virtual Sources, in: Proceedings of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Mohonk Mountain House, New Paltz.
EXAMPLE 05B06_VBAP_spread.csd
<CsoundSynthesizer>
<CsOptions>
-odac
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 8 ;only channels 1-7 used
vbaplsinit 2, 7, -40, 40, 70, 140, 180, -110, -70 ;speaker positions as before
instr 1
Sfile = "ClassGuit.wav"
p3 filelen Sfile
aSnd[] diskin Sfile
kAzim line 0, p3, -360 ;counterclockwise
kSpread linseg 0, p3/2, 100, p3/2, 0
aVbap[] vbap aSnd[0], kAzim, 0, kSpread
out aVbap
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The ilayout argument of vbap selects one of several speaker layouts which can be defined with
multiple vbaplsinit statements; the layout number is given as the fractional part of the first
(dimension) argument, e.g. 2.03 for a two-dimensional layout number 3. By this it is possible to
switch between different layouts during performance and to provide more flexibility in the number
of output channels used. Here is an example for three different layouts which are called in three
different instruments:
EXAMPLE 05B07_VBAP_layouts.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 8
;three speaker layouts: dimension 2, layout number as fractional part
;(the speaker positions for layouts 2 and 3 are exemplary)
vbaplsinit 2.01, 7, -40, 40, 70, 140, 180, -110, -70
vbaplsinit 2.02, 4, -40, 40, 120, -120
vbaplsinit 2.03, 3, -70, 0, 70
instr 1
aNoise pinkish 0.5
aVbap[] vbap aNoise, line:k(0,p3,-360), 0, 0, 1
out aVbap ;layout 1: 7 channel output
endin
instr 2
aNoise pinkish 0.5
aVbap[] vbap aNoise, line:k(0,p3,-360), 0, 0, 2
out aVbap ;layout 2: 4 channel output
endin
instr 3
aNoise pinkish 0.5
aVbap[] vbap aNoise, line:k(0,p3,-360), 0, 0, 3
out aVbap ;layout 3: 3 channel output
endin
</CsInstruments>
<CsScore>
i 1 0 6
i 2 6 6
i 3 12 6
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
In addition to the vbap opcode, vbapg has been written. The idea is to have an opcode which
returns the gains (amplitudes) of the speakers instead of the audio signal:
k1[, k2...] vbapg kazim [,kelev] [, kspread] [, ilayout]
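With vbapg the routing is left to the user. A sketch for the seven-speaker setup defined earlier, multiplying the gains with a sound and sending the results to channels 1-7 (aSnd and kAzim are assumed to exist as in the previous examples):

```csound
k1,k2,k3,k4,k5,k6,k7 vbapg kAzim
outch 1,aSnd*k1, 2,aSnd*k2, 3,aSnd*k3, 4,aSnd*k4, \
      5,aSnd*k5, 6,aSnd*k6, 7,aSnd*k7
```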
Ambisonics I: bformenc1 and bformdec1
There are excellent sources for the discussion of Ambisonics online which explain its background
and parameters.4 These topics are also covered later in this chapter when the Ambisonics UDOs are
introduced. We will focus here first on the basic practicalities of using the Ambisonics opcodes
bformenc1 and bformdec1 in Csound.
Two steps are required for distributing a sound via Ambisonics. At first the sound source and its
localisation are encoded. The result of this step is a so-called B-format. In the second step this
B-format is decoded to match a certain loudspeaker setup.
It is possible to save the B-format as its own audio file to preserve the spatial information, or
you can do the decoding immediately after the encoding, thereby dealing directly only with audio
signals instead of Ambisonic files. The next example takes the latter approach.
EXAMPLE 05B08_Ambi_circle.csd
<CsoundSynthesizer>
<CsOptions>
-odac
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 8
instr 1
Sfile = "ClassGuit.wav"
p3 filelen Sfile
aSnd[] diskin Sfile
kAzim line 0, p3, 360 ;counterclockwise (!)
iSetup = 4 ;octagon
aw, ax, ay, az bformenc1 aSnd[0], kAzim, 0
a1, a2, a3, a4, a5, a6, a7, a8 bformdec1 iSetup, aw, ax, ay, az
out a1, a2, a3, a4, a5, a6, a7, a8
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The first thing to note is that for a counterclockwise circle, the azimuth now follows the line 0 -> 360,
instead of 0 -> -360 as was used in the VBAP example. This is because Ambisonics usually reads
the angle in the mathematical way: a positive angle is counterclockwise. Next, the encoding process
is carried out in the line:
aw, ax, ay, az bformenc1 aSnd[0], kAzim, 0
Input arguments are the monophonic sound source aSnd[0], the xy-angle kAzim, and the elevation
angle which is set to zero. Output signals are the spatial information in x-, y- and z-direction (ax,
ay, az), and also an omnidirectional signal called aw.
The inputs for the decoder are the same aw, ax, ay, az which resulted from the encoding
process, and an additional iSetup parameter. Currently the Csound decoder only works with some
standard speaker setups: iSetup = 4 refers to an octagon.5 So the final eight audio signals
a1, ..., a8 are produced by this decoder, and are then sent to the speakers.
Different Orders
What we have seen in this example is called first order Ambisonics. This means that the encoding
process leads to the four basic channels w, x, y, z as described above. In second order Ambisonics, there are five additional channels called r, s, t, u, v. And in third order Ambisonics the
additional channels k, l, m, n, o, p, q are applied. The final example in this section shows the three
orders, each of them in one instrument. If you have eight speakers in an octophonic setup, you can
compare the results.
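Ahead of the full example, the second order version of the core encoding and decoding lines can be sketched like this (the five additional channels r, s, t, u, v appear both as encoder outputs and as decoder inputs; aSnd and kAzim as before):

```csound
aw, ax, ay, az, ar, as, at, au, av bformenc1 aSnd[0], kAzim, 0
a1, a2, a3, a4, a5, a6, a7, a8 bformdec1 4, aw, ax, ay, az, ar, as, at, au, av
out a1, a2, a3, a4, a5, a6, a7, a8
```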
EXAMPLE 05B09_Ambi_orders.csd
<CsoundSynthesizer>
<CsOptions>
-odac
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 8
Ambisonics II: UDOs
Introduction
We will explain here the principles of Ambisonics step by step and write a UDO for every step.
Since the two-dimensional analogy to Ambisonics is easier to understand and to implement with
simple equipment, we shall explain it fully first.
The loudspeaker feeds are obtained by decoding the B-format signal. The resulting panning is
amplitude panning, and only the direction to the sound source is taken into account.
The illustration below shows the principle of Ambisonics. First a sound is generated and its posi-
tion determined. The amplitude and spectrum are adjusted to simulate distance, the latter using a
low-pass filter. Then the Ambisonic encoding is computed using the sound's coordinates. Encoding 𝑚th order B-format requires 𝑛 = (𝑚 + 1)² channels (𝑛 = 2𝑚 + 1 channels in Ambisonics2D). For example, third order requires (3 + 1)² = 16 channels in 3D, but only 2·3 + 1 = 7 in Ambisonics2D.
6 These files can be downloaded together with the entire examples (some of them for CsoundQt) from https://www.zhdk.ch/5382
By decoding the B-format, one can obtain the signals for any number (>= 𝑛) of loudspeakers in any
arrangement. Best results are achieved with symmetrical speaker arrangements.
If the B-format does not need to be recorded, the speaker signals can be calculated at low cost and
at arbitrary order using so-called Ambisonics Equivalent Panning (AEP).
Ambisonics2D
We will first explain the encoding process in Ambisonics2D. The position of a sound source in the
horizontal plane is given by two coordinates. In Cartesian coordinates (x, y) the listener is at the
origin of the coordinate system (0, 0), and the x-coordinate points to the front, the y-coordinate to
the left. The position of the sound source can also be given in polar coordinates by the angle ψ
between the line of vision of the listener (front) and the direction to the sound source, and by their
distance r.
Cartesian coordinates can be converted to polar coordinates by the formulae r = √(x² + y²) and
ψ = arctan(y/x), taking the quadrant of (x, y) into account.
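As a quick numerical check of this conversion, here is a Python sketch (illustrative only, not part of the Csound examples; the function name mirrors the xy_to_ad UDO used later in this section):

```python
import math

def xy_to_ad(x, y):
    """Convert Cartesian (x, y) to (azimuth in degrees, distance).

    x points to the front of the listener and y to the left,
    following the Ambisonics2D convention described above.
    """
    distance = math.sqrt(x * x + y * y)
    azimuth = math.degrees(math.atan2(y, x))  # quadrant-aware arctangent
    return azimuth, distance

print(xy_to_ad(1, 0))  # source straight ahead: azimuth 0, distance 1
print(xy_to_ad(0, 2))  # source to the left: azimuth 90, distance 2
```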
The 0th order B-format of a signal S of a sound source on the unit circle is just the mono signal:
W₀ = W = S. The first-order B-format contains two additional channels: W₁,₁ = X = S·cos(ψ) = S·x
and W₁,₂ = Y = S·sin(ψ) = S·y, i.e. the products of the signal S with the cosine and the sine of the
direction ψ of the sound source. Each higher order of the B-format adds two further channels per
order m: Wₘ,₁ = S·cos(mψ) and Wₘ,₂ = S·sin(mψ).
𝑊0 = 𝑆
𝑊1,1 = 𝑋 = 𝑆 · 𝑐𝑜𝑠(𝜓) = 𝑆 · 𝑥 and
𝑊1,2 = 𝑌 = 𝑆 · 𝑠𝑖𝑛(𝜓) = 𝑆 · 𝑦
𝑊2,1 = 𝑆 · 𝑐𝑜𝑠(2𝜓) and
𝑊2,2 = 𝑆 · 𝑠𝑖𝑛(2𝜓)
...
𝑊𝑚,1 = 𝑆 · 𝑐𝑜𝑠(𝑚𝜓) and 𝑊𝑚,2 = 𝑆 · 𝑠𝑖𝑛(𝑚𝜓)
From the n = 2m + 1 B-format channels, the loudspeaker signals pᵢ of n loudspeakers set up
symmetrically on a circle (at angles 𝜙ᵢ) are:

𝑝ᵢ = (1/𝑛) · (𝑊₀ + 2𝑊₁,₁𝑐𝑜𝑠(𝜙ᵢ) + 2𝑊₁,₂𝑠𝑖𝑛(𝜙ᵢ) + 2𝑊₂,₁𝑐𝑜𝑠(2𝜙ᵢ) + 2𝑊₂,₂𝑠𝑖𝑛(2𝜙ᵢ) + ...)
   = (2/𝑛) · (½𝑊₀ + 𝑊₁,₁𝑐𝑜𝑠(𝜙ᵢ) + 𝑊₁,₂𝑠𝑖𝑛(𝜙ᵢ) + 𝑊₂,₁𝑐𝑜𝑠(2𝜙ᵢ) + 𝑊₂,₂𝑠𝑖𝑛(2𝜙ᵢ) + ...)
(If more than n speakers are used, we can use the same formula.)
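These encoding and decoding formulae can be verified numerically. The following Python sketch (an illustration under the conventions above, not Csound code) encodes a unit signal and decodes it for a symmetrical circle of n = 2m + 1 speakers; a source lying exactly in a speaker's direction yields gain 1 for that speaker, and the feeds always sum to the original signal:

```python
import math

def encode2D(signal, psi, order):
    """2D B-format channels [W0, W11, W12, ..., Wm1, Wm2] for a source at angle psi."""
    channels = [signal]
    for m in range(1, order + 1):
        channels += [signal * math.cos(m * psi), signal * math.sin(m * psi)]
    return channels

def decode2D(b, speaker_angles):
    """Basic decoding: speaker feeds p_i from the B-format channel list b."""
    n = len(speaker_angles)
    order = (len(b) - 1) // 2
    feeds = []
    for phi in speaker_angles:
        p = 0.5 * b[0]
        for m in range(1, order + 1):
            p += b[2 * m - 1] * math.cos(m * phi) + b[2 * m] * math.sin(m * phi)
        feeds.append(2.0 / n * p)
    return feeds

# 7 speakers spaced evenly on a circle, third order (n = 2m + 1)
angles = [2 * math.pi * i / 7 for i in range(7)]
feeds = decode2D(encode2D(1.0, angles[0], 3), angles)
print(round(feeds[0], 6), round(sum(feeds), 6))
```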
EXAMPLE 05B10_udo_ambisonics2D_1.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 8
0dbfs = 1
instr 1
asnd rand .05
kaz line 0,p3,3*360 ;turns around 3 times in p3 seconds
a0,a11,a12 ambi2D_encode_1a asnd,kaz
a1,a2,a3,a4,a5,a6,a7,a8 \
ambi2D_decode_1_8 a0,a11,a12,
0,45,90,135,180,225,270,315
outc a1,a2,a3,a4,a5,a6,a7,a8
endin
</CsInstruments>
<CsScore>
i1 0 40
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
The B-format for all signals in all instruments can be summed before decoding. Thus in the next
example we create a zak space with 21 channels (zakinit 21, 1) for the 2D B-format up to 10th
order, in which the encoded signals are accumulated. The UDO ambi2D_encode_3 shows how to
produce the 7 B-format channels a0, a11, a12, …, a32 for third order. The opcode ambi2D_encode_n
produces the 2n+1 channels a0, a11, a12, …, an1, an2 for any order n (and needs zakinit 2n+1, 1).
The UDO ambi2D_decode_basic is an overloaded function, i.e. it decodes to n speaker signals
depending on the number of in- and outputs given (in this example only for 1 or 2 speakers). Any
number of instruments can be played arbitrarily often. Instrument 10 decodes for the first 4
speakers of an 18-speaker setup.
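The channel counts used by the zakinit statements follow directly from the order: 2n+1 channels in Ambisonics2D and (n+1)² in full 3D Ambisonics. A small Python check (illustrative only):

```python
def channels_2d(order):
    """Ambisonics2D: one W channel plus two channels per order."""
    return 2 * order + 1

def channels_3d(order):
    """Full 3D Ambisonics channel count."""
    return (order + 1) ** 2

print(channels_2d(10))  # 21 zak channels for 10th order, as in the example's zakinit
print(channels_3d(8))   # 81 channels for an 8th-order 3D B-format
```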
EXAMPLE 05B11_udo_ambisonics2D_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 4
0dbfs = 1
opcode ambi2D_encode_3, 0, ak
asnd,kaz xin
kaz = $M_PI*kaz/180
zawm asnd,0
zawm cos(kaz)*asnd,1 ;a11
zawm sin(kaz)*asnd,2 ;a12
zawm cos(2*kaz)*asnd,3 ;a21
zawm sin(2*kaz)*asnd,4 ;a22
zawm cos(3*kaz)*asnd,5 ;a31
zawm sin(3*kaz)*asnd,6 ;a32
endop
if kk > 0 goto c1
zawm asnd,0
endop
xout igain*a1
endop
kk = iorder
a2 = .5*zar(0)
c2:
a2 += cos(kk*iaz2)*zar(2*kk-1)
a2 += sin(kk*iaz2)*zar(2*kk)
kk = kk-1
if kk > 0 goto c2
xout igain*a1,igain*a2
endop
instr 1
asnd rand p4
ares reson asnd,p5,p6,1
kaz line 0,p3,p7*360 ;turns around p7 times in p3 seconds
ambi2D_encode_n asnd,10,kaz
endin
instr 2
asnd oscil p4,p5,1
kaz line 0,p3,p7*360 ;turns around p7 times in p3 seconds
ambi2D_encode_n asnd,10,kaz
endin
</CsInstruments>
<CsScore>
f1 0 32768 10 1
; amp cf bw turns
i1 0 3 .7 1500 12 1
i1 2 18 .1 2234 34 -8
; amp fr 0 turns
i2 0 3 .1 440 0 2
i10 0 3
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
In-phase Decoding
The left figure below shows a symmetrical arrangement of 7 loudspeakers. If the virtual sound
source is precisely in the direction of a loudspeaker, only this loudspeaker gets a signal (center
figure). If the virtual sound source is between two loudspeakers, these loudspeakers receive the
strongest signals; all other loudspeakers have weaker signals, some with negative amplitude, that
is, reversed phase (right figure).
To avoid having loudspeaker sounds that are far away from the virtual sound source and to ensure
that negative amplitudes (inverted phase) do not arise, the B-format channels can be weighted
before being decoded. The weighting factors depend on the highest order used (M) and the order
of the particular channel being decoded (m).
𝑔ₘ = (𝑀!)² / ((𝑀+𝑚)! · (𝑀−𝑚)!)

𝑔ₙₒᵣₘ(𝑀) = 2·(2𝑀)! / (4^𝑀 · (𝑀!)²)
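The weight tables used in the example below can be reproduced from these formulae; here is a Python sketch (illustrative, not part of the csd):

```python
from math import factorial

def weight_2d(M, m):
    """In-phase weight g_m for channel order m at highest order M."""
    return factorial(M) ** 2 / (factorial(M + m) * factorial(M - m))

def norm_2d(M):
    """Normalisation factor g_norm(M)."""
    return 2 * factorial(2 * M) / (4 ** M * factorial(M) ** 2)

# third order: matches the row 0.75, 0.3, 0.05 of iWeight2D below
print([round(weight_2d(3, m), 6) for m in (1, 2, 3)])
# matches the start of iNorm2D: 1, 0.75, 0.625, ...
print([round(norm_2d(M), 6) for M in (1, 2, 3)])
```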
The illustration below shows a third-order B-format signal decoded to 13 loudspeakers first uncor-
rected (so-called basic decoding, left), then corrected by weighting (so-called in-phase decoding,
right).
The following example shows in-phase decoding. The weights and norms up to 12th order are
saved in the arrays iWeight2D[][] and iNorm2D[] respectively. Instrument 11 decodes third order for
4 speakers in a square.
EXAMPLE 05B12_udo_ambisonics2D_3.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 4
0dbfs = 1
if kk > 0 goto c1
zawm asnd,0
endop
;in-phase-decoding
opcode ambi2D_dec_inph, a, ii
; weights and norms up to 12th order
iNorm2D[] array 1,0.75,0.625,0.546875,0.492188,0.451172,0.418945,
0.392761,0.370941,0.352394,0.336376,0.322360
iWeight2D[][] init 12,12
iWeight2D array 0.5,0,0,0,0,0,0,0,0,0,0,0,
0.666667,0.166667,0,0,0,0,0,0,0,0,0,0,
0.75,0.3,0.05,0,0,0,0,0,0,0,0,0,
0.8,0.4,0.114286,0.0142857,0,0,0,0,0,0,0,0,
0.833333,0.47619,0.178571,0.0396825,0.00396825,0,0,0,0,0,0,0,
0.857143,0.535714,0.238095,0.0714286,0.012987,0.00108225,0,0,0,0,0,0,
0.875,0.583333,0.291667,0.1060601,0.0265152,0.00407925,0.000291375,
0,0,0,0,0, 0.888889,0.622222,0.339394,0.141414,0.043512,
0.009324,0.0012432, 0.0000777,0,0,0,0,
0.9,0.654545,0.381818,0.176224,0.0629371,0.0167832,0.00314685,
0.000370218,0.0000205677,0,0,0,
0.909091,0.681818,0.41958,0.20979,0.0839161,0.0262238,0.0061703,
0.00102838,0.000108251,0.00000541254,0,0,
0.916667,0.705128,0.453297,0.241758,0.105769,0.0373303,0.0103695,
0.00218306,0.000327459,0.0000311866,0.00000141757,0,
0.923077,0.725275,0.483516,0.271978,0.12799,0.0497738,0.015718,
0.00392951,0.000748478,0.000102065,0.00000887523,0.000000369801
iorder,iaz1 xin
iaz1 = $M_PI*iaz1/180
kk = iorder
a1 = .5*zar(0)
c1:
a1 += cos(kk*iaz1)*iWeight2D[iorder-1][kk-1]*zar(2*kk-1)
a1 += sin(kk*iaz1)*iWeight2D[iorder-1][kk-1]*zar(2*kk)
kk = kk-1
if kk > 0 goto c1
xout iNorm2D[iorder-1]*a1
endop
zakinit 7, 1
instr 1
asnd rand p4
ares reson asnd,p5,p6,1
kaz line 0,p3,p7*360 ;turns around p7 times in p3 seconds
ambi2D_encode_n asnd,3,kaz
endin
instr 11
a1 ambi2D_dec_inph 3,0
a2 ambi2D_dec_inph 3,90
a3 ambi2D_dec_inph 3,180
a4 ambi2D_dec_inph 3,270
outc a1,a2,a3,a4
zacl 0,6 ; clear the za variables
endin
</CsInstruments>
<CsScore>
; amp cf bw turns
i1 0 3 .1 1500 12 1
i11 0 3
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
Distance
In order to simulate distances and movements of sound sources, the signals have to be treated
before being encoded. The main perceptual cues for the distance of a sound source are the
reduction of amplitude, the filtering due to the absorption of the air, and the relation between
direct and indirect sound. We will implement the first two of these cues. The amplitude arriving at a listener
is inversely proportional to the distance of the sound source. If the distance is larger than the
unit circle (not necessarily the radius of the speaker setup, which does not need to be known
when encoding sounds) we can simply divide the sound by the distance. With this calculation
inside the unit circle the amplitude is amplified and becomes infinite when the distance becomes
zero. Another problem arises when a virtual sound source passes the origin. The amplitude of
the speaker signal in the direction of the movement suddenly becomes maximal and the signal of
the opposite speaker suddenly becomes zero. A simple solution for these problems is to limit the
gain of the channel W inside the unit circle to 1 (f1 in the figure below) and to fade out all other
channels (f2). When all channels except W are faded out, the information about the direction of
the sound source is lost; all speaker signals are then identical, and their sum reaches its maximum
when the distance is 0.
Now, we are looking for gain functions that are smoother at d = 1. The functions should be dif-
ferentiable and the slope of f1 at distance d = 0 should be 0. For distances greater than 1 the
functions should be approximately 1/d. In addition the function f1 should continuously grow with
decreasing distance and reach its maximum at d = 0. The maximal gain must be 1. The function
atan(d·π/2)/(d·π/2) fulfills these constraints. We create a function f2 for fading out the other
channels by multiplying f1 by the factor (1 − e⁻ᵈ).
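The two gain functions can be written down directly; this Python sketch illustrates their behaviour (it is not part of the UDO files):

```python
import math

def f1(d):
    """Gain of the W channel: 1 at d = 0, slope 0 there, ~1/d for large d."""
    if d == 0:
        return 1.0
    x = d * math.pi / 2
    return math.atan(x) / x

def f2(d):
    """Gain of all other channels: f1 faded out towards the origin."""
    return f1(d) * (1 - math.exp(-d))

print(f1(0), f2(0))   # at the origin only the W channel remains
print(f1(10) * 10)    # close to 1: f1 approximates 1/d at a distance
```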
In the next example the UDO ambi2D_enc_dist_n encodes a sound at any order with distance cor-
rection. The inputs of the UDO are asnd, iorder, kazimuth and kdistance. If the distance becomes
negative the azimuth angle is turned to its opposite (kaz += π) and the distance taken positive.
EXAMPLE 05B13_udo_ambisonics2D_4.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 8
0dbfs = 1
#include "../SourceMaterials/ambisonics2D_udos.txt"
; distance encoding
; with any distance (includes zero and negative distance)
if kk > 0 goto c1
zawm asndW,0
endop
zakinit 17, 1
instr 1
asnd rand p4
;asnd soundin "/Users/user/csound/ambisonic/violine.aiff"
kaz line 0,p3,p5*360 ;turns around p5 times in p3 seconds
kdist line p6,p3,p7
ambi2D_enc_dist_n asnd,8,kaz,kdist
endin
instr 10
a1,a2,a3,a4,
a5,a6,a7,a8 ambi2D_decode 8,0,45,90,135,180,225,270,315
outc a1,a2,a3,a4,a5,a6,a7,a8
zacl 0,16
endin
</CsInstruments>
<CsScore>
f1 0 32768 10 1
; amp turns dist1 dist2
i1 0 4 1 0 2 -2
;i1 0 4 1 1 1 1
i10 0 4
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
In order to simulate the absorption of the air we introduce a very simple lowpass filter with a
distance-dependent cutoff frequency. We produce a Doppler shift with a distance-dependent delay
of the sound. Now we have to determine our unit, since the delay of the sound wave is calculated
as distance divided by sound velocity. In our example udo_ambisonics2D_5.csd we set the unit
to 1 metre. These procedures are performed before the encoding. In instrument 1 the movement
of the sound source is defined in Cartesian coordinates. The UDO xy_to_ad transforms them into
polar coordinates. The B-format channels can be written to a sound file with the opcode fout. The
UDO write_ambi2D_2 writes the channels up to second order into a sound file.
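The numbers in the Doppler UDO of the example below are easy to check: the deltapi constant 0.0029137529 is 1/343.2 (seconds of delay per metre), and the 0.5-second delay line together with the 10 ms offset covers distances up to roughly 168 m. A Python sketch of that arithmetic:

```python
SPEED_OF_SOUND = 343.2  # m/s, as assumed in the example below

def doppler_delay(dist_m):
    """Delay time read by deltapi: distance/speed of sound plus a 10 ms offset."""
    return dist_m / SPEED_OF_SOUND + 0.01

print(1 / SPEED_OF_SOUND)              # the constant 0.0029137529 used in the csd
print((0.5 - 0.01) * SPEED_OF_SOUND)   # maximum distance covered by delayr 0.5
```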
EXAMPLE 05B14_udo_ambisonics2D_5.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 8
0dbfs = 1
#include "../SourceMaterials/ambisonics2D_udos.txt"
#include "../SourceMaterials/ambisonics_utilities.txt" ;Absorb and Doppler
/*
opcode Absorb, a, ak
asnd,kdist xin
aabs tone 5*asnd,20000*exp(-.1*kdist)
xout aabs
endop
opcode Doppler, a, ak
asnd,kdist xin
abuf delayr .5
adop deltapi interp(kdist)*0.0029137529 + .01 ; 1/343.2
delayw asnd
xout adop
endop
*/
opcode write_ambi2D_2, 0, S
Sname xin
fout Sname,12,zar(0),zar(1),zar(2),zar(3),zar(4)
endop
instr 1
asnd buzz p4,p5,50,1
;asnd soundin "/Users/user/csound/ambisonic/violine.aiff"
kx line p7,p3,p8
ky line p9,p3,p10
kaz,kdist xy_to_ad kx,ky
aabs absorb asnd,kdist
adop Doppler .2*aabs,kdist
ambi2D_enc_dist adop,5,kaz,kdist
endin
outc a1,a2,a3,a4,a5,a6,a7,a8
; fout "B_format2D.wav",12,zar(0),zar(1),zar(2),zar(3),zar(4),
; zar(5),zar(6),zar(7),zar(8),zar(9),zar(10)
write_ambi2D_2 "ambi_ex5.wav"
zacl 0,16 ; clear the za variables
endin
</CsInstruments>
<CsScore>
f1 0 32768 10 1
; amp f 0 x1 x2 y1 y2
i1 0 5 .8 200 0 40 -20 1 .1
i10 0 5
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
The channels of the Ambisonic B-format are computed as the product of the sounds themselves
and the so-called spherical harmonics representing the direction to the virtual sound sources.
The spherical harmonics can be normalised in various ways. We shall use the so-called
semi-normalised spherical harmonics. The following table shows the encoding functions up to the
third order, as functions of azimuth and elevation, Ymn(θ,δ), and as functions of x, y and z,
Ymn(x,y,z), for sound sources on the unit sphere. The decoding formulae for symmetrical speaker
setups are the same.
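The first two orders of these encoding functions can be written out directly; the following Python sketch mirrors the channel comments (Y, Z, X, V, S, R, T, U) in the UDO code of the examples below, for sources on the unit sphere:

```python
import math

def encode_order1(az, el):
    """First-order functions Y(1,-1), Y(1,0), Y(1,1) for a source at (az, el)."""
    return (math.cos(el) * math.sin(az),   # Y
            math.sin(el),                  # Z
            math.cos(el) * math.cos(az))   # X

def encode_order2(az, el):
    """Second-order functions Y(2,-2) ... Y(2,2) (semi-normalised)."""
    c2 = math.sqrt(3) / 2
    return (c2 * math.cos(el) ** 2 * math.sin(2 * az),  # V
            c2 * math.sin(2 * el) * math.sin(az),       # S
            0.5 * (3 * math.sin(el) ** 2 - 1),          # R
            c2 * math.sin(2 * el) * math.cos(az),       # T
            c2 * math.cos(el) ** 2 * math.cos(2 * az))  # U

print(encode_order1(0, 0))  # source at the front: only X is non-zero
```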
In the first three of the following examples we will not produce sound but display in number boxes
(for example using CsoundQt widgets) the amplitude of 3 speakers at positions (1, 0, 0), (0, 1, 0)
and (0, 0, 1) in Cartesian coordinates. The position of the sound source can be changed with the
two scroll numbers. The example udo_ambisonics_1.csd shows encoding up to second order. The
decoding is done in two steps. First we decode the B-format for one speaker. In the second step,
we create an overloaded opcode for n speakers. The number of output signals determines which
version of the opcode is used. The UDOs ambi_encode and ambi_decode up to 8th order are saved
in the text file ambisonics_udos.txt.
EXAMPLE 05B15_udo_ambisonics_1.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
zawm asnd,0 ; W
zawm kcos_el*ksin_az*asnd,1 ; Y = Y(1,-1)
zawm ksin_el*asnd,2 ; Z = Y(1,0)
zawm kcos_el*kcos_az*asnd,3 ; X = Y(1,1)
i2 = sqrt(3)/2
kcos_el_p2 = kcos_el*kcos_el
ksin_el_p2 = ksin_el*ksin_el
kcos_2az = cos(2*kaz)
ksin_2az = sin(2*kaz)
kcos_2el = cos(2*kel)
ksin_2el = sin(2*kel)
endop
ic2 = sqrt(3)/2
icos_el_p2 = icos_el*icos_el
isin_el_p2 = isin_el*isin_el
icos_2az = cos(2*iaz)
isin_2az = sin(2*iaz)
icos_2el = cos(2*iel)
isin_2el = sin(2*iel)
i4 = ic2*icos_el_p2*isin_2az ; V = Y(2,-2)
i5 = ic2*isin_2el*isin_az ; S = Y(2,-1)
i6 = .5*(3*isin_el_p2 - 1) ; R = Y(2,0)
i7 = ic2*isin_2el*icos_az ; T = Y(2,1)
i8 = ic2*icos_el_p2*icos_2az ; U = Y(2,2)
end:
xout aout
endop
instr 1
asnd init 1
;kdist init 1
kaz invalue "az"
kel invalue "el"
ambi_encode asnd,2,kaz,kel
endin
</CsInstruments>
<CsScore>
;f1 0 1024 10 1
f17 0 64 -2 0 0 0 90 0 0 90 0 0 0 0 0 0
i1 0 100
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
The next example shows in-phase decoding. The weights up to 8th order are stored in the array
iWeight3D[][].
EXAMPLE 05B16_udo_ambisonics_2.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
zakinit 81, 1 ; zak space for up to 81 channels of the 8th order B-format
iorder,iaz,iel xin
iaz = $M_PI*iaz/180
iel = $M_PI*iel/180
a0=zar(0)
if iorder > 0 goto c0
aout = a0
goto end
c0:
a1=iWeight3D[iorder-1][0]*zar(1)
a2=iWeight3D[iorder-1][0]*zar(2)
a3=iWeight3D[iorder-1][0]*zar(3)
icos_el = cos(iel)
isin_el = sin(iel)
icos_az = cos(iaz)
isin_az = sin(iaz)
i1 = icos_el*isin_az ; Y = Y(1,-1)
i2 = isin_el ; Z = Y(1,0)
i3 = icos_el*icos_az ; X = Y(1,1)
if iorder > 1 goto c1
aout = (3/4)*(a0 + i1*a1 + i2*a2 + i3*a3)
goto end
c1:
a4=iWeight3D[iorder-1][1]*zar(4)
a5=iWeight3D[iorder-1][1]*zar(5)
a6=iWeight3D[iorder-1][1]*zar(6)
a7=iWeight3D[iorder-1][1]*zar(7)
a8=iWeight3D[iorder-1][1]*zar(8)
ic2 = sqrt(3)/2
icos_el_p2 = icos_el*icos_el
isin_el_p2 = isin_el*isin_el
icos_2az = cos(2*iaz)
isin_2az = sin(2*iaz)
icos_2el = cos(2*iel)
isin_2el = sin(2*iel)
i4 = ic2*icos_el_p2*isin_2az ; V = Y(2,-2)
i5 = ic2*isin_2el*isin_az ; S = Y(2,-1)
i6 = .5*(3*isin_el_p2 - 1) ; R = Y(2,0)
i7 = ic2*isin_2el*icos_az ; T = Y(2,1)
i8 = ic2*icos_el_p2*icos_2az ; U = Y(2,2)
aout = (1/3)*(a0 + 3*i1*a1 + 3*i2*a2 + 3*i3*a3 + 5*i4*a4 + 5*i5*a5 + \
5*i6*a6 + 5*i7*a7 + 5*i8*a8)
end:
xout aout
endop
instr 1
asnd init 1
kdist init 1
kaz invalue "az"
kel invalue "el"
ambi_encode asnd,8,kaz,kel
ao1,ao2,ao3 ambi_dec_inph 8,17
outvalue "sp1", downsamp(ao1)
outvalue "sp2", downsamp(ao2)
outvalue "sp3", downsamp(ao3)
zacl 0,80
endin
</CsInstruments>
<CsScore>
f1 0 1024 10 1
f17 0 64 -2 0 0 0 90 0 0 90 0 0 0 0 0 0 0 0 0 0
i1 0 100
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
EXAMPLE 05B17_udo_ambisonics_3.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
#include "../SourceMaterials/ambisonics_udos.txt"
endop
instr 1
asnd init 1
kaz invalue "az"
kel invalue "el"
kdist invalue "dist"
ambi_enc_dist asnd,5,kaz,kel,kdist
ao1,ao2,ao3,ao4 ambi_decode 5,17
outvalue "sp1", downsamp(ao1)
outvalue "sp2", downsamp(ao2)
outvalue "sp3", downsamp(ao3)
outvalue "sp4", downsamp(ao4)
outc 0*ao1,0*ao2;,2*ao3,2*ao4
zacl 0,80
endin
</CsInstruments>
<CsScore>
f17 0 64 -2 0 0 0 90 0 180 0 0 90 0 0 0 0
i1 0 100
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
EXAMPLE 05B18_udo_ambisonics_4.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 8
0dbfs = 1
zakinit 16, 1
#include "../SourceMaterials/ambisonics_udos.txt"
#include "../SourceMaterials/ambisonics_utilities.txt"
instr 1
asnd buzz p4,p5,p6,1
kt line 0,p3,p3
kaz,kel,kdist xyz_to_aed 10*sin(kt),10*sin(.78*kt),10*sin(.43*kt)
adop Doppler asnd,kdist
ambi_enc_dist adop,3,kaz,kel,kdist
a1,a2,a3,a4,a5,a6,a7,a8 ambi_decode 3,17
;k0 ambi_write_B "B_form.wav",8,14
outc a1,a2,a3,a4,a5,a6,a7,a8
zacl 0,15
endin
</CsInstruments>
<CsScore>
f1 0 32768 10 1
f17 0 64 -2 0 -45 35.2644 45 35.2644 135 35.2644 225 35.2644 \
-45 -35.2644 .7854 -35.2644 135 -35.2644 225 -35.2644
i1 0 40 .5 300 40
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
𝑃(𝛾, 𝑚) = (½ + ½ 𝑐𝑜𝑠 𝛾)^𝑚
where γ denotes the angle between a sound source and a speaker and m denotes the order. If
the speakers are positioned on a unit sphere the cosine of the angle γ is calculated as the scalar
product of the vector to the sound source (x, y, z) and the vector to the speaker (xs , ys , zs ).
In contrast to Ambisonics the order indicated in the function does not have to be an integer. This
means that the order can be continuously varied during decoding. The function can be used in
both Ambisonics and Ambisonics2D.
This system of panning is called Ambisonics Equivalent Panning. It has the disadvantage of not
producing a B-format representation, but its implementation is straightforward and the compu-
tation time is short and independent of the Ambisonics order simulated. Hence it is particularly
useful for real-time applications, for panning in connection with sequencer programs and for ex-
perimentation with high and non-integral Ambisonic orders.
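A Python sketch of the panning function (illustrative only; source and speaker positions are given as unit vectors):

```python
def aep_gain(source, speaker, order):
    """Ambisonics equivalent panning gain P(gamma, m) for one speaker.

    cos(gamma) is the scalar product of the two unit vectors;
    the order may be non-integral.
    """
    cos_gamma = sum(a * b for a, b in zip(source, speaker))
    return (0.5 + 0.5 * cos_gamma) ** order

front = (1.0, 0.0, 0.0)
left = (0.0, 1.0, 0.0)
print(aep_gain(front, front, 24))  # source in the speaker direction: gain 1
print(aep_gain(front, left, 24))   # 90 degrees off at order 24: practically silent
print(aep_gain(front, left, 2.5))  # non-integral orders are allowed
```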
The opcode AEP1 in the next example shows the calculation of Ambisonics equivalent panning
for one speaker. The opcode AEP then uses AEP1 to produce the signals for several speakers. In
the text file AEP_udos.txt, AEP is implemented for up to 16 speakers. The positions of the speakers
must be written in a function table, whose first parameter must be the maximal speaker distance.
EXAMPLE 05B19_udo_AEP.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 4
0dbfs = 1
;#include "ambisonics_udos.txt"
table(6,ifn)*sin(($M_PI/180)*table(5,ifn)),
table(9,ifn)*cos(($M_PI/180)*table(8,ifn))*cos(($M_PI/180)*table(7,ifn)),
table(9,ifn)*cos(($M_PI/180)*table(8,ifn))*sin(($M_PI/180)*table(7,ifn)),
table(9,ifn)*sin(($M_PI/180)*table(8,ifn)),
table(12,ifn)*cos(($M_PI/180)*table(11,ifn))*\
cos(($M_PI/180)*table(10,ifn)),
table(12,ifn)*cos(($M_PI/180)*table(11,ifn))*\
sin(($M_PI/180)*table(10,ifn)),
table(12,ifn)*sin(($M_PI/180)*table(11,ifn))
a1 AEP1 ain,korder,ispeaker[1],ispeaker[2],ispeaker[3],
idsmax,kx,ky,kz,kdist,kfade,kgain
a2 AEP1 ain,korder,ispeaker[4],ispeaker[5],ispeaker[6],
idsmax,kx,ky,kz,kdist,kfade,kgain
a3 AEP1 ain,korder,ispeaker[7],ispeaker[8],ispeaker[9],
idsmax,kx,ky,kz,kdist,kfade,kgain
a4 AEP1 ain,korder,ispeaker[10],ispeaker[11],ispeaker[12],
idsmax,kx,ky,kz,kdist,kfade,kgain
xout a1,a2,a3,a4
endop
instr 1
ain rand 1
;ain soundin "/Users/user/csound/ambisonic/violine.aiff"
kt line 0,p3,360
korder init 24
;kdist Dist kx, ky, kz
a1,a2,a3,a4 AEP ain,korder,17,kt,0,1
outc a1,a2,a3,a4
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
ambisonics2D_udos.txt
ambi2D_encode asnd, iorder, kazimuth (any order) (azimuth in degrees)
ambi2D_enc_dist asnd, iorder, kazimuth, kdistance
a1 [, a2] ... [, a8] ambi2D_decode iorder, iaz1 [, iaz2] ... [, iaz8]
a1 [, a2] ... [, a8] ambi2D_dec_inph iorder, iaz1 [, iaz2] ... [, iaz8]
(order <= 12)
ambi2D_write_B "name", iorder, ifile_format
ambi2D_read_B "name", iorder (order <= 19)
kaz, kdist xy_to_ad kx, ky
ambisonics_utilities.txt
kdist dist kx, ky
kdist dist kx, ky, kz
ares Doppler asnd, kdistance
ares absorb asnd, kdistance
kx, ky, kz aed_to_xyz kazimuth, kelevation, kdistance
ix, iy, iz aed_to_xyz iazimuth, ielevation, idistance
a1 [, a2] ... [, a16] dist_corr a1 [, a2] ... [, a16], ifn
f ifn 0 32 -2 max_speaker_distance dist1, dist2, ... (distances in m)
irad radiani idegree
krad radian kdegree
arad radian adegree
idegree degreei irad
kdegree degree krad
adegree degree arad
05 C. FILTERS
Audio filters can range from devices that subtly shape the tonal characteristics of a sound to ones
that dramatically remove whole portions of a sound spectrum to create new sounds. Csound
includes several versions of each of the commonest types of filters and some more esoteric ones
also. The full list of Csound’s standard filters can be found here. A list of the more specialised
filters can be found here.
Lowpass Filters
The first type of filter encountered is normally the lowpass filter. As its name suggests it al-
lows lower frequencies to pass through unimpeded and therefore filters higher frequencies. The
crossover frequency is normally referred to as the cutoff frequency. Filters of this type do not really
cut frequencies off at the cutoff point like a brick wall but instead attenuate increasingly according
to a cutoff slope. Different filters offer cutoff slopes of different steepness. Another aspect of a
lowpass filter that we may be concerned with is a ripple that might emerge at the cutoff point. If
this is exaggerated intentionally it is referred to as resonance or Q.
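The difference between cutoff slopes can be made concrete with the idealised magnitude responses of a one-pole lowpass (roughly the behaviour of tone) and a second-order Butterworth lowpass (butlp). This Python sketch is an approximation for illustration, not the opcodes' exact digital responses:

```python
import math

def lp1(f, fc):
    """One-pole lowpass magnitude: -3 dB at fc, about -6 dB per octave beyond."""
    return 1 / math.sqrt(1 + (f / fc) ** 2)

def butlp2(f, fc):
    """2nd-order Butterworth lowpass magnitude: -3 dB at fc, -12 dB per octave."""
    return 1 / math.sqrt(1 + (f / fc) ** 4)

def db(x):
    return 20 * math.log10(x)

fc = 1000
print(db(lp1(16 * fc, fc)) - db(lp1(8 * fc, fc)))        # about -6 dB per octave
print(db(butlp2(16 * fc, fc)) - db(butlp2(8 * fc, fc)))  # about -12 dB per octave
```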
In the following example, three lowpass filters are demonstrated: tone, butlp and moogladder.
tone offers a quite gentle cutoff slope and is therefore better suited to subtle spectral
enhancement tasks. butlp is based on the Butterworth filter design and produces a much sharper
cutoff slope at the expense of a slightly greater CPU overhead. moogladder is an interpretation of
an analogue filter found in a Moog synthesizer – it includes a resonance control.
In the example a sawtooth waveform is played in turn through each filter. Each time the cutoff fre-
quency is modulated using an envelope, starting high and descending low so that more and more
of the spectral content of the sound is removed as the note progresses. A sawtooth waveform
has been chosen as it contains strong higher frequencies and therefore demonstrates the filters'
characteristics well; a sine wave would be a poor choice of source sound on account of its lack of
spectral richness.
EXAMPLE 05C01_tone_butlp_moogladder.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
prints "tone%n" ; indicate filter type in console
aSig vco2 0.5, 150 ; input signal is a sawtooth waveform
kcf expon 10000,p3,20 ; descending cutoff frequency
aSig tone aSig, kcf ; filter audio signal
out aSig ; filtered audio sent to output
endin
instr 2
prints "butlp%n" ; indicate filter type in console
aSig vco2 0.5, 150 ; input signal is a sawtooth waveform
kcf expon 10000,p3,20 ; descending cutoff frequency
aSig butlp aSig, kcf ; filter audio signal
out aSig ; filtered audio sent to output
endin
instr 3
prints "moogladder%n" ; indicate filter type in console
aSig vco2 0.5, 150 ; input signal is a sawtooth waveform
kcf expon 10000,p3,20 ; descending cutoff frequency
aSig moogladder aSig, kcf, 0.9 ; filter audio signal
out aSig ; filtered audio sent to output
endin
</CsInstruments>
<CsScore>
; 3 notes to demonstrate each filter in turn
i 1 0 3; tone
i 2 4 3; butlp
i 3 8 3; moogladder
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Highpass Filters
A highpass filter is the converse of a lowpass filter; frequencies higher than the cutoff point are
allowed to pass whilst those lower are attenuated. atone and buthp are the analogues of tone
and butlp. Resonant highpass filters are harder to find but Csound has one in bqrez. bqrez is
actually a multi-mode filter and could also be used as a resonant lowpass filter amongst other
things. We can choose which mode we want by setting one of its input arguments appropriately.
Resonant highpass is mode 1. In this example a sawtooth waveform is again played through each
of the filters in turn but this time the cutoff frequency moves from low to high. Spectral content is
increasingly removed but from the opposite spectral direction.
EXAMPLE 05C02_atone_buthp_bqrez.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
prints "atone%n" ; indicate filter type in console
aSig vco2 0.2, 150 ; input signal is a sawtooth waveform
kcf expon 20, p3, 20000 ; define envelope for cutoff frequency
aSig atone aSig, kcf ; filter audio signal
out aSig ; filtered audio sent to output
endin
instr 2
prints "buthp%n" ; indicate filter type in console
aSig vco2 0.2, 150 ; input signal is a sawtooth waveform
kcf expon 20, p3, 20000 ; define envelope for cutoff frequency
aSig buthp aSig, kcf ; filter audio signal
out aSig ; filtered audio sent to output
endin
instr 3
prints "bqrez(mode:1)%n" ; indicate filter type in console
aSig vco2 0.03, 150 ; input signal is a sawtooth waveform
kcf expon 20, p3, 20000 ; define envelope for cutoff frequency
aSig bqrez aSig, kcf, 30, 1 ; filter audio signal
out aSig ; filtered audio sent to output
endin
</CsInstruments>
<CsScore>
; 3 notes to demonstrate each filter in turn
i 1 0 3 ; atone
i 2 5 3 ; buthp
i 3 10 3 ; bqrez(mode 1)
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Bandpass Filters
A bandpass filter allows just a narrow band of sound to pass through unimpeded and as such is a
little bit like a combination of a lowpass and highpass filter connected in series. We normally ex-
pect at least one additional parameter of control: control over the width of the band of frequencies
allowed to pass through, or bandwidth.
In the next example cutoff frequency and bandwidth are demonstrated independently for two dif-
ferent bandpass filters offered by Csound. First of all a sawtooth waveform is passed through a
reson filter and a butbp filter in turn while the cutoff frequency rises (bandwidth remains static).
Then pink noise is passed through reson and butbp in turn again but this time the cutoff frequency
remains static at 5000Hz while the bandwidth expands from 8 to 5000Hz. In the latter two notes
it will be heard how the resultant sound moves from almost a pure sine tone to unpitched noise.
butbp is obviously the Butterworth-based bandpass filter. reson can produce dramatic variations
in amplitude depending on the bandwidth value, and therefore some balancing of amplitude in the
output signal may be necessary if out-of-range samples and distortion are to be avoided. Fortu-
nately the opcode itself includes two built-in modes of amplitude balancing, but by default neither
of these is active, in which case use of the balance opcode may be required. Mode 1 seems to
work well with spectrally sparse sounds like harmonic tones, while mode 2 works well with
spectrally dense sounds such as white or pink noise.
EXAMPLE 05C03_reson_butbp.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
prints "reson%n" ; indicate filter type in console
aSig vco2 0.5, 150 ; input signal: sawtooth waveform
kcf expon 20,p3,10000 ; rising cutoff frequency
aSig reson aSig,kcf,kcf*0.1,1 ; filter audio signal
out aSig ; send filtered audio to output
endin
instr 2
prints "butbp%n" ; indicate filter type in console
aSig vco2 0.5, 150 ; input signal: sawtooth waveform
kcf expon 20,p3,10000 ; rising cutoff frequency
aSig butbp aSig, kcf, kcf*0.1 ; filter audio signal
out aSig ; send filtered audio to output
endin
instr 3
prints "reson%n" ; indicate filter type in console
aSig pinkish 0.5 ; input signal: pink noise
kbw expon 10000,p3,8 ; contracting bandwidth
aSig reson aSig, 5000, kbw, 2 ; filter audio signal
out aSig ; send filtered audio to output
endin
instr 4
prints "butbp%n" ; indicate filter type in console
aSig pinkish 0.5 ; input signal: pink noise
kbw expon 10000,p3,8 ; contracting bandwidth
aSig butbp aSig, 5000, kbw ; filter audio signal
out aSig ; send filtered audio to output
endin
</CsInstruments>
<CsScore>
i 1 0 3 ; reson - cutoff frequency rising
i 2 4 3 ; butbp - cutoff frequency rising
i 3 8 3 ; reson - bandwidth contracting
i 4 12 3 ; butbp - bandwidth contracting
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Comb Filtering
A comb filter is a special type of filter that creates a harmonically related stack of resonance peaks
on an input sound. A comb filter is really just a very short delay effect with feedback. Typically
the delay times involved will be less than 0.05 seconds. Many of the comb filters documented in
in the Csound Manual term this delay time, loop time. The fundamental of the harmonic stack of
resonances produced will be 1/loop time. Loop time and the frequencies of the resonance peaks
will be inversely proportional – as loop time gets smaller, the frequencies rise. For a loop time of
0.02 seconds, the fundamental resonance peak will be 50Hz, the next peak 100Hz, the next 150Hz
and so on. Feedback is normally implemented as reverb time – the time taken for amplitude to
drop to 1/1000 of its original level or by 60dB. This use of reverb time as opposed to feedback
alludes to the use of comb filters in the design of reverb algorithms. Negative reverb times will
result in only the odd numbered partials of the harmonic stack being present.
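The relationships described above are simple to compute. This Python sketch (an illustration of the arithmetic, not how vcomb is implemented internally) derives the resonance peaks from a loop time, and the per-pass feedback gain implied by a reverb time:

```python
def resonance_peaks(loop_time, count=3):
    """Frequencies of the first resonance peaks: multiples of 1/loop_time."""
    f0 = 1 / loop_time
    return [f0 * (i + 1) for i in range(count)]

def feedback_gain(loop_time, reverb_time):
    """Per-pass gain so that amplitude falls by 60 dB (to 1/1000) after reverb_time."""
    return 0.001 ** (loop_time / reverb_time)

print(resonance_peaks(0.02))    # 50, 100 and 150 Hz, as described above
print(feedback_gain(0.005, 2))  # long reverb time: gain just below 1
```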
The following example demonstrates a comb filter using the vcomb opcode. This opcode allows
for performance time modulation of the loop time parameter. For the first 5 seconds of the demon-
stration the reverb time increases from 0.1 seconds to 2 while the loop time remains constant at
0.005 seconds. Then the loop time decreases to 0.0005 seconds over 6 seconds (the resonant
peaks rise in frequency), finally over the course of 10 seconds the loop time rises to 0.1 seconds
(the resonant peaks fall in frequency). A repeating noise impulse is used as a source sound to
best demonstrate the qualities of a comb filter.
EXAMPLE 05C04_comb.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
; -- generate an input audio signal (noise impulses) --
; repeating amplitude envelope:
kEnv loopseg 1,0, 0,1,0.005,1,0.0001,0,0.9949,0
aSig pinkish kEnv*0.6 ; pink noise pulses
; -- apply the comb filter --
krvt linseg 0.1, 5, 2 ; reverb time rises from 0.1 to 2 over 5 seconds
klpt linseg 0.005, 5, 0.005, 6, 0.0005, 10, 0.1 ; loop time envelope
aRes vcomb aSig, krvt, klpt, 0.1 ; comb filter (maximum loop time 0.1)
out aRes ; send filtered audio to output
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Other Filters Worth Investigating
eqfil is essentially a parametric equaliser but multiple iterations could be used as modules in a
graphic equaliser bank. In addition to the capabilities of eqfil, pareq adds the possibility of creating
low and high shelving filtering which might prove useful in mastering or in spectral adjustment of
more developed sounds.
rbjeq offers a quite comprehensive multimode filter including highpass, lowpass, bandpass, ban-
dreject, peaking, low-shelving and high-shelving, all in a single opcode.
statevar offers the outputs from four filter types - highpass, lowpass, bandpass and bandreject -
simultaneously so that the user can morph between them smoothly. svfilter does a similar thing
but with just highpass, lowpass and bandpass filter types.
phaser1 and phaser2 offer algorithms containing chains of first order and second order allpass
filters respectively. These algorithms could conceivably be built from individual allpass filters, but
these ready-made versions provide convenience and added efficiency.
For those wishing to devise their own filter using coefficients Csound offers filter2 and zfilter2.
Filter Comparison
The following example provides a comparison between a number of commonly used filters.
EXAMPLE 05C05_filter_compar.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
gaOut init 0
giSpb init 0.45
; Filter types
#define MOOG_LADDER #1#
#define MOOG_VCF #2#
#define LPF18 #3#
#define BQREZ #4#
#define CLFILT #5#
#define BUTTERLP #6#
#define LOWRES #7#
#define REZZY #8#
#define SVFILTER #9#
#define VLOWRES #10#
#define STATEVAR #11#
#define MVCLPF1 #12#
#define MVCLPF2 #13#
#define MVCLPF3 #14#
opcode Echo, 0, S
Smsg xin
printf_i "\n%s\n\n", 1, Smsg
endop
opcode EchoFilterName, 0, i
iType xin
if iType == $MOOG_LADDER then
Echo "moogladder"
elseif iType == $MOOG_VCF then
Echo "moogvcf"
elseif iType == $LPF18 then
Echo "lpf18"
elseif iType == $BQREZ then
Echo "bqrez"
elseif iType == $CLFILT then
Echo "clfilt"
elseif iType == $BUTTERLP then
Echo "butterlp"
elseif iType == $LOWRES then
Echo "lowres"
elseif iType == $REZZY then
Echo "rezzy"
elseif iType == $SVFILTER then
Echo "svfilter"
elseif iType == $VLOWRES then
Echo "vlowres"
elseif iType == $STATEVAR then
Echo "statevar"
elseif iType == $MVCLPF1 then
Echo "mvclpf1"
elseif iType == $MVCLPF2 then
Echo "mvclpf2"
else
Echo "mvclpf3"
endif
endop
opcode Wave, a, k
kcps xin
aout vco2 0.5, kcps ; sawtooth oscillator
xout aout
endop
instr Bass
iCoeff = p4
iCps = p5
iFilterType = p6
endin
instr Notes
iFilterType = p4
EchoFilterName iFilterType
turnoff
endin
opcode TrigNotes, 0, ii
iNum, iFilterType xin
idt = 20
event_i "i", "Notes", idt * iNum, 0, iFilterType
endop
instr PlayAll
iMixLevel = p4
event_i "i", "Main", 0, (14 * 20), iMixLevel
TrigNotes 0, $MOOG_LADDER
TrigNotes 1, $MOOG_VCF
TrigNotes 2, $LPF18
TrigNotes 3, $BQREZ
TrigNotes 4, $CLFILT
TrigNotes 5, $BUTTERLP
TrigNotes 6, $LOWRES
TrigNotes 7, $REZZY
TrigNotes 8, $SVFILTER
TrigNotes 9, $VLOWRES
TrigNotes 10, $STATEVAR
TrigNotes 11, $MVCLPF1
TrigNotes 12, $MVCLPF2
TrigNotes 13, $MVCLPF3
turnoff
endin
instr DumpAll
iMixLevel = p4
turnoff
endin
instr Dump
SFile = p4
iMixLevel = p5
iVolume = 0.2
iReverbFeedback = 0.85
endin
instr Main
iVolume = 0.2
iReverbFeedback = 0.3
iMixLevel = p4
gaOut = 0
endin
</CsInstruments>
<CsScore>
; the fourth parameter is a reverb mix level
i "PlayAll" 0 1 0.35
; uncomment to save output to wav files
;i "DumpAll" 0 1 0.35
</CsScore>
</CsoundSynthesizer>
;example by Anton Kholomiov
;based on the Jacob Joaquin wobble bass sound
05 D. DELAY AND FEEDBACK
A delay in DSP is a special kind of buffer, sometimes called a circular buffer. The length of this
buffer is finite and must be declared upon initialization as it is stored in RAM. One way to think of
the circular buffer is that as new items are added at the beginning of the buffer the oldest items
at the end of the buffer are being “shoved” out.
Besides their typical application for creating echo effects, delays can also be used to implement
chorus, flanging, pitch shifting and filtering effects.
Csound offers many opcodes for implementing delays. Some of these offer varying degrees of
quality - often balanced against varying degrees of efficiency whilst some are for quite specialized
purposes.
When using delayr and delayw the establishement of a delay buffer is broken down into two steps:
reading from the end of the buffer using delayr (and by doing this defining the length or duration
of the buffer) and then writing into the beginning of the buffer using delayw.
aSigOut delayr 1
delayw aSigIn
where aSigIn is the input signal written into the beginning of the buffer and aSigOut is the output
signal read from the end of the buffer. The fact that we declare reading from the buffer before
writing to it is sometimes initially confusing but, as alluded to before, one reason this is done is
to declare the length of the buffer. The buffer length in this case is 1 second and this will be the
apparent time delay between the input audio signal and audio read from the end of the buffer.
The following example implements the delay described above in a .csd file. An input sound of
sparse sine tone pulses is created. This is written into the delay buffer from which a new audio
signal is created by read from the end of this buffer. The input signal (sometimes referred to as
the dry signal) and the delay output signal (sometimes referred to as the wet signal) are mixed and
set to the output. The delayed signal is attenuated with respect to the input signal.
EXAMPLE 05D01_delay.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; -- create an input signal: short 'blip' sounds --
kEnv loopseg 0.5, 0, 0, 0,0.0005, 1 , 0.1, 0, 1.9, 0, 0
kCps randomh 400, 600, 0.5
aEnv interp kEnv
aSig poscil aEnv, kCps
; -- create a delay buffer of 1 second --
aBufOut delayr 1 ; read audio from the end of the buffer
delayw aSig ; write audio into the beginning of the buffer
; -- send audio to output (input and output to the buffer are mixed)
aOut = aSig + (aBufOut*0.4)
out aOut/2, aOut/2
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Delay with Feedback
To add feedback to the delay, a portion of the buffer's output is mixed back into its input.
EXAMPLE 05D02_delay_feedback.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; -- create an input signal: short 'blip' sounds --
kEnv loopseg 0.5,0,0,0,0.0005,1,0.1,0,1.9,0,0 ; repeating envelope
kCps randomh 400, 600, 0.5 ; 'held' random values
aEnv interp kEnv ; a-rate envelope
aSig poscil aEnv, kCps ; generate audio
; -- create a delay buffer and mix feedback into its input --
aBufOut delayr 1 ; read audio from the end of the buffer
delayw aSig + (aBufOut*0.7) ; write input plus feedback into the buffer
; send audio to output (mix the input signal with the delayed signal)
aOut = aSig + (aBufOut*0.4)
out aOut/2, aOut/2
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
An alternative for implementing a simple delay-feedback line in Csound would be to use the delay
opcode. This is the same example done in this way:
EXAMPLE 05D03_delay_feedback_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
kEnv loopseg 0.5,0,0,0,0.0005,1,0.1,0,1.9,0,0
kCps randomh 400, 600, 0.5
aSig poscil a(kEnv), kCps
aDelay init 0 ; initialize the feedback signal
aDelay delay aSig + (aDelay*0.4), 1 ; 1 second delay with feedback
aOut = aSig + aDelay ; mix dry and delayed signal
out aOut/2, aOut/2
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy and joachim heintz
Tap Delay Line
The deltap family of opcodes reads one or more taps from within a delayr/delayw buffer, at arbitrary and even modulating delay times. The user must take care that the delay time demanded from the delay tap does not exceed the length of the buffer as defined in the delayr line. If it does, it will attempt to read data beyond the end of the RAM buffer – the results of this are unpredictable. The user must also take care that the delay time does not go below zero; in fact the minimum permissible delay time is the duration of one k-cycle (ksmps/sr).
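A sketch of how the tap time might be kept within these limits (the signal aSig and the modulating delay time kDelTime are assumed to exist already):

```csound
aBufOut delayr 1                      ; maximum delay time: 1 second
kSafe limit kDelTime, ksmps/sr, 1     ; clip tap time between one k-cycle and buffer length
aTap deltapi kSafe                    ; read an interpolating tap at that time
delayw aSig                           ; write audio into the buffer
```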
EXAMPLE 05D04_deltapi.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; -- create an input signal: short 'blip' sounds --
kEnv loopseg 0.5,0,0,0,0.0005,1,0.1,0,1.9,0,0
aEnv interp kEnv
aSig poscil aEnv, 500
; -- create a delay buffer and read a modulating tap from it --
aBufOut delayr 2 ; buffer of 2 seconds maximum
kDelTime randomi ksmps/sr, 1, 0.5 ; randomly moving delay time
aTap deltapi kDelTime ; read tap with linear interpolation
delayw aSig + (aTap*0.4) ; write input plus feedback into the buffer
; send audio to the output (mix the input signal with the delayed signal)
aOut linen aSig + (aTap*0.4), .1, p3, 1
out aOut/2, aOut/2
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
We are not limited to inserting only a single delay tap within the buffer. If we add further taps we
create what is known as a multi-tap delay. The following example implements a multi-tap delay
with three delay taps. Note that only the final delay (the one closest to the end of the buffer) is
fed back into the input in order to create feedback but all three taps are mixed and sent to the
output. There is no reason not to experiment with arrangements other than this, but this one is
most typical.
EXAMPLE 05D05_multi-tap_delay.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; -- create an input signal: short 'blip' sounds --
kEnv loopseg 0.5,0,0,0,0.0005,1,0.1,0,1.9,0,0; repeating envelope
kCps randomh 400, 1000, 0.5 ; 'held' random values
aEnv interp kEnv ; a-rate envelope
aSig poscil aEnv, kCps ; generate audio
; -- create a delay buffer with three taps --
aBufOut delayr 2 ; buffer of 2 seconds maximum
aTap1 deltap 0.1373 ; delay tap 1
aTap2 deltap 0.2197 ; delay tap 2
aTap3 deltap 0.4139 ; delay tap 3 (the longest)
delayw aSig + (aTap3*0.4) ; only the final tap is fed back
; send audio to the output (mix the input signal with the delayed signals)
aOut linen aSig + ((aTap1+aTap2+aTap3)*0.4), .1, p3, 1
out aOut/2, aOut/2
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Flanger
As mentioned at the top of this section, many familiar effects are actually created by using delay
buffers in various ways. We will briefly look at one of these effects: the flanger. Flanging derives
from a phenomenon which occurs when the delay time becomes so short that we no longer
perceive individual echoes. Instead a stack of harmonically related resonances is perceived,
whose frequencies are simple multiples of 1/delay time. This effect is known as a comb filter
and is explained in the previous chapter. When the delay time is slowly modulated, with the
resonances shifting up and down in sympathy, the effect becomes known as a flanger. In this
example the delay time of the flanger is modulated using an LFO that employs a U-shaped parabola
as its waveform, as this seems to provide the smoothest comb filter modulations.
EXAMPLE 05D06_flanger.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giLFOShape ftgen 0, 0, 2^12, 19, 0.5, 1, 180, 1 ; u-shaped curve for the LFO
instr 1
aSig pinkish 0.1 ; pink noise
; -- modulate the delay time and read a tap from the buffer --
aMod poscil 0.005, 0.05, giLFOShape ; delay time LFO
iOffset = ksmps/sr ; minimum delay: one k-cycle
aBufOut delayr 0.05 ; delay buffer of 50 ms maximum
aTap deltap3 aMod + iOffset ; read tap with cubic interpolation
delayw aSig + (aTap*0.9) ; write input plus feedback into the buffer
; send audio to the output (mix the input signal with the delayed signal)
aOut linen (aSig + aTap)/2, .1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
As an alternative to using the deltap group of opcodes, Csound provides opcodes whose names
begin with vdel (for variable delay line). Each of these establishes a single delay line per opcode.
This may be easier to write for one or a few taps, whereas for a large number of taps the method
described in the previous examples is preferable.
Some caution is needed regarding the units of the second and third arguments: vdelay and vdelay3
use milliseconds here, whereas vdelayx uses seconds (as nearly every other opcode in Csound).
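The difference in units can be seen when writing the same 100 millisecond delay with both opcodes (aSig is assumed to exist); note that vdelayx also expects its delay time as an audio signal:

```csound
aDelMs vdelay aSig, 100, 1000          ; delay time and maximum given in milliseconds
aDelSec vdelayx aSig, a(0.1), 0.5, 64  ; the same delay time given in seconds
```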
This is an identical version of the previous flanger example which uses vdelayx instead of deltap3.
The vdelayx opcode has an additional parameter which allows the user to set the number of sam-
ples to be used for interpolation between 4 and 1024. The higher the number, the better the quality,
requiring yet more rendering power.
EXAMPLE 05D07_flanger_2.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giLFOShape ftgen 0, 0, 2^12, 19, 0.5, 1, 180, 1 ; u-shaped curve for the LFO
instr 1
aSig pinkish 0.1
aDelay init 0 ; initialize the feedback signal
kFdback = 0.9 ; feedback ratio
iOffset = ksmps/sr ; minimum delay: one k-cycle
aMod poscil 0.005, 0.05, giLFOShape ; delay time LFO (in seconds)
aDelay vdelayx aSig+aDelay*kFdback, aMod+iOffset, 0.5, 128
aOut linen (aSig + aDelay)/2, .1, p3, 1
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 25
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy and joachim heintz
Custom Delay Line
It is also possible to build a delay line from scratch, using an array as circular buffer. The next example implements such a custom delay line; these are remarks on its code, line by line:
- Line 15: The array is created with the size delay-time times sample-rate, in our case 0.25 *
44100 = 11025. So 11025 samples can be stored in this array.
- Line 16-17: The read pointer kread_ptr is set to the second element (index=1), the write pointer
kwrite_ptr is set to the first element (index=0) at the beginning.
- Line 19-20: The audio signal as input for the delay line — it can be anything.
- Line 22-23, 30-31: The while loop iterates through each sample of the audio vector: from
kindx=0 to kindx=31 if ksmps is 32.
- Line 24: Each element of the audio vector is copied into the appropriate position of the array.
At the beginning, the first element of the audio vector is copied to position 0, the second
element to position 1, and so on.
- Line 25: The element in the array to which the read index *kread_ptr* points is copied to the
appropriate element of the delayed audio signal. As *kread_ptr* starts with 1 (not 0), at first
it can only copy zeros.
- Line 27-28: Both pointers are incremented by one and then the modulo is taken. This ensures
that the array is not read or written beyond its boundaries, but used as a circular buffer.
EXAMPLE 05D08_custom_delay_line.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr CustomDelayLine
idelay_time = 0.25 ; delay time in seconds
kdelay_line[] init idelay_time * sr ; the circular buffer: 11025 samples
kread_ptr init 1 ; read pointer starts at index 1
kwrite_ptr init 0 ; write pointer starts at index 0
asig vco2 0.2, 220 ; input signal - can be anything
adel init 0 ; will hold the delayed signal
kindx = 0
while (kindx < ksmps) do
kdelay_line[kwrite_ptr] = asig[kindx]
adel[kindx] = kdelay_line[kread_ptr]
kwrite_ptr = (kwrite_ptr + 1) % lenarray(kdelay_line)
kread_ptr = (kread_ptr + 1) % lenarray(kdelay_line)
kindx += 1
od
out(linen:a(asig,0,p3,1),linen:a(adel,0,p3,1))
endin
</CsInstruments>
<CsScore>
i "CustomDelayLine" 0 10
</CsScore>
</CsoundSynthesizer>
;example by Steven Yi
05 E. REVERBERATION
Reverb is the effect a room or space has on a sound where the sound we perceive is a mixture of
the direct sound and the dense overlapping echoes of that sound reflecting off walls and objects
within the space.
General considerations about using reverb in Csound
A typical arrangement is to create a global audio variable for the reverb send, and to initialize it to zero in the orchestra header:
gaRvbSend init 0
This is done so that if no sound generating instruments are playing at the beginning of the performance this variable still exists and has a value. An error would result otherwise and Csound would
not run. When audio is written into this variable in the sound generating instrument it is added to
the current value of the global variable.
This is done in order to permit polyphony and so that the state of this variable created by other
sound producing instruments is not overwritten. Finally it is important that the global variable is
cleared (assigned a value of zero) when it is finished with at the end of the reverb instrument. If
this were not done then the variable would quickly explode (get astronomically high) as all previous
instruments are merely adding values to it rather than redeclaring it. Clearing could be done simply
by setting to zero but the clear opcode might prove useful in the future as it provides us with the
opportunity to clear many variables simultaneously.
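For instance, at the end of a reverb instrument that uses several global send variables (the names here are only illustrative), one line suffices:

```csound
gaSendL = 0                          ; clearing by simple assignment ...
clear gaSendL, gaSendR, gaSendRear   ; ... or clearing several variables at once
```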
This example uses the freeverb opcode and is based on a plugin of the same name. Freeverb has
a smooth reverberant tail and is perhaps similar in sound to a plate reverb. It provides us with
two main parameters of control: room size which is essentially a control of the amount of internal
feedback and therefore reverb time, and high frequency damping which controls the amount of
attenuation of high frequencies. Both these parameters should be set within the range 0 to 1. For
room size a value of zero results in a very short reverb and a value of 1 results in a very long reverb.
For high frequency damping a value of zero provides minimum damping of higher frequencies
giving the impression of a space with hard walls, a value of 1 provides maximum high frequency
damping thereby giving the impression of a space with soft surfaces such as thick carpets and
heavy curtains.
EXAMPLE 05E01_freeverb.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gaSendL init 0
gaSendR init 0
instr 1 ; sound generating instrument - noise pulses
kEnv loopseg 5, 0, 0, 1, 0.003, 1, 0.0001, 0, 0.9969, 0 ; pulse envelope
aSig pinkish kEnv*0.5 ; noise pulses
out aSig, aSig ; dry signal to the output
gaSendL = gaSendL + aSig/2 ; add to the global reverb send variables
gaSendR = gaSendR + aSig/2
endin
instr 5 ; reverb - always on
kroomsize init 0.85 ; room size (range 0 to 1)
kHFDamp init 0.5 ; high frequency damping (range 0 to 1)
aRvbL, aRvbR freeverb gaSendL, gaSendR, kroomsize, kHFDamp
out aRvbL, aRvbR ; send reverberated audio to the output
clear gaSendL, gaSendR ; clear the global audio variables
endin
</CsInstruments>
<CsScore>
i 1 0 300 ; noise pulses (input sound)
i 5 0 302 ; start reverb
</CsScore>
</CsoundSynthesizer>
The audio from the sound generating instrument is mixed into a zak audio channel using the zawm
opcode like this:
zawm aSig * iRvbSendAmt, 1
This channel is read from in the reverb instrument using the zar opcode like this:
aInSig zar 1
Because audio is being mixed into our zak channel but the channel is never redefined (only mixed
into), it needs to be cleared after we have finished with it. This is accomplished at the bottom of the
reverb instrument using the zacl opcode like this:
zacl 0, 1
This example uses the reverbsc opcode. It too has a stereo input and output. The arguments that
define its character are feedback level and cutoff frequency. Feedback level should be in the range
zero to 1 and controls reverb time. Cutoff frequency should be within the range of human hearing
(20Hz - 20kHz) and less than the Nyquist frequency (sr/2) - it controls the cutoff frequencies of low
pass filters within the algorithm.
EXAMPLE 05E02_reverbsc.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gaSendL init 0
gaSendR init 0
instr 1 ; sound generating instrument - noise pulses
kEnv loopseg 5, 0, 0, 1, 0.003, 1, 0.0001, 0, 0.9969, 0
aSig pinkish kEnv*0.5
out aSig, aSig
gaSendL = gaSendL + aSig/2
gaSendR = gaSendR + aSig/2
endin
instr 5 ; reverb - always on
kFblvl init 0.85 ; feedback level - i.e. reverb time
kFco init 7000 ; cutoff frequency of the internal lowpass filters
aRvbL, aRvbR reverbsc gaSendL, gaSendR, kFblvl, kFco
out aRvbL, aRvbR
clear gaSendL, gaSendR
endin
</CsInstruments>
<CsScore>
i 1 0 10 ; noise pulses (input sound)
i 5 0 12 ; start reverb
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
reverbsc contains a mechanism to modulate delay times internally which has the effect of harmon-
ically blurring sounds the longer they are reverberated. This contrasts with freeverb’s rather static
reverberant tail. On the other hand reverbsc's tail is not as smooth as that of freeverb; individual
echoes are sometimes discernible so it may not be as well suited to the reverberation of percus-
sive sounds. Also be aware that as well as reducing the reverb time, the feedback level parameter
reduces the overall amplitude of the effect to the point where a setting of 1 will result in silence
from the opcode.
EXAMPLE 05E03_reverb_with_chn.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1 ; sound generating instrument - noise pulses
kEnv loopseg 5, 0, 0, 1, 0.003, 1, 0.0001, 0, 0.9969, 0
aSig pinkish kEnv*0.5
out aSig, aSig
chnmix aSig, "ReverbSend" ; mix audio into a named software channel
endin
instr 5 ; reverb - always on
aInSig chnget "ReverbSend" ; read audio from the named channel
aRvbL, aRvbR reverbsc aInSig, aInSig, 0.85, 7000
out aRvbL, aRvbR
chnclear "ReverbSend" ; clear the channel
endin
</CsInstruments>
<CsScore>
i 1 0 10 ; noise pulses (input sound)
i 5 0 12 ; start reverb
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The Schroeder Reverb Design
Many reverb algorithms including Csound's freeverb, reverb and nreverb are based on what is
known as the Schroeder reverb design. This was a design proposed in the early 1960s by the
physicist Manfred Schroeder. In the Schroeder reverb a signal is passed into four parallel comb
filters the outputs of which are summed and then passed through two allpass filters as shown in
the diagram below. Essentially the comb filters provide the body of the reverb effect and the all-
pass filters smear their resultant sound to reduce ringing artefacts the comb filters might produce.
More modern designs might extend the number of filters used in an attempt to create smoother
results. The freeverb opcode employs eight parallel comb filters followed by four series allpass
filters on each channel. The two main indicators of poor implementations of the Schroeder reverb
are individual echoes being excessively apparent and ringing artefacts. The results produced by
the freeverb opcode are very smooth but a criticism might be that it is lacking in character and is
more suggestive of a plate reverb than of a real room.
The next example implements the basic Schroeder reverb with four parallel comb filters followed
by three series allpass filters. This also proves a useful exercise in routing audio signals within
Csound. Perhaps the most crucial element of the Schroeder reverb is the choice of loop times
for the comb and allpass filters – careful choices here should obviate the undesirable artefacts
mentioned in the previous paragraph. If loop times are too long individual echoes will become
apparent, if they are too short the characteristic ringing of comb filters will become apparent. If
loop times between filters differ too much the outputs from the various filters will not fuse. It
is also important that the loop times are prime numbers so that echoes between different filters
do not reinforce each other. It may also be necessary to adjust loop times when implementing
very short reverbs or very long reverbs. The duration of the reverb is effectively determined by the
reverb times for the comb filters. There is certainly scope for experimentation with the design of
this example and exploration of settings other than the ones suggested here.
This example consists of five instruments. The fifth instrument implements the reverb algorithm
described above. The first four instruments act as a kind of generative drum machine to provide
source material for the reverb. Generally sharp percussive sounds provide the sternest test of a
reverb effect. Instrument 1 triggers the various synthesized drum sounds (bass drum, snare and
closed hi-hat) produced by instruments 2 to 4.
EXAMPLE 05E04_schroeder_reverb.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
; activate real time sound output and suppress note printing
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1
; print some information about current settings gleaned from the score
prints "Type:"
prints p6
prints "\nReverb Time:%2.1f\nDry/Wet Mix:%2.1f\n\n", p4, p5
</CsInstruments>
<CsScore>
; room reverb
i 1 0 10 ; start drum machine trigger instr
i 5 0 11 1 0.5 "Room Reverb" ; start reverb
; tight ambience
i 1 11 10 ; start drum machine trigger instr
i 5 11 11 0.3 0.9 "Tight Ambience" ; start reverb
</CsScore>
</CsoundSynthesizer>
This chapter has introduced some of the more recent Csound opcodes for delay-line based reverb
algorithms which in most situations can be used to provide high quality and efficient reverberation.
Convolution offers a whole new approach for the creation of realistic reverbs that imitate actual
spaces - this technique is demonstrated in the Convolution chapter.
05 F. AM / RM / WAVESHAPING
The spectrum of the carrier sound is shifted by plus and minus the modulator frequency. As this
is happening for each part of the spectrum, the source sound often seems to lose its center. A
piano sound easily becomes bell-like, and a voice can become gnomic.
In the following example, first three static modulating frequencies are applied. As the voice itself
has a somewhat floating pitch, we already hear a constantly moving artificial spectrum component.
This effect is emphasized in the second instrument which applies a random glissando to the
modulating frequency. If the random movements are slow (first with 1 Hz, then 10 Hz), the pitch
movements are still recognizable. If they are fast (100 Hz in the last call), the sound becomes
noisy.
1 This is the same for Granular Synthesis which can either be "pure" synthesis or applied on sampled sound.
EXAMPLE 05F01_RM_modification.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr RM_static
aMod poscil 1, p4
aCar diskin "fox.wav"
aRM = aMod * aCar
out aRM, aRM
endin
instr RM_moving
aMod poscil 1, randomi:k(400,1000,p4,3)
aCar diskin "fox.wav"
aRM = aMod * aCar
out aRM, aRM
endin
</CsInstruments>
<CsScore>
i "RM_static" 0 3 400
i . + . 800
i . + . 1600
i "RM_moving" 10 3 1
i . + . 10
i . + . 100
</CsScore>
</CsoundSynthesizer>
;written by Alex Hofmann and joachim heintz
In instrument RM_static, the fourth parameter of the score line (p4) directly yields the frequency
of the modulator. In instrument RM_moving, this frequency is a random movement between 400
and 1000 Hz, and p4 here yields the rate in which new random values are generated.
For amplitude modulation, a constant part - the DC offset - is added to the modulating signal. The
result is a mixture of unchanged and ring modulated sound, in different weights. The simplest
way to implement this is to add a part of the source signal to the ring modulated signal.
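A sketch of this idea (the instrument name, signal names and weighting are only illustrative): the DC offset determines how much of the unchanged source remains in the mix.

```csound
instr AM
kOffset = 0.5 ; DC offset: 1 = only unchanged signal, 0 = pure ring modulation
aMod poscil 1, 400 ; modulator
aCar diskin "fox.wav" ; carrier: sound file from disk
aAM = (kOffset + (1-kOffset)*aMod) * aCar ; amplitude modulation
out aAM, aAM
endin
```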
WAVESHAPING
In chapter 04E waveshaping has been described as a method of applying a transfer function to
an incoming signal. It has been discussed that the table which stores the transfer function must
498
05 F. AM / RM / WAVESHAPING WAVESHAPING
be read with an interpolating table reader to avoid degradation of the signal. On the other hand,
degradation can be a nice thing for sound modification. So let us start with this branch here.
Bit Depth = 2
EXAMPLE 05F02_Wvshp_bit_crunch.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; transfer function: four steps = 2 bit resolution
giTrnsFnc ftgen 0, 0, 2^12, -7, -1, 1024, -1, 0, -0.33, 1024, -0.33, 0, 0.33, 1024, 0.33, 0, 1, 1024, 1
instr 1
aAmp soundin "fox.wav"
aIndx = (aAmp + 1) / 2
aWavShp table aIndx, giTrnsFnc, 1
out aWavShp, aWavShp
endin
</CsInstruments>
<CsScore>
i 1 0 2.767
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 05F03_Wvshp_different_transfer_funs.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iTrnsFnc = p4
kEnv linseg 0, .01, 1, p3-.2, 1, .01, 0
aL, aR soundin "ClassGuit.wav"
aIndxL = (aL + 1) / 2
aWavShpL tablei aIndxL, iTrnsFnc, 1
aIndxR = (aR + 1) / 2
aWavShpR tablei aIndxR, iTrnsFnc, 1
outs aWavShpL*kEnv, aWavShpR*kEnv
endin
</CsInstruments>
<CsScore>
i 1 0 7 1 ;natural through waveshaping
i 1 + . 2 ;rather heavy distortion
i 1 + . 3 ;chebychev for 1st partial
i 1 + . 4 ;chebychev for 2nd partial
i 1 + . 5 ;chebychev for 3rd partial
i 1 + . 6 ;chebychev for 4th partial
i 1 + . 7 ;after dodge/jerse p.136
i 1 + . 8 ;fox
i 1 + . 9 ;guitar
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Instead of using the “self-built” method which has been described here, you can use the Csound
opcode distort. It performs the actual waveshaping process and gives a nice control about the
amount of distortion in the kdist parameter. Here is a simple example, using rather different tables:
EXAMPLE 05F04_distort.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
ifn = p4
ivol = p5
kdist line 0, p3, 1 ;increase the distortion over p3
aL, aR soundin "ClassGuit.wav"
aout1 distort aL, kdist, ifn
aout2 distort aR, kdist, ifn
outs aout1*ivol, aout2*ivol
endin
</CsInstruments>
<CsScore>
i 1 0 7 1 1
i . + . 2 .3
i . + . 3 1
i . + . 4 .5
i . + . 5 .15
i . + . 6 .04
i . + . 7 .02
i . + . 8 .02
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
05 G. GRANULAR SYNTHESIS
This chapter will focus upon granular synthesis used as a DSP technique upon recorded sound
files and will introduce techniques including time stretching, time compressing and pitch shifting.
The emphasis will be upon asynchronous granulation. For an introduction to synchronous granular
synthesis using simple waveforms please refer to chapter 04 F.
We will start with a self-made granulator which we build step by step. It may help to understand
the main parameters, and to see upon which decisions the different opcode designs are built. In
the second part of this chapter we will introduce some of the many Csound opcodes for granular
synthesis, in typical use cases.
A Self-Made Granulator
It is perfectly possible to build one’s own granular machine in Csound code, without using one
of the many opcodes for granular synthesis. This machine will certainly run slower than a native
opcode. But for understanding what is happening, and being able to implement own ideas, this is
a very instructive approach.
Granular synthesis can be described as a sequence of small sound snippets. So we can think of
two units: One unit is managing the sequence, the other unit is performing one grain. Let us call
the first unit Granulator, and the second unit Grain. The Granulator will manage the sequence of
grains in calling the Grain unit again and again, with different parameters:
In Csound, we implement this architecture as two instruments. We will start with the instrument
which performs one grain.
The Grain instrument needs the following information in order to play back a single grain:
1. Sound. In the most simple version this is a sound file on the hard disk. More flexible and
fast is a sample which has been stored in a buffer (function table). We can also record this
buffer in real time and through this perform live granular synthesis.
2. Point in Sound to start playback. In the most simple version, this is the same as the skiptime
for playing back sound from hard disk via diskin. Usually we will choose seconds as unit for
this parameter.
3. Duration. The duration for one grain is usually in the range 20-50 ms, but can be smaller or
bigger for special effects. In Csound this parameter is passed to the instrument as p3 in its
call, measured in seconds.
4. Speed of Playback. This parameter is used by diskin and similar opcodes: 1 means the
normal speed, 2 means double speed, 1/2 means half speed. This would result in no pitch
change (1), octave higher (2) and octave lower(1/2). Negative numbers mean reverse play-
back.
5. Volume. We will measure it in dB, where 0 dB means to play back the sound as it is recorded.
6. Envelope. Each grain needs an envelope which starts and ends at zero, to ensure that there
will be no clicks. These are some frequently used envelopes:1
7. Spatial Position. Each grain will be sent to a certain point in space. For stereo, it will be a
panning position between 0 (left) and 1 (right).
We start with the most simple implementation. We play back the sound with diskin and apply a
triangular envelope with the linen opcode. We pass the grain duration as p3, the playback start
as p4 and the playback speed as p5. We choose a constant grain duration of 50 ms, but in the
first five examples different starting points, then in the other five examples from one starting point
different playback speeds.
EXAMPLE 05G01_simple_grain.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Grain
//input parameters
Sound = "fox.wav"
iStart = p4 ;position in sec to read the sound
iSpeed = p5
iVolume = -3 ;dB
iPan = .5 ;0=left, 1=right
//perform
aSound = diskin:a(Sound,iSpeed,iStart,1)
aOut = linen:a(aSound,p3/2,p3,p3/2)
aL, aR pan2 aOut*ampdb(iVolume), iPan
out(aL,aR)
endin
</CsInstruments>
<CsScore>
; start speed
i "Grain" 0 .05 .05 1
i . 1 . .2 .
i . 2 . .42 .
i . 3 . .78 .
i . 4 . 1.2 .
i . 6 . .2 1
i . 7 . . 2
i . 8 . . 0.5
i . 9 . . 10
i . 10 . .25 -1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
It is a tiring job to write a score line for each grain — no one will do this. But with a small
change we can read through the whole sound file by calling our Grain instrument only once! The
technique we use in the next example is to start a new instance of the Grain instrument from the
running instance, as long as the end of the sound file has not yet been reached. (This technique
has been described in the paragraph Self-Triggering and Recursion of chapter 03 C.)
EXAMPLE 05G02_simple_grain_continuous.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Grain
//input parameters
Sound = "fox.wav"
iStart = p4 ;position in sec to read the sound
iSpeed = 1
iVolume = -3 ;dB
iPan = .5 ;0=left, 1=right
//perform
aSound = diskin:a(Sound,iSpeed,iStart,1)
aOut = linen:a(aSound,p3/2,p3,p3/2)
aL, aR pan2 aOut*ampdb(iVolume), iPan
out(aL,aR)
//call next grain until sound file has reached its end
if iStart < filelen(Sound) then
schedule("Grain",p3,p3,iStart+p3)
endif
endin
schedule("Grain",0,50/1000,0)
</CsInstruments>
<CsScore>
e 5 ;stops performance after 5 seconds
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Improvements
- Rather than being played back from disk, the sound should be put in a buffer (function table) and played back from there. This is faster and gives more flexibility, for instance in filling the buffer with real-time recording.
- The envelope should also be read from a function table. Again, this is faster and offers more flexibility. In case we want to change the envelope, we simply use another function table, without changing any code of the instrument.
Table reading can be done by different methods in Csound; have a look at chapter 03 D for details. We will read the tables with the poscil3 oscillator here, which should give a very good sound quality. In the next example we reproduce the first example above to check the new code of the Grain instrument.
EXAMPLE 05G03_simple_grain_optimized.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSample ftgen 0,0,0,-1,"fox.wav",0,0,0 ; sample, not normalized
giEnv ftgen 0,0,8192,20,3 ; triangular envelope via GEN20
giSampleLen = ftlen(giSample)/sr ; sample length in seconds
instr Grain
//input parameters
iStart = p4 ;position in sec to read the sound
iSpeed = p5
iVolume = -3 ;dB
iPan = .5 ;0=left, 1=right
//perform
aEnv = poscil3:a(ampdb(iVolume),1/p3,giEnv)
aSound = poscil3:a(aEnv,iSpeed/giSampleLen,giSample,iStart/giSampleLen)
aL, aR pan2 aSound, iPan
out(aL,aR)
endin
</CsInstruments>
<CsScore>
; start speed
i "Grain" 0 .05 .05 1
i . 1 . .2 .
i . 2 . .42 .
i . 3 . .78 .
i . 4 . 1.2 .
i . 6 . .2 1
i . 7 . . 2
i . 8 . . 0.5
i . 9 . . 10
i . 10 . .25 -1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
- Line 12-13: The sample fox.wav is loaded into function table giSample via GEN routine 01. Note that we are using -1 here instead of 1 because we don't want to normalize the sound.2 The triangular shape is loaded via GEN 20 which offers a good selection of different envelope shapes.
- Line 14: giSampleLen = ftlen(giSample)/sr. This calculates the length of the sample in seconds, as length of the function table divided by the sample rate. It makes sense to store this in a global variable because we use it in the Grain instrument again and again.
- Line 23: aEnv = poscil3:a(ampdb(iVolume),1/p3,giEnv). The envelope (as audio signal) reads the table giEnv in which a triangular shape is stored. We set the amplitude of the oscillator to ampdb(iVolume), the amplitude equivalent of the iVolume decibel value. The frequency of the oscillator is 1/p3 because we want to read the envelope exactly once during the performance time of this instrument instance.
- Line 24: aSound = poscil3:a(aEnv,iSpeed/giSampleLen,giSample,iStart/giSampleLen). Again a poscil3 oscillator reads a table, here giSample; the amplitude of the oscillator is the aEnv signal we produced. The frequency for reading the table at normal speed is 1/giSampleLen; including the speed change it is iSpeed/giSampleLen. The starting point for reading the table is given to the oscillator as a phase value (0=start to 1=end of the table), so we must divide iStart by giSampleLen to get this value.
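The frequency and phase arguments computed for the table-reading oscillator can be checked with a small Python sketch (hypothetical helper, for illustration only):

```python
def table_reader_params(start_sec, speed, sample_len_sec):
    # the oscillator traverses the whole table freq times per second;
    # the phase is the normalized start position (0 = start, 1 = end)
    freq = speed / sample_len_sec
    phase = start_sec / sample_len_sec
    return freq, phase

# reading a 2-second sample from second 1 at normal speed
print(table_reader_params(1, 1, 2))  # (0.5, 0.5)
```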
As granular synthesis is a mass structure, one of its main features is to deal with variations: each parameter usually deviates in a given range. We will first build the Granulator in the most simple way, without these deviations, and then proceed to more interesting, varied structures.
The first seven parameters are similar to the parameters for the Grain unit; grain density and grain distribution are added at the end of the list.
1. Sound. The sound must be loaded in a function table. We pass the variable name or number
of this table to the Grain instrument.
2. Pointer in Sound. Usually we will have a moving pointer position here. We will use a simple
line in the next example, moving from start to end of the sound in a certain duration. Later
we will implement a moving pointer driven by speed.
3. Duration. We will use milliseconds as unit here and then change it to seconds when we call
the Grain instrument.
4. Pitch Shift (Transposition). This is the speed of reading the sound in the Grain units, re-
sulting in a pitch shift or transposition. We will use Cent as unit here, and change the value
internally to the corresponding speed: cent=0 -> speed=1, cent=1200 -> speed=2, cent=-1200
-> speed=0.5.
5. Volume. We will measure it in dB as for the Grain unit. But the resulting volume will also
depend on the grain density, as overlapping grains will add their amplitudes.
6. Envelope. The grain envelope must be stored in a function table. We will pass the name or
number of the table to the Grain instrument.
7. Spatial Position. For now, we will use a fixed pan position between 0 (left) and 1 (right), as
we did for the Grain instrument.
8. Density. This is the number of grains per second, so the unit is Hz.
9. Distribution. This is a continuum between synchronous granular synthesis, in which all grains are equally distributed, and asynchronous, in which the distribution is irregular or scattered.3
3 As maximum irregularity we will consider a random position between the regular position of a grain and the regular position of the next neighbouring grain. (Half of this irregularity will be a random position between the own regular position and half of the distance to the neighbouring regular position.)
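The cent-to-speed conversion described in parameter 4 can be sketched as follows (illustrative Python, not part of the example code):

```python
def cent_to_speed(cent):
    # 1200 cent = one octave; each octave doubles the playback speed
    return 2 ** (cent / 1200)

print(cent_to_speed(0))      # 1.0
print(cent_to_speed(1200))   # 2.0
print(cent_to_speed(-1200))  # 0.5
```

This is exactly what Csound's cent opcode does internally.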
For triggering the single grains we use the metro opcode and call a grain on each of its trigger ticks. This is a basic example; the code will be condensed later, but is kept more explicit here to show the functionality.
EXAMPLE 05G04_simple_granulator.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSample ftgen 0,0,0,-1,"fox.wav",0,0,0 ; sample, not normalized
giHalfSine ftgen 0,0,1024,9,.5,1,0 ; half sine grain envelope via GEN09
instr Granulator
//input parameters as described in text
iSndTab = giSample ;table with sample
iSampleLen = ftlen(iSndTab)/sr
kPointer = linseg:k(0,iSampleLen,iSampleLen)
iGrainDur = 30 ;milliseconds
iTranspos = -100 ;cent
iVolume = -6 ;dB
iEnv = giHalfSine ;table with envelope
iPan = .5 ;panning 0-1
iDensity = 50 ;Hz (grains per second)
iDistribution = .5 ;0-1
//perform: call grains over time
kTrig = metro(iDensity)
if kTrig==1 then
kOffset = random:k(0,iDistribution/iDensity)
schedulek("Grain", kOffset, iGrainDur/1000, iSndTab, iSampleLen,
kPointer, cent(iTranspos), iVolume, iEnv, iPan)
endif
endin
instr Grain
//input parameters
iSndTab = p4
iSampleLen = p5
iStart = p6
iSpeed = p7
iVolume = p8 ;dB
iEnvTab = p9
iPan = p10
//perform
aEnv = poscil3:a(ampdb(iVolume),1/p3,iEnvTab)
aSound = poscil3:a(aEnv,iSpeed/iSampleLen,iSndTab,iStart/iSampleLen)
aL, aR pan2 aSound, iPan
out(aL,aR)
endin
</CsInstruments>
<CsScore>
i "Granulator" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Some comments:
- Line 13: We use a half sine here, generated by GEN09. Other envelopes are available via GEN20. As we used a high-quality interpolating oscillator in the Grain instrument for reading the envelope, the table size is kept rather small.
- Line 18: The length of the sample is calculated here not as a global variable, but once in this instrument. This would allow passing the iSndTab as a p-field without changing the code.
- Line 30: The irregularity is applied as a random offset to the regular position of the grain. The range of the offset is from zero to iDistribution/iDensity. For iDensity=50 Hz and iDistribution=1, for instance, the maximum offset would be 1/50 seconds.
- Line 31-32: The schedulek opcode is used here. It was introduced in Csound 6.14; for older versions of Csound, event can be used instead: event("i","Grain",kOffset, ...). Note that we divide iGrainDur by 1000 because it was given in milliseconds. For the transformation of the cent input into a multiplier, we simply use the cent opcode.
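The way the metro ticks and the random offset interact can be sketched in Python (an illustration of the scheduling logic, not Csound code; the helper name is made up):

```python
import random

def grain_onsets(density, distribution, total_time):
    # regular ticks every 1/density seconds; each grain is delayed by a
    # random offset between 0 and distribution/density
    n = int(total_time * density)
    return [i / density + random.uniform(0, distribution / density)
            for i in range(n)]

# distribution 0 gives a perfectly regular sequence of grains
regular = grain_onsets(50, 0, 1)
print(regular[:3])  # [0.0, 0.02, 0.04]
```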
It is suggested to change some values in this example and listen to the result; for instance:
1. Change iGrainDur (line 20) from 30 ms to a bigger or smaller value. For very small values (below 10 ms) artifacts arise.
2. Set iDensity (line 29) to 10 Hz or less and change the iDistribution (line 26). A distribution of 0 should give a perfectly regular sequence of grains, whereas 1 should result in irregularity.
The preferred method for the moving pointer in the Granulator instrument is a phasor. This is the best approach for real-time use: it can run for an unlimited time and can easily move backwards. As input for the phasor (technically its frequency) we put the speed in the usual way: 1 means normal speed, 0 is freeze, -1 is backwards reading at normal speed. As an optional parameter we can set a start position for the pointer.
All we have to do to implement this in Csound is to take the sound file length into account for both the pointer position and the start position:
iFileLen = 2 ;sec
iStart = 1 ;sec
kSpeed = 1
kPhasor = phasor:k(kSpeed/iFileLen,iStart/iFileLen)
kPointer = kPhasor*iFileLen
In this example, the phasor will start with an initial phase of iStart/iFileLen = 0.5. The kPhasor signal, which always moves between 0 and 1, runs at the frequency kSpeed/iFileLen, here 1/2. The kPhasor is then multiplied by iFileLen (here 2), so kPointer moves between 0 and 2.
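The phasor-to-pointer arithmetic can be sketched in Python (illustrative helper, not part of the Csound code):

```python
def pointer(time, speed, file_len, start):
    # mirrors phasor: the phase cycles 0..1 at frequency speed/file_len,
    # starting from the initial phase start/file_len
    phase = (start / file_len + time * speed / file_len) % 1.0
    return phase * file_len

print(pointer(0, 1, 2, 1))    # 1.0  (starts at the start position)
print(pointer(0.5, 1, 2, 1))  # 1.5
print(pointer(1, 1, 2, 1))    # 0.0  (wrapped around to the file start)
```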
It is very useful to add random deviations to some of the parameters of granular synthesis. This opens the space for many different structures and possibilities. We will apply random deviations to these parameters of the Granulator:
- Pointer. The pointer will "tremble" or "jump" depending on the range of the random deviation. The range is given in seconds. It is implemented in line 36 of the next example as
kPointer = kPhasor*iSampleLen + rnd31:k(iPointerRndDev,0)
The opcode rnd31 is a bipolar random generator which outputs values between -iPointerRndDev and +iPointerRndDev. This is then added to the normal pointer position.
- Duration. We define a maximum deviation in percent, related to the medium grain duration. 100% means that a grain duration can deviate between half and twice the medium duration. A medium duration of 20 ms would yield a random range of 10-40 ms in this case.
- Transposition. We can add a bipolar random range to the main transposition. If, for example, the main transposition is 500 cent and the maximum random transposition is 300 cent, each grain will choose a value between 200 and 800 cent.
- Volume. A maximum decibel deviation (also bipolar) can be added to the main volume.
- Spatial Position. In addition to the main spatial position (in the stereo field 0-1), we can add a bipolar maximum deviation. If the main position is 0.5 and the maximum deviation is 0.2, each grain will have a panning position between 0.3 and 0.7.
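The deviation arithmetic above can be sketched in Python (illustrative helpers, assuming uniform random distribution as with rnd31):

```python
import random

def deviated(value, max_dev):
    # bipolar deviation as produced by rnd31:
    # a uniform value between -max_dev and +max_dev is added
    return value + random.uniform(-max_dev, max_dev)

def deviated_duration(dur_ms, max_percent):
    # 100% lets the duration vary between half and twice the medium value
    return dur_ms * 2 ** (random.uniform(-max_percent, max_percent) / 100)

# a medium duration of 20 ms with 100% deviation stays within 10-40 ms
print(10 <= deviated_duration(20, 100) <= 40)  # True
```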
The next example demonstrates the five possibilities one by one, each parameter in three steps:
at first with no random deviations, then with slight deviations, then with big ones.
EXAMPLE 05G05_random_deviations.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSample ftgen 0,0,0,-1,"fox.wav",0,0,0 ; sample, not normalized
giHalfSine ftgen 0,0,1024,9,.5,1,0 ; half sine grain envelope via GEN09
instr Granulator
//standard input parameter
iSndTab = giSample
iSampleLen = ftlen(iSndTab)/sr
iStart = 0 ;sec
kPointerSpeed = 2/3
iGrainDur = 30 ;milliseconds
iTranspos = -100 ;cent
iVolume = -6 ;dB
iEnv = giHalfSine ;table with envelope
iPan = .5 ;panning 0-1
iDensity = 50 ;Hz (grains per second)
iDistribution = .5 ;0-1
//random deviations (for demonstration set to p-fields)
iPointerRndDev = p4 ;sec
iGrainDurRndDev = p5 ;percent
iTransposRndDev = p6 ;cent
iVolumeRndDev = p7 ;dB
iPanRndDev = p8 ;as in iPan
//perform
kPhasor = phasor:k(kPointerSpeed/iSampleLen,iStart/iSampleLen)
kTrig = metro(iDensity)
if kTrig==1 then
kPointer = kPhasor*iSampleLen + rnd31:k(iPointerRndDev,0)
kOffset = random:k(0,iDistribution/iDensity)
kGrainDurDiff = rnd31:k(iGrainDurRndDev,0) ;percent
kGrainDur = iGrainDur*2^(kGrainDurDiff/100) ;ms
kTranspos = cent(iTranspos+rnd31:k(iTransposRndDev,0))
kVol = iVolume+rnd31:k(iVolumeRndDev,0)
kPan = iPan+rnd31:k(iPanRndDev,0)
schedulek("Grain",kOffset,kGrainDur/1000,iSndTab,
iSampleLen,kPointer,kTranspos,kVol,iEnv,kPan)
endif
endin
instr Grain
//input parameters
iSndTab = p4
iSampleLen = p5
iStart = p6
iSpeed = p7
iVolume = p8 ;dB
iEnvTab = p9
iPan = p10
//perform
aEnv = poscil3:a(ampdb(iVolume),1/p3,iEnvTab)
aSound = poscil3:a(aEnv,iSpeed/iSampleLen,iSndTab,iStart/iSampleLen)
aL, aR pan2 aSound, iPan
out(aL,aR)
endin
</CsInstruments>
<CsScore>
t 0 40
; Random Deviations: Pointer GrainDur Transp Vol Pan
;RANDOM POINTER DEVIATIONS
i "Granulator" 0 2.7 0 0 0 0 0 ;normal pointer
i . 3 . 0.1 0 0 0 0 ;slight trembling
i . 6 . 1 0 0 0 0 ;chaotic jumps
;RANDOM GRAIN DURATION DEVIATIONS
i . 10 . 0 0 0 0 0 ;no deviation
i . 13 . 0 100 0 0 0 ;100%
i . 16 . 0 200 0 0 0 ;200%
;RANDOM TRANSPOSITION DEVIATIONS
i . 20 . 0 0 0 0 0 ;no deviation
i . 23 . 0 0 300 0 0 ;±300 cent maximum
i . 26 . 0 0 1200 0 0 ;±1200 cent maximum
</CsScore>
</CsoundSynthesizer>
For normal use, the pointer, transposition and pan deviations seem to be the most interesting to apply.
Final Example
After the more instructional examples presented so far, this final one shows some of the potential applications of granular sounds. It uses the same parts of The quick brown fox as the first example of this chapter, each with different sounds and combinations of the parameters.
EXAMPLE 05G06_the_fox_universe.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
gi_ID init 1 ; counter providing a unique ID to each structure instrument
giSample ftgen 0,0,0,-1,"fox.wav",0,0,0 ; sample, not normalized
giSinc ftgen 0,0,1024,20,9,1 ; sinc envelope for the grains via GEN20
opcode Chan,S,Si
Sname, id xin
Sout sprintf "%s_%d", Sname, id
xout Sout
endop
instr Quick
id = gi_ID
gi_ID += 1
iStart = .2
chnset(1/100, Chan("PointerSpeed",id))
chnset(linseg:k(10,p3,1), Chan("GrainDur",id))
chnset(randomi:k(15,20,1/3,3), Chan("Density",id))
chnset(linseg:k(7000,p3/2,6000), Chan("Transpos",id))
chnset(600,Chan("TransposRndDev",id))
chnset(linseg:k(-10,p3-3,-10,3,-30), Chan("Volume",id))
chnset(randomi:k(.2,.8,1,3), Chan("Pan",id))
chnset(.2,Chan("PanRndDev",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3,id,0)
endin
instr Brown
id = gi_ID
gi_ID += 1
iStart = .42
chnset(1/100, Chan("PointerSpeed",id))
chnset(50, Chan("GrainDur",id))
chnset(50, Chan("Density",id))
chnset(100,Chan("TransposRndDev",id))
chnset(linseg:k(-50,3,-10,12,-10,3,-50), Chan("Volume",id))
chnset(.5, Chan("Pan",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3+3,id,.3)
endin
instr F
id = gi_ID
gi_ID += 1
iStart = .68
chnset(50, Chan("GrainDur",id))
chnset(40, Chan("Density",id))
chnset(100,Chan("TransposRndDev",id))
chnset(linseg:k(-30,3,-10,p3-6,-10,3,-30)+randomi:k(-10,10,1/3),
Chan("Volume",id))
chnset(.5, Chan("Pan",id))
chnset(.5, Chan("PanRndDev",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3+3,id,.9)
endin
instr Ox
id = gi_ID
gi_ID += 1
iStart = .72
chnset(1/100,Chan("PointerSpeed",id))
chnset(50, Chan("GrainDur",id))
chnset(40, Chan("Density",id))
chnset(-2000,Chan("Transpos",id))
chnset(linseg:k(-20,3,-10,p3-6,-10,3,-30)+randomi:k(-10,0,1/3),
Chan("Volume",id))
chnset(randomi:k(.2,.8,1/5,2,.8), Chan("Pan",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3+3,id,.9)
endin
instr Jum
id = gi_ID
gi_ID += 1
iStart = 1.3
chnset(0.01,Chan("PointerRndDev",id))
chnset(50, Chan("GrainDur",id))
chnset(40, Chan("Density",id))
chnset(transeg:k(p4,p3/3,0,p4,p3/2,5,3*p4),Chan("Transpos",id))
chnset(linseg:k(0,1,-10,p3-7,-10,6,-50)+randomi:k(-10,0,1,3),
Chan("Volume",id))
chnset(p5, Chan("Pan",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3+3,id,.7)
if p4 < 300 then
schedule("Jum",0,p3,p4+500,p5+.3)
endif
endin
instr Whole
id = gi_ID
gi_ID += 1
iStart = 0
chnset(1/2,Chan("PointerSpeed",id))
chnset(5, Chan("GrainDur",id))
chnset(20, Chan("Density",id))
chnset(.5, Chan("Pan",id))
chnset(.3, Chan("PanRndDev",id))
schedule("Granulator",0,p3,id,iStart)
schedule("Output",0,p3+1,id,0)
endin
instr Granulator
//get ID for resolving string channels
id = p4
//standard input parameter
iSndTab = giSample
iSampleLen = ftlen(iSndTab)/sr
iStart = p5
kPointerSpeed = chnget:k(Chan("PointerSpeed",id))
kGrainDur = chnget:k(Chan("GrainDur",id))
kTranspos = chnget:k(Chan("Transpos",id))
kVolume = chnget:k(Chan("Volume",id))
iEnv = giSinc
kPan = chnget:k(Chan("Pan",id))
kDensity = chnget:k(Chan("Density",id))
iDistribution = 1
//random deviations
kPointerRndDev = chnget:k(Chan("PointerRndDev",id))
kTransposRndDev = chnget:k(Chan("TransposRndDev",id))
kPanRndDev = chnget:k(Chan("PanRndDev",id))
//perform
kPhasor = phasor:k(kPointerSpeed/iSampleLen,iStart/iSampleLen)
kTrig = metro(kDensity)
if kTrig==1 then
kPointer = kPhasor*iSampleLen + rnd31:k(kPointerRndDev,0)
kOffset = random:k(0,iDistribution/kDensity)
kTranspos = cent(kTranspos+rnd31:k(kTransposRndDev,0))
kPan = kPan+rnd31:k(kPanRndDev,0)
schedulek("Grain",kOffset,kGrainDur/1000,iSndTab,iSampleLen,
kPointer,kTranspos,kVolume,iEnv,kPan,id)
endif
endin
instr Grain
//input parameters
iSndTab = p4
iSampleLen = p5
iStart = p6
iSpeed = p7
iVolume = p8
iEnvTab = p9
iPan = p10
id = p11
//perform
aEnv = poscil3:a(ampdb(iVolume),1/p3,iEnvTab)
aSound = poscil3:a(aEnv,iSpeed/iSampleLen,iSndTab,iStart/iSampleLen)
aL, aR pan2 aSound, iPan
//write audio to channels for id
chnmix(aL,Chan("L",id))
chnmix(aR,Chan("R",id))
endin
instr Output
id = p4
iRvrbTim = p5
aL_dry = chnget:a(Chan("L",id))
aR_dry = chnget:a(Chan("R",id))
aL_wet, aR_wet reverbsc aL_dry, aR_dry, iRvrbTim,sr/2
out(aL_dry+aL_wet,aR_dry+aR_wet)
chnclear(Chan("L",id),Chan("R",id))
endin
</CsInstruments>
<CsScore>
i "Quick" 0 20
i "Brown" 10 20
i "F" 20 50
i "Ox" 30 40
i "Jum" 72 30 -800 .2
i "Quick" 105 10
i "Whole" 118 5.4
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Some comments:
- Line 12-16: The User-Defined Opcode (UDO) Chan puts a string and an ID together into a combined string: Chan("PointerSpeed",1) returns "PointerSpeed_1". This is nothing but a more readable version of sprintf("%s_%d", "PointerSpeed", 1).
- Line 20-24: The whole architecture of this example is based on software channels. The instr Quick schedules one instance of instr Granulator. While this instance is still running, the instr Brown schedules another instance of instr Granulator. Both Quick and Brown want to send their specific values to their own instance of instr Granulator. This is done by an ID which is added to the channel name. For the pointer speed, instr Quick uses the channel "PointerSpeed_1" whereas instr Brown uses the channel "PointerSpeed_2". So each of the instruments Quick, Brown etc. has to get a unique ID. This is done with the global variable gi_ID. When instr Quick starts, it sets its own variable id to the value of gi_ID (which is 1 at this moment), and then sets gi_ID to 2. So when instr Brown starts, it sets its own id to 2 and sets gi_ID to 3 for future use by instrument F.
- Line 34: Each of the instruments which provide the different parameters, like instr Quick here, calls an instance of instr Granulator and passes the ID to it, as well as the pointer start in the sample: schedule("Granulator",0,p3,id,iStart). The id is passed here as fourth parameter, so instr Granulator will read id = p4 in line 112 to receive the ID, and iStart = p5 in line 116 to receive the pointer start.
- Line 35: As we want to add some reverb, but with a different reverb time for each structure, we start one instance of instr Output here. Again the instrument passes its own ID to the instance of instr Output, and also the reverb time. In line 162-163 we see how these values are received: id = p4 and iRvrbTim = p5.
- Line 157-158: Instr Grain does not output the audio signal directly, but sends it via chnmix to the instance of instr Output with the same ID. See line 164-165 for the complementary code in instr Output. Note that we must use chnmix, not chnset, here because we must add the audio of all overlapping grains (try substituting chnset for chnmix to hear the difference). The zeroing of each audio channel at the end of the chain by chnclear is also important (comment out line 168 to hear the difference).
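The ID and channel-naming scheme can be sketched in Python (illustrative mirror of the Csound mechanism; names are made up):

```python
class IDCounter:
    # mirrors the gi_ID mechanism: every caller takes a unique id
    # and increments the counter for the next caller
    def __init__(self):
        self.next_id = 1

    def take(self):
        n = self.next_id
        self.next_id += 1
        return n

def chan(name, n):
    # mirrors the Chan UDO: chan("PointerSpeed", 1) -> "PointerSpeed_1"
    return f"{name}_{n}"

ids = IDCounter()
quick, brown = ids.take(), ids.take()
print(chan("PointerSpeed", quick))  # PointerSpeed_1
print(chan("PointerSpeed", brown))  # PointerSpeed_2
```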
Live Input
Instead of using prerecorded samples, granular synthesis can also be applied to live input. Basically we have to add an instrument which writes the live input continuously to a table. When we ensure that writing and reading the table is done in a circular way, the table can be very short.
The time interval between writing and reading can also be very short. If we do not transpose, or transpose only downwards, we can read immediately; only if we transpose upwards must we wait. Imagine a grain duration of 50 ms, a delay between writing and reading of 20 ms, and a pitch shift of one octave upwards. The reading pointer moves twice as fast as the writing pointer, so after 20 ms of the grain it will get ahead of the writing pointer.
So, in the following example, we set the desired delay time to a small value. It has to be adjusted by the user depending on the maximum transposition and the grain size.
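The overtaking calculation can be sketched in Python (an illustration of the reasoning above; the helper name is made up):

```python
def overtake_ms(delay_ms, speed):
    # the read pointer gains (speed - 1) ms of buffer per ms of real time;
    # it overtakes the write pointer after delay/(speed - 1) ms
    if speed <= 1:
        return None  # reading at or below normal speed never overtakes
    return delay_ms / (speed - 1)

print(overtake_ms(20, 2))  # 20.0: safe only for grains shorter than 20 ms
```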
EXAMPLE 05G07_live_granular.csd
<CsoundSynthesizer>
<CsOptions>
-odac -iadc -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giTable ftgen 0,0,sr,2,0 ; buffer for one second of live audio
giHalfSine ftgen 0,0,1024,9,.5,1,0 ; half sine grain envelope
giDelay = 1 ; delay between writing and reading in milliseconds
instr Record
aIn = inch(1)
gaWritePointer = phasor(1)
tablew(aIn,gaWritePointer,giTable,1)
endin
schedule("Record",0,-1)
instr Granulator
kGrainDur = 30 ;milliseconds
kTranspos = -300 ;cent
kDensity = 50 ;Hz (grains per second)
kDistribution = .5 ;0-1
kTrig = metro(kDensity)
if kTrig==1 then
kPointer = k(gaWritePointer)-giDelay/1000
kOffset = random:k(0,kDistribution/kDensity)
schedulek("Grain",kOffset,kGrainDur/1000,kPointer,cent(kTranspos))
endif
endin
schedule("Granulator",giDelay/1000,-1)
instr Grain
iStart = p4
iSpeed = p5
aOut = poscil3:a(poscil3:a(.3,1/p3,giHalfSine),iSpeed,giTable,iStart)
out(aOut,aOut)
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
We only use some of the many parameters here; others can be added easily. As we chose one second for the table, we can simplify some calculations. Most important is that instr Granulator knows the current position of the write pointer, so that it can start playback giDelay milliseconds (here 1 ms) after it. For this, we write the current write pointer position to the global variable gaWritePointer in instr Record and get the start for one grain by
kPointer = k(gaWritePointer)-giDelay/1000
After having built this self-made granulator step by step, we will now look at some Csound opcodes for sample-based granular synthesis.
Csound Opcodes for Granular Synthesis
You will need to make sure that a sound file is available to sndwarp via a GEN01 function table. You can replace the one used in this example with one of your own by replacing the reference to ClassicalGuitar.wav. This sound file is stereo, therefore instrument 1 uses the stereo version sndwarpst. A mismatch between the number of channels in the sound file and the version of sndwarp used will result in playback at an unexpected pitch.
sndwarp describes grain size as window size, and it is defined in samples; therefore a window size of 44100 means that grains will last for 1 second each (when the sample rate is set to 44100). Window size randomization (irandw) adds a random number within that range to the duration of each grain. As these two parameters are closely related, it is sometimes useful to set irandw to be a fraction of the window size. If irandw is set to zero we will get artefacts associated with synchronous granular synthesis.
sndwarp (along with many of Csound's other granular synthesis opcodes) requires us to supply a window function in the form of a function table, according to which it applies an amplitude envelope to each grain. By using different function tables we can create softer grains with gradual attacks and decays (as in this example), grains with more of a percussive character (short attack, long decay), or gate-like grains (short attack, long sustain, short decay).
EXAMPLE 05G08_sndwarp.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1
giSound ftgen 0,0,0,1,"ClassicalGuitar.wav",0,0,0 ; stereo sample for granulation
giWFn ftgen 0,0,16384,9,0.5,1,0 ; half sine window for enveloping each grain
instr 1
kamp = 0.1
ktimewarp expon p4,p3,p5 ; amount of time stretch, 1=none 2=double
kresample line p6,p3,p7 ; pitch change 1=none 2=+1oct
ifn1 = giSound ; sound file to be granulated
ifn2 = giWFn ; window shaped used to envelope every grain
ibeg = 0
iwsize = 3000 ; grain size (in samples)
irandw = 3000 ; randomization of grain size range
ioverlap = 50 ; density
itimemode = 0 ; 0=stretch factor 1=pointer
prints p8 ; print a description
aSigL,aSigR sndwarpst kamp,ktimewarp,kresample,ifn1,ibeg, \
iwsize,irandw,ioverlap,ifn2,itimemode
outs aSigL,aSigR
endin
</CsInstruments>
<CsScore>
;p4 = stretch factor begin / pointer location begin
;p5 = stretch factor end / pointer location end
;p6 = resample begin (transposition)
;p7 = resample end (transposition)
;p8 = description string
; p1 p2 p3 p4 p5 p6 p7 p8
i 1 0 10 1 1 1 1 "No time stretch. No pitch shift."
i 1 10.5 10 2 2 1 1 "%nTime stretch x 2."
i 1 21 20 1 20 1 1 \
"%nGradually increasing time stretch factor from x 1 to x 20."
i 1 41.5 10 1 1 2 2 "%nPitch shift x 2 (up 1 octave)."
</CsScore>
</CsoundSynthesizer>
The next example uses sndwarp's other timestretch mode, with which we explicitly define a pointer position from where in the source file grains shall begin. This method allows us much greater freedom with how a sound will be time warped; we can even freeze movement and go backwards in time, something that is not possible with the timestretching mode.
This example is self generative in that instrument 2, the instrument that actually creates the granu-
lar synthesis textures, is repeatedly triggered by instrument 1. Instrument 2 is triggered once every
12.5s and these notes then last for 40s each so will overlap. Instrument 1 is played from the score
for 1 hour so this entire process will last that length of time. Many of the parameters of granulation
are chosen randomly when a note begins so that each note will have unique characteristics. The
timestretch is created by a line function: the start and end points of which are defined randomly
when the note begins. Grain/window size and window size randomization are defined randomly
when a note begins - notes with smaller window sizes will have a fuzzy airy quality whereas notes
with a larger window size will produce a clearer tone. Each note will be randomly transposed
(within a range of +/- 2 octaves) but that transposition will be quantized to a rounded number of
semitones - this is done as a response to the equally tempered nature of source sound material
used.
Each entire note is enveloped by an amplitude envelope and a resonant lowpass filter, in each case encasing each note under a smooth arc. Finally a small amount of reverb is added to smooth the overall texture slightly.
EXAMPLE 05G09_selfmade_grain.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
; p1 p2 p3
i 1 0 3600 ; triggers instr 2
i 3 0 3600 ; reverb instrument
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy
With granule we define a number of grain streams for the opcode using its ivoice input argument. This will also have an effect on the density of the texture produced. Like sndwarp's first timestretching mode, granule also has a stretch ratio parameter. Confusingly it works the other way around though: a value of 0.5 will slow movement through the file by 1/2, 2 will double it and so on. Increasing grain gap will also slow progress through the sound file. granule also provides up to four pitch shift voices so that we can create chord-like structures without having to use more than one iteration of the opcode. We define the number of pitch shifting voices we would like to use with the ipshift parameter. If this is given a value of zero, all pitch shifting intervals will be ignored and grain-by-grain transpositions will be chosen randomly within the range +/- 1 octave. granule contains built-in randomization for several of its parameters in order to facilitate asynchronous granular synthesis. In the case of grain gap and grain size randomization, these are defined as percentages by which to randomize the fixed values.
Unlike Csound's other granular synthesis opcodes, granule does not use a function table to define the amplitude envelope for each grain; instead, attack and decay times are defined as percentages of the total grain duration using input arguments. The sum of these two values should total less than 100.
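The attack/decay percentage scheme can be sketched as follows (hypothetical helper, for illustration only):

```python
def grain_envelope_times(grain_dur_ms, attack_pct, decay_pct):
    # attack and decay as percentages of the grain duration;
    # their sum must stay below 100 to leave room for the sustain
    if attack_pct + decay_pct >= 100:
        raise ValueError("attack + decay must total less than 100")
    attack = grain_dur_ms * attack_pct / 100
    decay = grain_dur_ms * decay_pct / 100
    sustain = grain_dur_ms - attack - decay
    return attack, sustain, decay

print(grain_envelope_times(50, 30, 30))  # (15.0, 20.0, 15.0)
```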
Five notes are played by this example. While each note explores grain gap and grain size in the
same way each time, different permutations for the four pitch transpositions are explored in each
note. Information about what these transpositions are is printed to the terminal as each note
begins.
EXAMPLE 05G10_granule.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
; p4 = pitch 1
; p5 = pitch 2
; p6 = pitch 3
; p7 = pitch 4
; p8 = number of pitch shift voices (0=random pitch)
; p1 p2 p3 p4 p5 p6 p7 p8 p9
i 1 0 48 1 1 1 1 4 "pitches: all unison"
i 1 + . 1 0.5 0.25 2 4 \
</CsScore>
</CsoundSynthesizer>
EXAMPLE 05G11_grain_delay.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
; sigmoid rise/decay shape for fof2, half cycle from bottom to top
giSigRise ftgen 0,0,8192,19,0.5,1,270,1
; empty table as live audio buffer (about 3 seconds at sr=44100)
giTablen = 131072
giLive ftgen 0,0,giTablen,2,0
; test sound
giSample ftgen 0,0,0,1,"fox.wav", 0,0,0
instr 1
; test sound, replace with live input
a1 loscil 1, 1, giSample, 1
outch 1, a1
chnmix a1, "liveAudio"
endin
instr 2
; write live input to buffer (table)
a1 chnget "liveAudio"
gkstart tablewa giLive, a1, 0
if gkstart < giTablen goto end
gkstart = 0
end:
a0 = 0
chnset a0, "liveAudio"
endin
instr 3
; delay parameters
kDelTim = 0.5 ; delay time in seconds (max 2.8 seconds)
kFeed = 0.8
; delay time random dev
kTmod = 0.2
kTmod rnd31 kTmod, 1
kDelTim = kDelTim+kTmod
; delay pitch random dev
kFmod linseg 0, 1, 0, 1, 0.1, 2, 0, 1, 0
kFmod rnd31 kFmod, 1
; grain delay processing
kamp = ampdbfs(-8)
kfund = 25 ; grain rate
kform = (1+kFmod)*(sr/giTablen) ; grain pitch transposition
koct = 0
kband = 0
kdur = 2.5 / kfund ; duration relative to grain rate
kris = 0.5*kdur
kdec = 0.5*kdur
kphs = (gkstart/giTablen)-(kDelTim/(giTablen/sr)) ;grain phase
kgliss = 0
a1 fof2 1, kfund, kform, koct, kband, kris, kdur, kdec, 100, \
giLive, giSigRise, 86400, kphs, kgliss
outch 2, a1*kamp
chnset a1*kFeed, "liveAudio"
endin
</CsInstruments>
<CsScore>
i 1 0 20
i 2 0 20
i 3 0 20
</CsScore>
</CsoundSynthesizer>
;example by Oeyvind Brandtsegg
In the last example we will use the grain opcode. This opcode is part of a small group of opcodes
which also includes grain2 and grain3. grain is the oldest of the three, grain2 is easier to use,
while grain3 offers more control.
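A minimal sketch of grain could look as follows. The parameter values here are illustrative, not taken from the example below: grain expects amplitude, pitch and grain density, maximum random offsets for amplitude and pitch, the grain duration, a source table, an envelope table, and a maximum grain duration.

```csound
giSrc ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 0 ;source sound
giWin ftgen 0, 0, 1025, 20, 2, 1 ;Hanning window as grain envelope

instr GrainSketch
ipitch = sr/ftlen(giSrc) ;original pitch of the source table
kdens line 20, p3, 200 ;density rises from 20 to 200 grains per second
;amp, pitch, density, amp offset, pitch offset, grain dur, source, window, max grain dur
a1 grain 0.2, ipitch, kdens, 0.05, ipitch*0.1, 0.1, giSrc, giWin, 0.2
outs a1, a1
endin
```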
EXAMPLE 05G12_grain.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
; First we hear each grain, but later on it sounds more like a drum roll.
gareverbL init 0
gareverbR init 0
giFt1 ftgen 0, 0, 1025, 20, 2, 1 ; GEN20, Hanning window for grain envelope
giFt2 ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 0
instr 2 ; Reverb
kkamp line 0, p3, 0.08
aL reverb gareverbL, 10*kkamp ; reverberate what is in gareverbL
aR reverb gareverbR, 10*kkamp ; and gareverbR
outs kkamp*aL, kkamp*aR ; and output the result
gareverbL = 0 ; empty the receivers for the next loop
gareverbR = 0
endin
</CsInstruments>
<CsScore>
i1 0 20 ; Granulation
i2 0 21 ; Reverb
</CsScore>
</CsoundSynthesizer>
;example by Bjørn Houdorf
Several opcodes for granular synthesis have been considered in this chapter, but this is in no
way meant to suggest that these are the best; in fact, it is strongly recommended to explore all
of Csound’s other granular opcodes, as each has its own unique character. The syncgrain family
of opcodes (which also includes syncloop and diskgrain) is deceptively simple, as its k-rate
controls encourage further abstractions of grain manipulation. fog is designed for FOF-type
synchronous granulation, but with sound files, and partikkel offers comprehensive control of grain
characteristics on a grain-by-grain basis, inspired by Curtis Roads’ encyclopedic book on granular
synthesis, Microsound.
05 H. CONVOLUTION
The most common application of convolution in audio processing is reverberation but convolution
is equally adept at, for example, imitating the filtering and time smearing characteristics of vintage
microphones, valve amplifiers and speakers. It is also used sometimes to create more unusual
special effects. The strength of convolution based reverbs is that they implement acoustic imita-
tions of actual spaces based upon recordings of those spaces. All the quirks and nuances of the
original space will be retained. Reverberation algorithms based upon networks of comb and all-
pass filters create only idealised reverb responses imitating spaces that don’t actually exist. The
impulse response is a little like a fingerprint of the space. It is perhaps easier to manipulate char-
acteristics such as reverb time and high frequency diffusion (i.e. lowpass filtering) of the reverb
effect when using a Schroeder-derived algorithm of comb and allpass filters, but most of these
modifications are still possible, if not immediately apparent, when implementing reverb using
convolution. The quality of a convolution reverb is largely dependent upon the quality of the impulse
response used. An impulse response recording is typically achieved by recording the reverberant
tail that follows a burst of white noise. People often employ techniques such as bursting balloons
to achieve something approaching a short burst of noise. Crucially the impulse sound should not
excessively favour any particular frequency or exhibit any sort of resonance. More modern tech-
niques employ a sine wave sweep through all the audible frequencies when recording an impulse
response. Recorded results using this technique will normally require further processing in order
to provide a usable impulse response file and this approach will normally be beyond the means of
a beginner.
Many commercial, often expensive, implementations of convolution exist both in the form of soft-
ware and hardware, but fortunately Csound provides easy access to convolution for free. Csound
currently lists six different opcodes for convolution: convolve (also known as convle), cross2,
dconv, ftconv, ftmorf and pconvolve. convolve and dconv are earlier implementations and are less
suited to realtime operation, cross2 relates to FFT-based cross synthesis, and ftmorf is used to
morph between similarly sized function tables and is less related to what has been discussed so
far; therefore in this chapter we shall focus upon just two opcodes, pconvolve and ftconv.
pconvolve
pconvolve is perhaps the easiest of Csound’s convolution opcodes to use and the most useful in a
realtime application. It uses the uniformly partitioned (hence the p) overlap-save algorithm which
permits convolution with very little delay (latency) in the output signal. The impulse response file
that it uses is referenced directly, i.e. it does not have to be previously loaded into a function table,
and multichannel files are permitted. The impulse response file can be any standard sound file
acceptable to Csound and does not need to be pre-analysed as is required by convolve.
Convolution procedures through their very nature introduce a delay in the output signal but pcon-
volve minimises this using the algorithm mentioned above. It will still introduce some delay but
we can control this using the opcode’s ipartitionsize input argument. What value we give this will
require some consideration and perhaps some experimentation as choosing a high partition size
will result in excessively long delays (only an issue in realtime work) whereas very low partition
sizes demand more from the CPU and too low a size may result in buffer under-runs and inter-
rupted realtime audio. Bear in mind still that realtime CPU performance will depend heavily on the
length of the impulse response file. The partition size argument is actually an optional argument
and if omitted it will default to whatever the software buffer size is as defined by the -b command
line flag. If we specify the partition size explicitly however, we can use this information to delay
the input audio (after it has been used by pconvolve) so that it can be realigned in time with the
latency affected audio output from pconvolve - this will be essential in creating a wet/dry mix in
a reverb unit. Partition size is defined in sample frames therefore if we specify a partition size of
512, the delay resulting from the convolution procedure will be 512/sr, so about 12ms at a sample
rate of 44100 Hz.
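This wet/dry alignment can be sketched as follows. The file names are taken from the example below, and the assumption that the impulse response file is stereo (giving two outputs) is for illustration:

```csound
instr PconvolveSketch
ipart = 512 ;partition size in sample frames
idel = ipart/sr ;resulting latency, here about 11.6 ms
adry diskin "loop.wav" ;mono input signal
awetL, awetR pconvolve adry, "Stairwell.wav", ipart ;stereo impulse response assumed
adry delay adry, idel ;realign the dry signal with the wet output
imix = 0.3 ;wet/dry mix
outs adry*(1-imix) + awetL*imix, adry*(1-imix) + awetR*imix
endin
```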
In the following example a monophonic drum loop sample undergoes processing through a convo-
lution reverb implemented using pconvolve which in turn uses two different impulse files. The first
file is a more conventional reverb impulse file taken in a stairwell, whereas the second is a recording
of the resonance created by striking a terracotta bowl sharply. You can, of course, replace them
with ones of your own but remain mindful of mono/stereo/multichannel integrity.
EXAMPLE 05H01_pconvolve.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 512
nchnls = 2
0dbfs = 1
gasig init 0
</CsInstruments>
<CsScore>
; instr 1. sound file player
; p4=input soundfile
; instr 2. convolution reverb
; p4=impulse response file
; p5=dry/wet mix (0 - 1)
i 1 0 8.6 "loop.wav"
i 2 0 10 "Stairwell.wav" 0.3
i 1 10 8.6 "loop.wav"
i 2 10 10 "dish.wav" 0.8
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
ftconv
ftconv (abbreviated from function table convolution) is perhaps slightly more complicated to use
than pconvolve but offers additional options. The fact that ftconv utilises an impulse response
that we must first store in a function table rather than directly referencing a sound file stored on
disk means that we have the option of performing transformations upon the audio stored in the
function table before it is employed by ftconv for convolution. This example begins just as the
previous example: a mono drum loop sample is convolved first with a typical reverb impulse re-
sponse and then with an impulse response derived from a terracotta bowl. After twenty seconds
the contents of the function tables containing the two impulse responses are reversed by calling
a UDO (instrument 3) and the convolution procedure is repeated, this time with a backwards re-
verb effect. When the reversed version is performed the dry signal is delayed further before being
sent to the speakers so that it appears that the reverb impulse sound occurs at the culmination
of the reverb build-up. This additional delay is switched on or off via p6 from the score. As with
pconvolve, ftconv performs the convolution process in overlapping partitions to minimise latency.
Again we can minimise the size of these partitions and therefore the latency but at the cost of CPU
efficiency. ftconv’s documentation refers to this partition size as iplen (partition length). ftconv
offers further facilities to work with multichannel files beyond stereo. When doing this it is sug-
gested that you use GEN52 which is designed for this purpose. GEN01 seems to work fine, at least
up to stereo, provided that you do not defer the table size definition (size=0). With ftconv we can
specify the actual length of the impulse response - it will probably be shorter than the power-of-2
sized function table used to store it - and this action will improve realtime efficiency. This optional
argument is defined in sample frames and defaults to the size of the impulse response function
table.
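A minimal sketch of this workflow (the table size and partition length are illustrative assumptions): the impulse response is first loaded into a power-of-two sized GEN01 table, so that it could be transformed before ftconv reads it.

```csound
;load the impulse response into a power-of-two sized GEN01 table (channel 1)
giIR ftgen 0, 0, 131072, 1, "Stairwell.wav", 0, 0, 1

instr FtconvSketch
ain diskin "loop.wav"
;iplen = 1024 samples partition length (latency vs. CPU tradeoff);
;an optional further argument could limit convolution to the true IR length
awet ftconv ain, giIR, 1024
outs ain*0.7 + awet*0.3, ain*0.7 + awet*0.3
endin
```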
EXAMPLE 05H02_ftconv.csd
<CsoundSynthesizer>
<CsOptions>
--env:SSDIR+=../SourceMaterials -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
gasig init 0
</CsInstruments>
<CsScore>
; instr 1. sound file player
; p4=input soundfile
; instr 2. convolution reverb
; p4=impulse response file
; p5=dry/wet mix (0 - 1)
; p6=reverse reverb switch (0=off,1=on)
; instr 3. reverse table contents
; p4=function table number
Suggested avenues for further exploration with ftconv could be applying envelopes to, filtering, and
time stretching or compressing the impulse files stored in function tables before they are used in
convolution. The impulse responses used here are admittedly of rather low quality, and whilst it is
always recommended to maintain as high a standard of sound quality as possible, the user should
not feel restricted from exploring the sound transformation possibilities of whatever source
material they may have lying around. Many commercial convolution algorithms demand a proprietary
impulse response format, inevitably limiting the user to the impulse responses provided
by the software manufacturers, but with Csound we have the freedom to use any sound we like.
liveconv
The liveconv opcode is an interesting extension of the ftconv opcode. Its main purpose is not only
to make dynamic reloading of the impulse response table possible, but also to do so without
artefacts. This is possible because the reloading can be done partition by partition.
The following example mimics the live input by short snippets of the fox.wav sound file. Once the
new sound starts to fill the table (each time instr Record_IR is called), it sends the number 1 via
software channel conv_update to the kupdate parameter of the liveconv opcode in instr Convolver.
This will start the process of applying the new impulse response.
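The core of this mechanism can be sketched like this. The update channel name and IR table follow the example below; the input channel name and the partition length of 2048 are assumptions for illustration:

```csound
instr ConvolverSketch
ain chnget "liveAudio" ;signal to be convolved (channel name is an assumption)
kupdate chnget "conv_update" ;1 triggers a partition-wise reload of the IR
;input, IR table, partition length, update trigger, clear trigger
aconv liveconv ain, giIR_record, 2048, kupdate, 0
outs aconv, aconv
endin
```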
EXAMPLE 05H03_liveconv.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;create IR table
giIR_record ftgen 0, 0, 131072, 2, 0
instr Input
;mimics the audio source by looping a short sound file
gaIn diskin "beats.wav", 1, 0, 1
outs gaIn, gaIn
endin
instr Record_IR
;mimic the second live input by a sound file, played with different skip times (p4)
asnd diskin "fox.wav", 1, p4
;short fade in and out for the portion of the sample which is used
amp linseg 0, 0.05, 1, p3-0.1, 1, 0.05, 0
asnd *= amp
;fill IR table
andx_IR line 0, p3, ftlen(giIR_record)
tablew asnd, andx_IR, giIR_record
endin
instr Convolver
endin
</CsInstruments>
<CsScore>
;play input sound alone first
i "Input" 0 15.65
;convolve continuously
i "Convolver" 2 13.65
</CsScore>
</CsoundSynthesizer>
;example by Oeyvind Brandtsegg and Sigurd Saue
- Line 13: A function table is created in which the impulse responses can be recorded in realtime.
A power-of-two size (here 2^17 = 131072) is preferred, as the table size will then be an integer
multiple of the partition size.
- Line 15-24: This instrument mimics the audio source on which the convolution will be applied.
Here it is beats.wav, a short sound file which is looped.
- Line 27: Whenever instr Record_IR is called, it will record an impulse response to table
giIR_record. The impulse response can be very short, but the whole table must be recorded
anyway. So the duration of the instrument (p3) must be set to the time this recording takes.
This is the length of the table divided by the sample rate: ftlen(giIR_record)/sr,
here 131072 / 44100 = 2.972 seconds.
- Line 32 ff.: The second live input, which is used for the impulse response, is mimicked here
by the file fox.wav, which is played back with different skip times in the different calls of the
instrument. The envelope amp applies a short fade in and fade out to the short portion of
the sample which we want to use. (asnd *= amp is a short form for asnd = asnd*amp.)
- Line 56-60: Depending on the intensity and the spectral content of the impulse response, the
convolution will have a rather different volume. The code in these lines balances this. The
kdB[] array holds seven different dB values for the seven calls of instr Record_IR. Each new
update message (when kupdate becomes 1) increases the kindx pointer into the array, so that
these seven dB values are applied in line 54 as ampdb(kdB[kindx]) to the convolution
output aconv.
05 I. FOURIER ANALYSIS / SPECTRAL PROCESSING
An audio signal can be described as continuous changes of amplitudes in time.1 This is what we
call time-domain. With a Fourier Transform (FT), we can transfer this time-domain signal to the
frequency domain. This can, for instance, be used to analyze and visualize the spectrum of the
signal. Fourier transform and subsequent manipulations in the frequency domain open a wide
area of far-reaching sound transformations, like time stretching, pitch shifting, cross synthesis
and any kind of spectral modification.
General Aspects
Fourier Transform is a complex method. We will briefly describe here what is most important for
the user to know about it.
As continuous changes are inherent to sounds, the FT used in musical applications follows a prin-
ciple which is well known from film or video. The continuous flow of time is divided into a number
of fixed frames. If this number is big enough (at least 20 frames per second), the continuous flow
can reasonably be represented by this sequence of FT snapshots. This is called the Short Time Fourier
Transform (STFT).
Some care has to be taken to minimise the side effects of cutting the time into snippets. Firstly
an envelope for the analysis frame is applied. As one analysis frame is often called window, the
1 Silence in the digital domain is not only when the amplitudes are always zero. Silence is any constant amplitude, be it 0
or 1 or -0.2.
2 To put this simply: If we zoom into recordings of any pitched sound, we will see periodic repetitions. If a flute is playing a
440 Hz (A4) tone, we will see the same shape every 2.27 milliseconds (1/440 second).
envelope shapes are called window function, window shape or window type. Most common are
the Hamming and the von Hann (or Hanning) window functions:
Secondly, the analysis windows are not placed side by side but overlap each other. The minimal
overlap would be to start the next window at the middle of the previous one. More common is to
use four overlaps, which would result in this image:3
We already measured the size of the analysis window in these figures in samples rather than in
milliseconds. As we are dealing with digital audio, the Fourier Transform has become a Digital
Fourier Transform (DFT). It offers some simplifications compared to the analogue FT as the num-
ber of amplitudes in one frame is finite. And moreover, there is a considerable gain of speed in
the calculation if the window size is a power of two. This version of the DFT is called Fast Fourier
Transform (FFT) and is implemented in all audio programming languages.
Usually, window sizes of 512, 1024 or 2048 samples are considered to be most suitable for one
FFT window, thus resulting in a window length of about 11, 23 and 46 milliseconds respectively.
Whether a smaller or larger window size is better depends on several considerations.
First thing to know about this is that the frequency resolution in an FFT analysis window directly
relates to its size. This is based on two aspects: the fundamental frequency, and the number of
potential harmonics which are analysed and weighted via the Fourier Transform.
The fundamental frequency of one given FFT window is the inverse of its size in seconds, related
to the sample rate. For sr=44100 Hz, the fundamental frequencies are:

- 44100 / 512 = 86.13 Hz for size=512
- 44100 / 1024 = 43.07 Hz for size=1024
- 44100 / 2048 = 21.53 Hz for size=2048
It is obvious that a larger window is better for frequency analysis at least for low frequencies. This
is even more the case as the estimated harmonics which are scanned by the Fourier Transform
are integer multiples of the fundamental frequency.4 These estimated harmonics or partials are
usually called bins in FT terminology. So, again for sr=44100 Hz, the bins are:
- bin 1 = 86.13 Hz, bin 2 = 172.26 Hz, bin 3 = 258.40 Hz for size=512
- bin 1 = 43.07 Hz, bin 2 = 86.13 Hz, bin 3 = 129.20 Hz for size=1024
- bin 1 = 21.53 Hz, bin 2 = 43.07 Hz, bin 3 = 64.60 Hz for size=2048
This means that a larger window is not only better to analyse low frequencies, it also has a bet-
ter frequency resolution in general. In fact, the window of size 2048 samples has 1024 analysis
bins from the fundamental frequency 21.53 Hz to the Nyquist frequency 22050 Hz, each of them
covering a frequency range of 21.53 Hz, whilst the window of size 512 samples has 256 analysis
bins from the fundamental frequency 86.13 Hz to the Nyquist frequency 22050 Hz, each of them
covering a frequency range of 86.13 Hz.5
Why then not always use the larger window? — Because a larger window needs more time, or in
other words: the time resolution is worse for a window size of 2048, is fair for a window size of
1024 and is better for a window size of 512.
4 Remember that FT is based on the assumption that the signal to be analysed is a periodic function.
5 For both, bin 0 is to be added, which analyses the energy at 0 Hz. So in general the number of bins is half of the window
size plus one: 257 bins for size 512, 513 bins for size 1024, 1025 bins for size 2048.
This dilemma is known as time-frequency tradeoff. We must decide for each FFT situation whether
the frequency resolution or the time resolution is more important. If, for instance, we have long
piano chords with low frequencies, we may use the bigger window size. If we analyse spoken
words of a female voice, we may use the smaller window size. Or to put it very pragmatically: we
will use the medium FFT size (1024 samples) first, and in case we experience unsatisfying results
(bad frequency response or smeared time resolution) we will change the window size.
FFT in Csound
The raw output of a Fourier Transform is a number of amplitude-phase pairs per analysis window
frame. Most Csound opcodes use another format which transforms the phase values to frequen-
cies. This format is related to the phase vocoder implementation, so the Csound opcodes of this
class are called phase vocoder opcodes and start with pv or pvs.
The pv opcodes belong to the early implementation of FFT in Csound. This group comprises the
opcodes pvadd, pvbufread, pvcross, pvinterp, pvoc, pvread and vpvoc. Note that these pv opcodes
are not designed to work in real-time.
The opcodes which are designed for real-time spectral processing are called phase vocoder stream-
ing opcodes. They all start with pvs; a rather complete list can be found on the Spectral Processing
site in the Csound Manual. They are fast and easy to use. Because of their power and diversity
they are one of the biggest strengths in using Csound.
We will focus on these pvs opcodes here, which for most use cases offer everything that is
desirable for working in the spectral domain. There is, however, a group of opcodes which allows
going back to the raw FFT output (without the phase vocoder format). They are listed as array-based
spectral opcodes in the Csound Manual.
There are several opcodes to perform this transform. The simplest one is pvsanal. It performs
on-the-fly transformation of an input audio signal aSig to a frequency signal fSig. In addition to the
audio signal input it requires some basic FFT settings:
- ifftsize is the size of the FFT. As explained above, 512, 1024 or 2048 samples are reasonable
values here.
- ioverlap is the number of samples after which the next (overlapping) FFT frame starts (often
referred to as hop size). Usually it is 1/4 of the FFT size, for instance 256 samples for an
FFT size of 1024.
- iwinsize is the size of the analysis window. Usually this is set to the same size as ifftsize.6
- iwintype is the shape of the analysis window. 0 will use a Hamming window, 1 will use a
von-Hann (or Hanning) window.

The following example uses two different sources for the audio signal:

- The audio signal derives from playing back a soundfile from the hard disk (instr 1).
- The audio signal is the live input (instr 2).
(Caution - this example can quickly start feeding back. Best results are with headphones.)
EXAMPLE 05I01_pvsanal.csd
<CsoundSynthesizer>
<CsOptions>
-i adc -o dac
--env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

gifftsiz = 1024

instr 1 ;fsig from soundfile and back to audio
asig diskin "fox.wav"
fsig pvsanal asig, gifftsiz, gifftsiz/4, gifftsiz, 1
aback pvsynth fsig
outs aback, aback
endin

instr 2 ;fsig from live input and back to audio
ain inch 1
fsig pvsanal ain, gifftsiz, gifftsiz/4, gifftsiz, 1
alisten pvsynth fsig
outs alisten, alisten
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 2 3 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
You should hear first the fox.wav sample, and then the slightly delayed live input signal. The delay
(or latency) that you will observe will depend first of all on the general settings for realtime input
(ksmps, -b and -B: see chapter 02 D), but the FFT process will add to it. The window
size here is 1024 samples, so the additional delay is 1024/44100 = 0.023 seconds. If you change
the window size gifftsiz to 2048 or to 512 samples, you should notice a larger or shorter delay.
For realtime applications, the decision about the FFT size is not only a question of better time
resolution versus better frequency resolution, but it will also be a question concerning tolerable
latency.
What happens in the example above? Firstly, the audio signal (asig or ain) is being analyzed and
transformed to an f-signal. This is done via the opcode pvsanal. Then nothing more happens
than the f-signal being transformed from the frequency domain signal back into the time domain
(an audio signal). This is called inverse Fourier transformation (IFT or IFFT) and is carried out by
the opcode pvsynth. In this case, it is just a test: to see if everything works, to hear the results
of different window sizes and to check the latency, but potentially you can insert any other pvs
opcode(s) in between this analysis and resynthesis:
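For instance, a pitch shift could be inserted in between. This is only a sketch (pvscale is explained later in this chapter; the scaling value is arbitrary):

```csound
instr AnalysisModifyResynth
asig diskin "fox.wav" ;any audio signal
fsig pvsanal asig, 1024, 256, 1024, 1 ;to the frequency domain
fmod pvscale fsig, 1.5 ;any pvs opcode(s) in between
aout pvsynth fmod ;back to the time domain
outs aout, aout
endin
```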
Working with pvsanal to create an f-signal is easy and straightforward. But if we are using an
already existing sound file, we are missing one of the interesting possibilities in working with FFT:
time stretching. This we can obtain most simply by using pvstanal instead. The t in pvstanal
stands for table; this opcode performs FFT on a sound which has been loaded into a table.7 These
are the main parameters:
- ktimescal is the time scaling ratio. 1 means normal speed, 0.5 means half speed, 2 means
double speed.
- kpitch is the pitch scaling ratio. We will keep this here at 1 which means that the pitch is not
altered.
- ktab is the function table which is being read.
pvstanal offers some more and quite interesting parameters, but we will use it here only in a simple
way to demonstrate time stretching.
EXAMPLE 05I02_pvstanal.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

gifil ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1

instr 1
iTimeScal = p4
fsig pvstanal iTimeScal, 1, 1, gifil
aout pvsynth fsig
outs aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 2.7 1 ;normal speed
i 1 3 1.3 2 ;double speed
i 1 6 4.5 0.5 ; half speed
i 1 12 17 0.1 ; 1/10 speed
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
We hear that artifacts arise for extreme time stretching. This is expected and a result of the FFT
resynthesis. Later in this chapter we will discuss how to avoid these artifacts.
The other possibility to work with a table (buffer) and get the f-signal by reading it is to use pvsbuf-
read. This opcode does not read from an audio buffer but needs a buffer which is filled with FFT
data already. This job is done by the related opcode pvsbuffer. In the next example, we wrap this
procedure in the User Defined Opcode FileToPvsBuf. This UDO is called at the first control cycle
of instrument simple_time_stretch, when timeinstk() (which counts the control cycles in an instru-
ment) outputs 1. After this job is done, the pvs-buffer is ready and stored in the global variable
gibuffer.
Time stretching is then done in the first instrument in a similar way as we performed above with
pvstanal; only that we do not directly control the speed of reading but the real-time position (in
seconds) in the buffer. In the example, we start in the middle of the sound file and read the words
“over the lazy dog” with a time stretch factor of about 10.
The second instrument can still use the buffer. Here a time stretch line is superimposed by a
trembling random movement. It changes 10 times a second and interpolates to a point which lies
between -0.2 seconds and +0.2 seconds from the current position of the slowly moving time pointer
created by the expression linseg:k(0,p3,gilen).
So although a bit harder to use, pvsbufread offers some nice possibilities. And it is reported to
perform very well, for instance when playing back many files triggered by a MIDI
keyboard.
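The basic mechanism, without the UDO, can be sketched like this (the values assume the 2.7 second fox.wav sample): pvsbuffer writes the fsig stream into a circular buffer, and pvsbufread reads from it via a time pointer in seconds.

```csound
instr BufferSketch
ain diskin "fox.wav" ;played once: fills the buffer during the first 2.7 seconds
fana pvsanal ain, 1024, 256, 1024, 1
ibuf, ktim pvsbuffer fana, 2.7 ;fsig buffer of the file's length
kpnt linseg 0, p3, 2.7 ;slow time pointer: stretches the file to p3 seconds
fread pvsbufread kpnt, ibuf
aout pvsynth fread
outs aout, aout
endin
;read the 2.7 second sample over 10 seconds
schedule("BufferSketch", 0, 10)
```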
EXAMPLE 05I03_pvsbufread.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr simple_time_stretch
gibuffer, gilen, k0 FileToPvsBuf timeinstk(), "fox.wav"
ktmpnt linseg 1.6, p3, gilen
fread pvsbufread ktmpnt, gibuffer
aout pvsynth fread
out aout, aout
endin
instr tremor_time_stretch
ktmpnt = linseg:k(0,p3,gilen) + randi:k(1/5,10)
fread pvsbufread ktmpnt, gibuffer
aout pvsynth fread
out aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 10
i 2 11 20
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The mincer opcode also provides a high-quality time- and pitch-shifting algorithm. Unlike
pvstanal and pvsbufread, it already transforms the f-signal back to the time domain, thus outputting
an audio signal.
Pitch shifting
Simple pitch shifting can be carried out by the opcode pvscale. All the frequency data in the f-
signal are scaled by a certain value. Multiplying by 2 results in transposing by an octave upwards;
multiplying by 0.5 in transposing by an octave downwards. For accepting cent values instead of
ratios as input, the cent opcode can be used.
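A sketch of this (the cent value is arbitrary):

```csound
instr PvscaleSketch
asig diskin "fox.wav"
fsig pvsanal asig, 1024, 256, 1024, 1
;scale all bin frequencies; cent(400) returns the ratio for 400 cents (major third)
fscale pvscale fsig, cent(400)
aout pvsynth fscale
outs aout, aout
endin
```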
EXAMPLE 05I04_pvscale.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
gifftsize = 1024
gioverlap = gifftsize / 4
giwinsize = gifftsize
giwinshape = 1; von-Hann window
</CsInstruments>
<CsScore>
i 1 0 3 1; original pitch
i 1 3 3 .5; octave lower
i 1 6 3 2 ;octave higher
i 2 9 3 0
i 2 9 3 400 ;major third
i 2 9 3 700 ;fifth
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Pitch shifting via FFT resynthesis is very simple in general, but rather more complicated in detail.
With speech for instance, there is a problem because of the formants. If we simply scale the
frequencies, the formants are shifted, too, and the sound gets the typical helium voice effect. There
are some parameters in the pvscale opcode, and some other pvs-opcodes which can help to avoid
this, but the quality of the results will always depend to an extent upon the nature of the input
sound.
As mentioned above, simple pitch shifting can also be performed via pvstanal or mincer.
Spectral Shifting
Rather than multiplying the bin frequencies by a scaling factor, which results in pitch shifting, it is
also possible to add a certain amount to the single bin frequencies. This results in an effect which
is called frequency shifting. It resembles the shifted spectra in ring modulation which has been
described at the end of chapter 04C.
The frequency-domain spectral shifting which is performed by the pvshift opcode has some
important differences compared to the time-domain ring modulation: rather than producing both
sum and difference frequencies, it shifts the spectrum in one direction only, and it allows to set a
lowest frequency below which the bins are left unchanged.
The following example performs some different shifts on a single viola tone.
EXAMPLE 05I05_pvshift.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Shift
aSig diskin "BratscheMono.wav"
fSig pvsanal aSig, 1024, 256, 1024, 1
fShift pvshift fSig, p4, p5
aShift pvsynth fShift
out aShift, aShift
endin
</CsInstruments>
<CsScore>
i "Shift" 0 9 0 0 ;no shift (base freq is 218)
i . + . 50 0 ;shift all by 50 Hz
i . + . 150 0 ;shift all by 150 Hz
i . + . 500 0 ;shift all by 500 Hz
i . + . 150 230 ;only above 230 Hz by 150 Hz
i . + . . 460 ;only above 460 Hz
i . + . . 920 ;only above 920 Hz
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Cross Synthesis
Working in the frequency domain makes it possible to combine or cross the spectra of two sounds.
As the Fourier transform of an analysis frame results in a frequency and an amplitude value for
each frequency bin, there are many different ways of performing cross synthesis. The most com-
mon methods are:
- Combine the amplitudes of sound A with the frequencies of sound B. This is the classical
phase vocoder approach. If the frequencies are not completely taken from sound B, but represent
an interpolation between A and B, the cross synthesis is more flexible and adjustable. This
is what pvsvoc does.
- Combine the frequencies of sound A with the amplitudes of sound B. Give the user flexibility by
scaling the amplitudes between A and B: pvscross.
- Get the frequencies from sound A. Multiply the amplitudes of A and B. This can be described
as spectral filtering. pvsfilter gives a flexible portion of this filtering effect.
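The first of these methods can be sketched as follows, using the speech and viola samples that appear elsewhere in this chapter (kdepth=1 takes the frequencies entirely from sound B):

```csound
instr CrossSketch
aA diskin "fox.wav", 1, 0, 1 ;sound A delivers the amplitudes
aB diskin "BratscheMono.wav", 1, 0, 1 ;sound B delivers the frequencies
fA pvsanal aA, 1024, 256, 1024, 1
fB pvsanal aB, 1024, 256, 1024, 1
fvoc pvsvoc fA, fB, 1, 1 ;kdepth=1: frequencies entirely from B
aout pvsynth fvoc
outs aout, aout
endin
```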
This is an example of phase vocoding. It is nice to have speech as sound A, and a rich sound, like
classical music, as sound B. Here the fox sample is being played at half speed and sings through
the music of sound B:
EXAMPLE 05I06_phase_vocoder.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

;analysis defaults and sound file tables (reconstructed; adjust file names to your samples)
giamp = 1 ;amplitude scaling
gipitch = 1 ;pitch scaling
gidet = 0 ;onset detection
giwrap = 1 ;loop reading
giskip = 0 ;start at the beginning
gifftsiz = 1024 ;fft size
giovlp = gifftsiz/8 ;overlap size
githresh = 0 ;threshold
gifilA ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1
gifilB ftgen 0, 0, 0, 1, "ClassGuit.wav", 0, 0, 1
instr 1
;read "fox.wav" in half speed and cross with classical guitar sample
fsigA pvstanal .5, giamp, gipitch, gifilA, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
fsigB pvstanal 1, giamp, gipitch, gifilB, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
fvoc pvsvoc fsigA, fsigB, 1, 1
aout pvsynth fvoc
aenv linen aout, .1, p3, .5
out aenv
endin
</CsInstruments>
<CsScore>
i 1 0 11
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
EXAMPLE 05I07_pvscross.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

;analysis defaults and sound file tables (reconstructed; adjust file names to your samples)
giamp = 1 ;amplitude scaling
gipitch = 1 ;pitch scaling
gidet = 0 ;onset detection
giwrap = 1 ;loop reading
giskip = 0 ;start at the beginning
gifftsiz = 1024 ;fft size
giovlp = gifftsiz/8 ;overlap size
githresh = 0 ;threshold
gifilA ftgen 0, 0, 0, 1, "BratscheMono.wav", 0, 0, 1
gifilB ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1
instr 1
;cross viola with "fox.wav" in half speed
fsigA pvstanal 1, giamp, gipitch, gifilA, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
fsigB pvstanal .5, giamp, gipitch, gifilB, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
fcross pvscross fsigA, fsigB, 0, 1
aout pvsynth fcross
aenv linen aout, .1, p3, .5
out aenv
endin
</CsInstruments>
<CsScore>
i 1 0 11
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The last of these examples shows spectral filtering via pvsfilter. The well-known fox (sound A) is
now filtered by the viola (sound B). The resulting intensity depends upon the amplitudes of sound
B, and if the amplitudes are strong enough, you will hear a resonating effect:
EXAMPLE 05I08_pvsfilter.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

;analysis defaults and sound file tables (reconstructed; adjust file names to your samples)
giamp = 1 ;amplitude scaling
gipitch = 1 ;pitch scaling
gidet = 0 ;onset detection
giwrap = 1 ;loop reading
giskip = 0 ;start at the beginning
gifftsiz = 1024 ;fft size
giovlp = gifftsiz/8 ;overlap size
githresh = 0 ;threshold
gifilA ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1
gifilB ftgen 0, 0, 0, 1, "BratscheMono.wav", 0, 0, 1
instr 1
;filters "fox.wav" (half speed) by the spectrum of the viola (double speed)
fsigA pvstanal .5, giamp, gipitch, gifilA, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
fsigB pvstanal 2, 5, gipitch, gifilB, gidet, giwrap, giskip,\
gifftsiz, giovlp, githresh
ffilt pvsfilter fsigA, fsigB, 1
aout pvsynth ffilt
aenv linen aout, .1, p3, .5
out aenv
endin
</CsInstruments>
<CsScore>
i 1 0 11
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
- For pvsmooth, the kacf and kfcf parameters apply a low-pass filter to the amplitudes and
the frequencies of the f-signal. The range is 0-1 for each, where 0 is the lowest and 1 the
highest cutoff frequency. Lower values will smooth more, so the effect will be stronger.
- For pvsblur, kblurtime specifies the time in seconds over which the single FFT windows
will be averaged.
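Conceptually, pvsmooth runs each bin's amplitude (and frequency) track through a one-pole low-pass filter over time; a sketch of such per-track smoothing, assuming a simple coefficient mapping rather than Csound's exact internals:

```python
def smooth_track(values, cutoff):
    """One-pole low-pass over a per-bin track; cutoff in 0..1, lower = smoother."""
    out, state = [], values[0]
    for v in values:
        state += cutoff * (v - state)  # small cutoff -> slow response, more smoothing
        out.append(state)
    return out

track = [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print(smooth_track(track, 1.0))  # cutoff 1 passes the track unchanged
print(smooth_track(track, 0.5))  # cutoff 0.5 blurs the attack and the release
```

pvsblur instead averages whole analysis frames over kblurtime seconds, which smears both amplitude and frequency data at once.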
The following example is an attempt to reduce the amount of artifacts in extreme time stretching.
Note that pvstanal itself offers the best method to reduce artifacts in spoken word, as it can leave
onsets unstretched (via kdetect, which is on by default).
EXAMPLE 05I09_pvsmooth_pvsblur.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m128 --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
gifil ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1 ;sound file as function table for pvstanal
instr Raw
fStretch pvstanal 1/10, 1, 1, gifil, 0 ;kdetect is turned off
aStretch pvsynth fStretch
out aStretch, aStretch
endin
instr Smooth
iAmpCutoff = p4 ;0-1
iFreqCutoff = p5 ;0-1
fStretch pvstanal 1/10, 1, 1, gifil, 0
fSmooth pvsmooth fStretch, iAmpCutoff, iFreqCutoff
aSmooth pvsynth fSmooth
out aSmooth, aSmooth
endin
instr Blur
iBlurtime = p4 ;sec
fStretch pvstanal 1/10, 1, 1, gifil, 0
fBlur pvsblur fStretch, iBlurtime, 1
aSmooth pvsynth fBlur
out aSmooth, aSmooth
endin
instr Smooth_var
fStretch pvstanal 1/10, 1, 1, gifil, 0
kAmpCut randomi .001, .1, 10, 3
kFreqCut randomi .05, .5, 50, 3
fSmooth pvsmooth fStretch, kAmpCut, kFreqCut
aSmooth pvsynth fSmooth
out aSmooth, aSmooth
endin
instr Blur_var
kBlurtime randomi .005, .5, 200, 3
fStretch pvstanal 1/10, 1, 1, gifil, 0
fBlur pvsblur fStretch, kBlurtime, 1
aSmooth pvsynth fBlur
out aSmooth, aSmooth
endin
instr SmoothBlur
iacf = p4
ifcf = p5
iblurtime = p6
fStretch pvstanal 1/10, 1, 1, gifil, 0
fSmooth pvsmooth fStretch, iacf, ifcf
fBlur pvsblur fSmooth, iblurtime, 1
aOut pvsynth fBlur
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Raw" 0 16
i "Smooth" 17 16 .01 .1
i "Blur" 34 16 .2
i "Smooth_var" 51 16
i "Blur_var" 68 16
i "SmoothBlur" 85 16 1 1 0
i . 102 . .1 1 .25
i . 119 . .01 .1 .5
i . 136 . .001 .01 .75
i . 153 . .0001 .001 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz and farhad ilaghi hosseini
pvsbin
The most fundamental extraction of single bins can be done with the pvsbin opcode. It takes the
f-signal and the bin number as input, and returns the amplitude and the frequency of the bin. These
values can be used to drive an oscillator which resynthesizes this bin.
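The bin numbers used below translate directly to frequencies: with an FFT size of 1024 at sr = 44100, bin b is centered at b · sr/1024 Hz. A quick check of this arithmetic (independent of Csound):

```python
sr, fftsize = 44100, 1024

def bin_freq(b):
    """Center frequency in Hz of FFT bin b."""
    return b * sr / fftsize

print(bin_freq(10))  # about 430.66 Hz
print(bin_freq(40))  # about 1722.66 Hz
```

So bins 10, 20, 30 and 40 in the example cover roughly 430 Hz to 1.7 kHz.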
The next example shows three different applications. First, instr SingleBin is called four times,
performing bins 10, 20, 30 and 40. Then instr FourBins calls the four instances of SingleBin at the
same time, so we hear the four bins together. Finally, instr SlidingBins uses the fact that the bin
number can be given to pvsbin as a k-rate variable. The line kBin randomi 1, 50, 200, 3
produces changing bin numbers between 1 and 50, at a rate of 200 Hz.
Note that we always smooth the bin amplitudes kAmp by applying port(kAmp,.01). Raw
kAmp would produce clicks, whereas port(kAmp,.1) would remove the small attacks.
EXAMPLE 05I10_pvsbin.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m128 --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr SingleBin
iBin = p4 //bin number
aSig diskin "fox.wav"
fSig pvsanal aSig, 1024, 256, 1024, 1
kAmp, kFreq pvsbin fSig, iBin
aBin poscil port(kAmp,.01), kFreq
aBin *= iBin/10
out aBin, aBin
endin
instr FourBins
iCount = 1
while iCount < 5 do
schedule("SingleBin",0,3,iCount*10)
iCount += 1
od
endin
instr SlidingBins
kBin randomi 1,50,200,3
aSig diskin "fox.wav"
fSig pvsanal aSig, 1024, 256, 1024, 1
kAmp, kFreq pvsbin fSig, int(kBin)
aBin poscil port(kAmp,.01), kFreq
aBin *= kBin/10
out aBin, aBin
endin
</CsInstruments>
<CsScore>
i "SingleBin" 0 3 10
i . + . 20
i . + . 30
i . + . 40
i "FourBins" 13 3
i "SlidingBins" 17 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
pvstrace
Another way to retrieve a selection of bins is offered by the opcode pvstrace. Here, only the N
loudest bins are written into the f-signal which this opcode outputs.
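The selection can be pictured as keeping only the N largest magnitudes of a spectrum frame and zeroing the rest — a NumPy sketch of the idea, not Csound's implementation:

```python
import numpy as np

def keep_n_loudest(mags, n):
    """Zero all but the n largest magnitude bins of one analysis frame."""
    out = np.zeros_like(mags)
    idx = np.argsort(mags)[-n:]  # indices of the n loudest bins
    out[idx] = mags[idx]
    return out

frame = np.array([0.1, 0.9, 0.3, 0.7, 0.05])
print(keep_n_loudest(frame, 2))  # keeps only 0.9 and 0.7
```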
A first simple example lets pvstrace play, in sequence, the 1, 2, 4, 8 and 16 loudest bins.
EXAMPLE 05I11_pvstrace_simple.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Simple
aSig diskin "fox.wav"
fSig pvsanal aSig, 1024, 256, 1024, 1
fTrace pvstrace fSig, p4
aTrace pvsynth fTrace
out aTrace, aTrace
endin
</CsInstruments>
<CsScore>
i "Simple" 0 3 1
i . + . 2
i . + . 4
i . + . 8
i . + . 16
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
An optional second output of pvstrace returns an array with the kn most prominent bin numbers.
As a demonstration, this example passes only the loudest bin to pvsbin and resynthesizes it with
an oscillator.
EXAMPLE 05I12_pvstrace_array.csd
<CsoundSynthesizer>
<CsOptions>
-o dac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr LoudestBin
aSig diskin "fox.wav"
fSig pvsanal aSig, 1024, 256, 1024, 1
fTrace, kBins[] pvstrace fSig, 1, 1
kAmp, kFreq pvsbin fSig, kBins[0]
aLoudestBin poscil port(kAmp,.01), kFreq
out aLoudestBin, aLoudestBin
endin
</CsInstruments>
<CsScore>
i "LoudestBin" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
05 K. ATS RESYNTHESIS
General overview
The ATS technique (Analysis-Transformation-Synthesis) was developed by Juan Pampin. A com-
prehensive explanation of this technique can be found in his ATS Theory1 but, essentially, it may be
said that it represents two aspects of the analyzed signal: the deterministic part and the stochas-
tic or residual part. This model was initially conceived by Julius Orion Smith and Xavier Serra,2 but
ATS refines certain aspects of it, such as the weighting of the spectral components on the basis
of their Signal-to-Mask-Ratio (SMR).3
The deterministic part consists of sinusoidal trajectories with varying amplitude, frequency and
phase. It is achieved by means of a cleaning (depuration) of the spectral data obtained using STFT
(Short-Time Fourier Transform) analysis.
The stochastic part is also termed residual, because it is obtained by subtracting the deterministic
signal from the original signal. For this purpose, the deterministic part is synthesized preserving
the phase alignment of its components in the second step of the analysis. The residual part is
represented by variable noise energy values across the 25 critical bands.
The main advantages of this representation are:
1. The splitting between deterministic and stochastic parts allows an independent treatment
of two different qualitative aspects of an audio signal.
2. The representation of the deterministic part by means of sinusoidal trajectories improves
the information and presents it in a way that is much closer to the way musicians think of
sound. Therefore, it allows many classical spectral transformations (such as the suppression
of partials or their frequency warping) in a more flexible and conceptually clearer way.
3. The representation of the residual part by means of noise values across the 25 critical bands
simplifies the information and its further reconstruction. Namely, it is possible to overcome
the common artifacts that arise in synthesis using oscillator banks or IDFT when the time
of a noisy signal analyzed using an FFT is warped.
ATS files start with a header in which a description of the analysis is stored (such as frame rate,
duration, number of sinusoidal trajectories, etc.); the macros in the example below list the header
fields.
The ATS frame type may be, at present, one of the following four:
Type 1: only sinusoidal trajectories with amplitude and frequency data.
Type 2: only sinusoidal trajectories with amplitude, frequency and phase data.
Type 3: sinusoidal trajectories with amplitude and frequency data as well as residual data.
Type 4: sinusoidal trajectories with amplitude, frequency and phase data as well as residual data.
So, after the header, an ATS file with frame type 4, np number of partials and nf frames will have:
Frame 1:
time tag
Amp.of partial 1, Freq. of partial 1, Phase of partial 1
..................................................................
..................................................................
Amp.of partial np, Freq. of partial np, Phase of partial np
......................................................................
Frame nf:
time tag
Amp.of partial 1, Freq. of partial 1, Phase of partial 1
..................................................................
..................................................................
Amp.of partial np, Freq. of partial np, Phase of partial np
As an example, an ATS file of frame type 4, with 100 frames and 10 partials, will need:
- 100 * 10 * 3 double floats for storing the amplitude, frequency and phase values of 10 partials
along 100 frames.
- 25 * 100 double floats for storing the noise information of the 25 critical bands along 100
frames.
- 100 double floats for storing the time tag of each frame.
Header: 10 * 8 = 80 bytes
Deterministic data: 3000 * 8 = 24000 bytes
Residual data: 2500 * 8 = 20000 bytes
Time tag data: 100 * 8 = 800 bytes
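The byte counts above can be verified with a few lines of arithmetic (assuming 8-byte double floats throughout, as the format specifies):

```python
def ats_file_size(frames, partials, header_fields=10):
    """Size in bytes of a type-4 ATS file: header, time tags,
    deterministic (amp/freq/phase per partial) and 25-band residual data."""
    double = 8
    header = header_fields * double
    time_tags = frames * double
    deterministic = frames * partials * 3 * double
    residual = frames * 25 * double
    return header + time_tags + deterministic + residual

print(ats_file_size(100, 10))  # 80 + 800 + 24000 + 20000 = 44880
```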
The following Csound code shows how to retrieve the data of the header of an ATS file.
EXAMPLE 05K01_ats_header.csd
<CsoundSynthesizer>
<CsOptions>
-n -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
;Some macros
#define ATS_SR # 0 # ;sample rate (Hz)
#define ATS_FS # 1 # ;frame size (samples)
#define ATS_WS # 2 # ;window Size (samples)
#define ATS_NP # 3 # ;number of Partials
#define ATS_NF # 4 # ;number of Frames
#define ATS_AM # 5 # ;maximum Amplitude
#define ATS_FM # 6 # ;maximum Frequency (Hz)
#define ATS_DU # 7 # ;duration (seconds)
#define ATS_TY # 8 # ;ATS file Type
instr 1
iats_file=p4
;instr1 just reads the file header and loads its data into several variables
;and prints the result in the Csound prompt.
i_sampling_rate ATSinfo iats_file, $ATS_SR
i_frame_size ATSinfo iats_file, $ATS_FS
i_window_size ATSinfo iats_file, $ATS_WS
i_number_of_partials ATSinfo iats_file, $ATS_NP
i_number_of_frames ATSinfo iats_file, $ATS_NF
i_max_amp ATSinfo iats_file, $ATS_AM
i_max_freq ATSinfo iats_file, $ATS_FM
i_duration ATSinfo iats_file, $ATS_DU
i_ats_file_type ATSinfo iats_file, $ATS_TY
print i_sampling_rate
print i_frame_size
print i_window_size
print i_number_of_partials
print i_number_of_frames
print i_max_amp
print i_max_freq
print i_duration
print i_ats_file_type
endin
</CsInstruments>
<CsScore>
;change to put any ATS file you like
#define ats_file #"basoon-C4.ats"#
; st dur atsfile
i1 0 0 $ats_file
e
</CsScore>
</CsoundSynthesizer>
;Example by Oscar Pablo Di Liscia
Performing ATS Analysis with the ATSA Command-line Utility of Csound
Another very good GUI program that can be used for such purposes is Qatsh, a Qt 4 port.
In order to get a good analysis, the sound to be analysed should meet the following requirements:
1. The ATS analysis was meant to analyse isolated, individual sounds. This means that the
analysis of sequences and/or superpositions of sounds, though possible, is not likely to ren-
der optimal results.
2. Must have been recorded with a good signal-to-noise ratio, and should not contain unwanted
noises.
3. Must have been recorded without reverberation and/or echoes.
The analysis data, in turn, should meet the following requirements:
1. It must have a good temporal resolution of the frequency, amplitude, phase and noise (if
any) data. The tradeoff between temporal and frequency resolution is a well-known issue in
FFT-based spectral analysis.
2. The deterministic and stochastic (also termed residual) data must be reasonably separated
in their respective ways of representation. This means that, if a sound has both deterministic
and stochastic data, the former must be represented by sinusoidal trajectories, whilst the
latter must be represented by energy values across the 25 critical bands. This allows a more
effective treatment of both types of data in the synthesis and transformation processes.
3. If the analysed sound is pitched, the sinusoidal trajectories (deterministic part) should be as
stable as possible and ordered according to the original sound's harmonics. This means that
the first trajectory should represent the first (fundamental) harmonic, the second trajectory
should represent the second harmonic, and so on. This allows further transformation
processes (such as, for example, selecting the odd harmonics to give them a different
treatment than the others) to be performed easily during resynthesis.
Whilst the first requirement is unavoidable in order to get a useful analysis, the second and third
are sometimes almost impossible to meet in full, and their accomplishment often depends on the
user's objectives.
Synthesizing ATS Analysis Files
ATSread
This opcode reads the deterministic ATS data from an ATS file. It outputs frequency/amplitude
pairs of a sinusoidal trajectory corresponding to a specific partial number, according to a time
pointer that must be delivered. As the unit works at k-rate, the frequency and amplitude data must
be interpolated in order to avoid unwanted clicks in the resynthesis.
The following example reads and synthesizes the 10 partials of an ATS analysis corresponding to a
steady 440 cps flute sound. Since the instrument is designed to synthesize only one partial of the
ATS file, the mixing of several of them must be obtained by performing several notes in the score
(the use of Csound's macros is strongly recommended in this case). Though not the most practical way
of synthesizing ATS data, this method facilitates individual control of the frequency and amplitude
values of each one of the partials, which is not possible any other way. In the example that follows,
even numbered partials are attenuated in amplitude, resulting in a sound that resembles a clarinet.
Amplitude and frequency envelopes could also be used in order to affect a time changing weighting
of the partials. Finally, the amplitude and frequency values could be used to drive other synthesis
units, such as filters or FM synthesis networks of oscillators.
EXAMPLE 05K02_atsread.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
iamp = p4 ;amplitude scaler
ifreq = p5 ;frequency scaler
ipar = p6 ;partial required
itab = p7 ;audio table
iatsfile = p8 ;ats file
ktime line 0, p3, p3 ;time pointer (reads the file at normal speed)
kfreq, kamp ATSread ktime, iatsfile, ipar ;get frequency and amplitude values
aamp interp kamp ;interpolate amplitude values
afreq interp kfreq ;interpolate frequency values
aout oscil3 aamp*iamp, afreq*ifreq, itab ;synthesize with amp and freq scaling
out aout
endin
</CsInstruments>
<CsScore>
; sine wave table
f 1 0 16384 10 1
#define atsfile #"flute-A5.ats"#
We can use arrays to simplify the code in this example, and to choose different numbers of partials:
EXAMPLE 05K03_atsread2.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr Play
iamp = p4 ;amplitude scaler
ipar = p5 ;partial required
idur = p6 ;ats file duration
out aout
endin
</CsInstruments>
<CsScore>
; strt dur number of partials
i "Master" 0 3 1
i . + . 3
i . + . 10
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia and Joachim Heintz
ATSreadnz
This opcode is similar to ATSread, except that it reads the noise data of an ATS file, delivering
k-rate energy values for the requested critical band. In order for this opcode to work, the input ATS
file must be either type 3 or 4 (types 1 and 2 do not contain noise data). ATSreadnz is simpler than
ATSread: whilst the number of partials of an ATS file is variable, the noise data (if any) is always
stored as 25 values per analysis frame, each value corresponding to the energy of the noise in one
of the critical bands. The three required arguments are: a time pointer, an ATS file name and the
number of the critical band required (which, of course, must have a value between 1 and 25).
The following example is similar to the previous one. As the instrument is designed to synthesize
only one noise band of the ATS file, the mixing of several of them must be obtained by performing
several notes in the score. In this example the synthesis of the noise band is done using Gaussian
noise filtered with a resonator (i.e., band-pass) filter. This is not the method used by the ATS
synthesis opcodes that will be shown further on, but its use in this example is meant to stress
again the fact that the use of ATS analysis data may be completely independent of its generation.
In this case, also, a macro that performs the synthesis of the 25 critical bands was programmed.
The ATS file used corresponds to a female speech sound that lasts for 3.633 seconds, and in the
example it is stretched to 10.899 seconds, that is, three times its original duration. This shows one
of the advantages of the deterministic plus stochastic data representation of ATS: the stochastic
(“noisy”) part of a signal may be stretched in the resynthesis without the artifacts that commonly
arise when the same data is represented by cosine components (as in FFT-based resynthesis).
Note that, because the stochastic noise values correspond to energy (i.e., intensity), the square
root of them must be computed in order to get proper amplitude values.
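The instrument below derives each resonator's settings from the critical-band edge frequencies; the same computation in a short sketch (band edges as in the f2 table of the example):

```python
# the 25 critical bands' edge frequencies (Hz), as in the f2 table below
edges = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270, 1480,
         1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300, 6400, 7700,
         9500, 12000, 15500, 20000]

def band_filter(band, bw_factor=1.0):
    """Return (center frequency, bandwidth) in Hz of critical band 1..25,
    with the bandwidth expressed as a fraction of the center frequency."""
    lo, hi = edges[band - 1], edges[band]
    cf = lo + (hi - lo) * 0.5
    return cf, cf * bw_factor

print(band_filter(1))   # (50.0, 50.0)
print(band_filter(25))  # (17750.0, 17750.0)
```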
EXAMPLE 05K04_atsreadnz.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
itabc = p7 ;table with the 25 critical band frequency edges
iscal = 1 ;reson filter scaling factor
iamp = p4 ;amplitude scaler
iband = p5 ;energy band required
if1 table iband-1, itabc ;lower edge
if2 table iband, itabc ;upper edge
idif = if2-if1
icf = if1 + idif*.5 ;center frequency value
ibw = icf*p6 ;bandwidth
iatsfile = p8 ;ats file name
ktime line 0, p3, 3.633 ;time pointer (the file lasts 3.633 seconds)
ken ATSreadnz ktime, iatsfile, iband ;get the energy value of the band
anoise gauss 1
aout reson anoise*sqrt(ken), icf, ibw, iscal ;synthesize with scaling
out aout*iamp
endin
</CsInstruments>
<CsScore>
; sine wave table
f1 0 16384 10 1
;the 25 critical bands edge's frequencies
f2 0 32 -2 0 100 200 300 400 510 630 770 920 1080 1270 1480 1720 2000 2320 \
2700 3150 3700 4400 5300 6400 7700 9500 12000 15500 20000
;a macro that synthesize the noise data along all the 25 critical bands
#define all_bands(start'dur'amp'bw'file)
#
i1 $start $dur $amp 1 $bw 2 $file
i1 . . . 2 . . $file
i1 . . . 3 . . .
i1 . . . 4 . . .
i1 . . . 5 . . .
i1 . . . 6 . . .
i1 . . . 7 . . .
i1 . . . 8 . . .
i1 . . . 9 . . .
i1 . . . 10 . . .
i1 . . . 11 . . .
i1 . . . 12 . . .
i1 . . . 13 . . .
i1 . . . 14 . . .
i1 . . . 15 . . .
i1 . . . 16 . . .
i1 . . . 17 . . .
i1 . . . 18 . . .
i1 . . . 19 . . .
i1 . . . 20 . . .
i1 . . . 21 . . .
i1 . . . 22 . . .
i1 . . . 23 . . .
i1 . . . 24 . . .
i1 . . . 25 . . .
#
;invoke the macro: stretch the 3.633 sec file to three times its duration
$all_bands(0'10.899'1'.05'"female-speech.ats")
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
ATSbufread, ATSinterpread, ATSpartialtap
The ATSbufread opcode reads an ATS file and stores its frequency and amplitude data into an
internal table. The first and third input arguments are the same as in the ATSread and ATSreadnz
opcodes: a time pointer and an ATS file name. The second input argument is a frequency scaler.
The fourth argument is the number of partials to be stored. Finally, this opcode may take two
optional arguments: the first partial and the increment of partials to be read, which default to 0
and 1 respectively.
Although this opcode does not have any output, the ATS frequency and amplitude data is made
available to other opcodes. Here, two examples are provided: the first uses the ATSinterpread
opcode and the second the ATSpartialtap opcode.
The ATSinterpread opcode reads an ATS table generated by the ATSbufread opcode and outputs
an amplitude value, interpolated between the amplitudes of the two frequency trajectories that
are closest to a given frequency value. The only argument this opcode takes is the desired
frequency value.
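The interpolation can be sketched as a linear crossfade between the amplitudes of the two nearest trajectories, weighted by frequency distance (a sketch of the idea, not Csound's exact code):

```python
def interp_amp(freq, partials):
    """partials: list of (frequency, amplitude) pairs, sorted by frequency.
    Return the amplitude linearly interpolated between the two
    trajectories that bracket the requested frequency."""
    for (f_lo, a_lo), (f_hi, a_hi) in zip(partials, partials[1:]):
        if f_lo <= freq <= f_hi:
            w = (freq - f_lo) / (f_hi - f_lo)  # 0 at lower, 1 at upper partial
            return a_lo + w * (a_hi - a_lo)
    return 0.0  # outside the analysed range

# two partials at 440 Hz and 880 Hz with amplitudes 1.0 and 0.5
print(interp_amp(660, [(440, 1.0), (880, 0.5)]))  # 0.75
```

The closer the requested frequency lies to one partial, the more its amplitude dominates the result — which is exactly the morphing behaviour described below.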
The following example synthesizes five sounds. All the data is taken from the ATS file test.ats. The
first and final sounds match the frequencies of the first and the second partials of the analysis
file, and thus have amplitude values close to the ones in the original ATS file. The other three
sounds (second, third and fourth) have frequencies that lie in between those of the first and
second partials of the ATS file, and their amplitudes are scaled by an interpolation between the
amplitudes of the first and second partials. The closer the requested frequency approaches that
of a partial, the more the amplitude envelope rendered by ATSinterpread resembles the one of
this partial. So, the example shows a gradual morphing between the amplitude envelope of the
first partial and the amplitude envelope of the second, according to their frequency values.
EXAMPLE 05K05_atsinterpread.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
out aout*iamp
endin
</CsInstruments>
<CsScore>
; sine wave table
f 1 0 16384 10 1
#define atsfile #"test.ats"#
The ATSpartialtap opcode reads an ATS table generated by the ATSbufread opcode and outputs
the frequency and amplitude k-rate values of a specific partial number. The example presented
here uses four instances of this opcode, reading from a single ATS buffer obtained with
ATSbufread, in order to drive the frequency and amplitude of four oscillators. This allows the
mixing of different combinations of partials, as shown by the three notes triggered by the
designed instrument.
EXAMPLE 05K06_atspartialtap.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
iamp = p4/4 ;amplitude scaler
ifreq = p5 ;frequency scaler
itab = p6 ;audio table
ip1 = p7 ;first partial to be synthesized
ip2 = p8 ;second partial to be synthesized
ip3 = p9 ;third partial to be synthesized
ip4 = p10 ;fourth partial to be synthesized
iatsfile = p11 ;atsfile
ktime line 0, p3, p3 ;time pointer
ATSbufread ktime, ifreq, iatsfile, 20 ;store the first 20 partials in the internal buffer
kf1,ka1 ATSpartialtap ip1 ;get the freq and amp values of partial number ip1
af1 interp kf1
aa1 interp ka1
kf2,ka2 ATSpartialtap ip2 ;ditto
af2 interp kf2
aa2 interp ka2
kf3,ka3 ATSpartialtap ip3 ;ditto
af3 interp kf3
aa3 interp ka3
kf4,ka4 ATSpartialtap ip4 ;ditto
af4 interp kf4
aa4 interp ka4
a1 oscil3 aa1, af1, itab ;one oscillator per partial
a2 oscil3 aa2, af2, itab
a3 oscil3 aa3, af3, itab
a4 oscil3 aa4, af4, itab
out (a1+a2+a3+a4)*iamp
endin
</CsInstruments>
<CsScore>
; sine wave table
f 1 0 16384 10 1
#define atsfile #"oboe-A5.ats"#
; start dur amp freq atab part#1 part#2 part#3 part#4 atsfile
i1 0 3 10 1 1 1 5 11 13 $atsfile
i1 + 3 7 1 1 1 6 14 17 $atsfile
i1 + 3 400 1 1 15 16 17 18 $atsfile
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
ATSadd
The ATSadd opcode synthesizes deterministic data from an ATS file using an array of table-lookup
oscillators whose amplitude and frequency values are obtained by linear interpolation of the ones
in the ATS file, according to the time of the analysis requested by a time pointer. The frequency
of all the partials may be modified at k-rate, allowing shifting and/or frequency modulation. An
ATS file, a time pointer and a function table are required. The table is supposed to contain either a
cosine or a sine function, but nothing prevents the user from experimenting with other functions.
Some care must be taken in the latter case, so as not to produce foldover (frequency aliasing). The
user may also request a number of partials smaller than the number of partials of the ATS file (by
means of the inpars variable in the example below). There are also two optional arguments: a
partial offset (i.e., the first partial that will be taken into account for the synthesis, by means of
the ipofst variable in the example below) and a step to select the partials (by means of the inpincr
variable in the example below). Default values for these arguments are 0 and 1 respectively. Finally,
the user may define a final optional argument that references a function table that will be used to
rescale the amplitude values during the resynthesis. The amplitude values of all the partials along
all the frames are rescaled to the table length and used as indexes to look up a scaling amplitude
value in the table. For example, in a table of size 1024, the scaling amplitude for all the 0.5 amplitude
values (-6 dBFS) that are found in the ATS file is at position 512 (1024/2). Very complex filtering
effects can be obtained by carefully setting these gating tables according to the amplitude values
of a particular ATS analysis.
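The gating-table mechanism can be pictured as follows — a sketch assuming a 1024-point gate table whose looked-up value acts as a multiplicative factor:

```python
def gated_amp(amp, gate_table):
    """Rescale a partial's amplitude (0..1) through a gating table:
    the amplitude is mapped to a table index, and the value found
    there is used as the scaling factor."""
    idx = min(int(amp * len(gate_table)), len(gate_table) - 1)
    return amp * gate_table[idx]

# a gate that mutes everything below amplitude 0.5 and passes the rest
gate = [0.0] * 512 + [1.0] * 512
print(gated_amp(0.3, gate))  # 0.0 -> quiet partials are removed
print(gated_amp(0.8, gate))  # 0.8 -> loud partials pass unchanged
```

Shaping the gate table differently (e.g. as a ramp) turns this into an amplitude-dependent spectral filter.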
EXAMPLE 05K07_atsadd.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
;Some macros
#define ATS_NP # 3 # ;number of Partials
#define ATS_DU # 7 # ;duration
instr 1
out asig*iamp
endin
</CsInstruments>
<CsScore>
; start dur amp freq atable npars offset pincr gatefn atsfile
i1 0 2.82 1 0 1 0 0 1 0 $ats_file
i1 + . 1 0 1 0 0 1 2 $ats_file
i1 + . .8 0 1 0 0 1 3 $ats_file
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
ATSaddnz
The ATSaddnz opcode synthesizes residual (“noise”) data from an ATS file using the method
explained above. This opcode works in a similar fashion to ATSadd, except that frequency
warping of the noise bands is not permitted and the maximum number of noise bands will always
be 25 (the 25 critical bands, see Zwicker/Fastl, footnote 3). The optional arguments offset and
increment work in a similar fashion to those in ATSadd. The ATSaddnz opcode allows the
synthesis of several combinations of noise bands, but individual amplitude scaling of them is not
possible.
EXAMPLE 05K08_atsaddnz.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
;Some macros
#define NB # 25 # ;number noise bands
#define ATS_DU # 7 # ;duration
instr 1
/*read some ATS data from the file header*/
iatsfile = p8
i_duration ATSinfo iatsfile, $ATS_DU
out asig*iamp
endin
</CsInstruments>
<CsScore>
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
ATSsinnoi
The ATSsinnoi opcode synthesizes both deterministic and residual (“noise”) data from an ATS
file. This opcode may be regarded as a combination of the two previous opcodes, but with the
addition of individual amplitude scaling of the mixes of deterministic and residual parts. All the
arguments of ATSsinnoi are the same as those of the two previous opcodes, except for the two
k-rate variables ksinlev and knoislev, which allow individual, and possibly time-changing, scaling
of the deterministic and residual parts of the synthesis.
EXAMPLE 05K09_atssinnoi.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
;Some macros
#define ATS_NP # 3 # ;number of Partials
#define ATS_DU # 7 # ;duration
instr 1
iatsfile = p11
/*read some ATS data from the file header*/
i_number_of_partials ATSinfo iatsfile, $ATS_NP
i_duration ATSinfo iatsfile, $ATS_DU
print i_number_of_partials
out asig*iamp
endin
</CsInstruments>
<CsScore>
;change to put any ATS file you like
#define ats_file #"female-speech.ats"#
; start dur amp freqdev sinlev noislev npars offset pincr atsfile
i1 0 3.66 .79 0 1 0 0 0 1 $ats_file
;deterministic only
i1 + 3.66 .79 0 0 1 0 0 1 $ats_file
;residual only
i1 + 3.66 .79 0 1 1 0 0 1 $ats_file
;deterministic and residual
; start dur amp freqdev sinlev noislev npars offset pincr atsfile
i1 + 3.66 2.5 0 1 0 80 60 1 $ats_file
;from partial 60 to partial 140, deterministic only
i1 + 3.66 2.5 0 0 1 80 60 1 $ats_file
;from partial 60 to partial 140, residual only
i1 + 3.66 2.5 0 1 1 80 60 1 $ats_file
;from partial 60 to partial 140, deterministic and residual
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
ATScross
ATScross is an opcode that performs a kind of “interpolation” of the amplitude data between
two ATS analyses. One of these two analyses must be obtained using the ATSbufread opcode
(see above), and the other is loaded by the ATScross instance itself. Only the deterministic data
of both analyses is used. The ATS file, time pointer, frequency scaling, number of partials, partial
offset and partial increment arguments work the same way as in the previously described
opcodes. Using the arguments kmylev and kbuflev, the user may define how much of the
amplitude values of the file read by ATSbufread is used to scale the amplitude values
corresponding to the frequency values of the analysis read by ATScross. So, a value of 0 for
kbuflev and 1 for kmylev will leave the ATS analysis read by ATScross unchanged, whilst the
converse (kbuflev = 1 and kmylev = 0) will retain the frequency values of the ATScross analysis
but scale them by the amplitude values of the ATSbufread analysis. As the time pointers of the
two units need not be the same, and frequency warping and number of partials may also be
changed, very complex cross synthesis and sound hybridization can be obtained with this opcode.
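The amplitude mixing can be sketched as a weighted sum per partial — a sketch of the idea described above, with kmylev and kbuflev as the two weights:

```python
def cross_amp(own_amp, buf_amp, kmylev, kbuflev):
    """Blend a partial's own amplitude (from the ATScross analysis)
    with the amplitude of the same partial in the ATSbufread buffer."""
    return own_amp * kmylev + buf_amp * kbuflev

print(cross_amp(0.8, 0.2, 1.0, 0.0))  # 0.8 -> own analysis unchanged
print(cross_amp(0.8, 0.2, 0.0, 1.0))  # 0.2 -> amplitudes taken from the buffer
```

Intermediate settings of the two levels give the gradual cross-synthesis morphing between the two analyses.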
EXAMPLE 05K10_atscross.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
;ATS files
#define ats1 #"flute-A5.ats"#
#define ats2 #"oboe-A5.ats"#
instr 1
iamp = p4 ;general amplitude scaler
out aout*iamp
endin
</CsInstruments>
<CsScore>
e
</CsScore>
</CsoundSynthesizer>
;example by Oscar Pablo Di Liscia
06 A. RECORD AND PLAY SOUNDFILES
1. The csd file and the sound file are in the same directory (folder). This is the simplest
way and gives full flexibility to run the same csd from any other computer, just by copying
the whole folder.
2. The folder which contains the sound file is known to Csound. This can be done with the
option --env:SSDIR+=/path/to/sound/folder. Csound will then add this folder to the Sound
Sample Directory (SSDIR) in which it will look for sound samples.
A path to a sound file can be given not only as an absolute path but also as a relative path. Let
us assume we have this structure for the csd file and the sound file:
1 As this is a matter of speed, it depends both on the complexity of the Csound file(s) you are running and on the speed of the
hard disk. A solid state drive is much faster than a traditional HDD, so a Csound file with a lot of diskin processes may run
fine on an SSD although it did not run on an HDD.
|-home
  |-me
    superloop.csd
    |-Desktop
      loop.wav
The superloop.csd Csound file is not in the same directory as the loop.wav sound file. But relative to
the csd file, the sound is in Desktop, and Desktop is indeed in the same folder as the superloop.csd
file. So we could write this:
aSound diskin "Desktop/loop.wav"
Now assume that, relative to the csd file, loop.wav is not in a subfolder but in the folder samples,
reached by first going up one level to the folder me. So we have to specify the relative path like this: “Go up, then
look into the folder samples.” Going up is written as two dots, so this would be the relative path for
diskin:
aSound diskin "../samples/loop.wav"
Again, we could alternatively use --env:SSDIR+=../samples in the CsOptions and then simply refer
to “loop.wav”.
The first line is the traditional way: we output as many audio signals as the sound file has
channels. Many Csound users will have read this message:
INIT ERROR in instr 1 line 17: diskin2:
number of output args inconsistent with number of file channels
This inconsistency between the number of output arguments and the number of file channels happens if
we use the stereo file “magic.wav” but write:
Since Csound6, however, we have the second option mentioned on Csound’s manual page for
diskin:
ar1[] diskin ...
If the output variable name is followed by square brackets, diskin will write its output in an audio
array.2 The size (length) of this array mirrors the number of channels in the audio file: 1 for a mono
file, 2 for a stereo file, 4 for a quadro file, etc.
This is a very convenient method to avoid the mismatch error between output arguments and file
channels. In the example below we will use this method. We write the audio in an array and will
only use the first element for the output. So this will work with any number of channels for the
input file.
- kpitch specifies the speed of reading the sound file. The default is 1, which means
normal speed. 2 would result in double speed (an octave higher and half the time to read through
the sound file), 0.5 would result in half speed (an octave lower and twice the time needed
for reading). Negative values read backwards. As this is a k-rate parameter, it already offers a lot of
possibilities for modification.
- iskiptim specifies the point in the sound file where reading starts. The default is 0 (= from
the beginning); 2 would mean to skip the first two seconds of the sound file.
- iwraparound answers the question what diskin will do when reading reaches the end of the
file. The default is 0, which means that reading stops. If we put 1 here, diskin will loop
the sound file.
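The relationship between kpitch, transposition and playing time can be made explicit. This small Python sketch (an illustration, not part of diskin) computes the transposition in semitones and the change in reading time for a given speed:

```python
import math

def playback_change(speed):
    """Return (semitones, duration_factor) for a given playback speed.

    speed 2   -> one octave up, half the reading time
    speed 0.5 -> one octave down, twice the reading time
    """
    semitones = 12 * math.log2(abs(speed))
    duration_factor = 1 / abs(speed)
    return semitones, duration_factor
```

The sign of the speed only determines the reading direction; the transposition depends on its magnitude.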
EXAMPLE 06A01_Play_soundfile.csd
<CsoundSynthesizer>
<CsOptions>
-odac --env:SSDIR+=../SourceMaterials
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr Defaults
kSpeed = p4 ; playback speed
iSkip = p5 ; inskip into file (in seconds)
iLoop = p6 ; looping switch (0=off 1=on)
aRead[] diskin gS_file, kSpeed, iSkip, iLoop
out aRead[0], aRead[0] ;output first channel twice
endin
instr Scratch
kSpeed randomi -1, 1.5, 5, 3
aRead[] diskin gS_file, kSpeed, 1, 1
out aRead[0], aRead[0]
endin
</CsInstruments>
<CsScore>
; dur speed skip loop
i 1 0 4 1 0 0 ;default values
i . 4 3 1 1.7 0 ;skiptime
i . 7 6 0.5 0 0 ;speed
i . 13 6 1 0 1 ;loop
i 2 20 20
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy and joachim heintz
EXAMPLE 06A02_Write_soundfile.csd
<CsoundSynthesizer>
<CsOptions>
; audio output destination is given as a sound file (wav format specified)
; this method is for deferred time performance,
; simultaneous real-time audio will not be possible
-oWriteToDisk1.wav -W
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
endin
</CsInstruments>
<CsScore>
; two chords
i 1 0 5 60
i 1 0.1 5 65
i 1 0.2 5 67
i 1 0.3 5 71
i 1 3 5 65
i 1 3.1 5 67
i 1 3.2 5 73
i 1 3.3 5 78
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
EXAMPLE 06A03_Write_RT.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activate real-time audio output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
gaSig init 0; set initial value for global audio variable (silence)
</CsInstruments>
<CsScore>
; activate recording instrument to encapsulate the entire performance
i 2 0 8.3
; two chords
i 1 0 5 60
i 1 0.1 5 65
i 1 0.2 5 67
i 1 0.3 5 71
i 1 3 5 65
i 1 3.1 5 67
i 1 3.2 5 73
i 1 3.3 5 78
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy
06 B. RECORD AND PLAY BUFFERS
One of the newer and easier to use opcodes for this task is flooper2. As its name might suggest
it is intended for the playback of files with looping. flooper2 can also apply a cross-fade between
the end and the beginning of the loop in order to smooth the transition where looping takes place.
In the following example a sound file that has been loaded into a GEN01 function table is played
back using flooper2. The opcode also includes a parameter for modulating playback speed/pitch.
There is also the option of modulating the loop points at k-rate. In this example the entire file
is simply played and looped. As always, you can replace the sound file with one of your own.
Note that GEN01 accepts mono or stereo files; the number of output arguments for flooper2 must
correspond with the mono or stereo table.
EXAMPLE 06B01_flooper2.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activate real-time audio
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
; p4 = pitch
; (sound file duration is 4.224)
i 1 0 [4.224*2] 1
i 1 + [4.224*2] 0.5
i 1 + [4.224*1] 2
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
You will need to have a microphone connected to your computer in order to use this example.
EXAMPLE 06B02_sndloop.csd
<CsoundSynthesizer>
<CsOptions>
; real-time audio in and out are both activated
-iadc -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
; PRINT INSTRUCTIONS
prints "Press 'r' to record, 's' to stop playback, "
prints "'+' to increase pitch, '-' to decrease pitch.\n"
; SENSE KEYBOARD ACTIVITY
kKey sensekey; sense activity on the computer keyboard
aIn inch 1 ; read audio from first input channel
kPitch init 1 ; initialize pitch parameter
iDur init 2 ; inititialize duration of loop parameter
</CsInstruments>
<CsScore>
i 1 0 3600 ; instr 1 plays for 1 hour
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy
EXAMPLE 06B03_RecPlayToTable.csd
<CsoundSynthesizer>
<CsOptions>
; real-time audio in and out are both activated
-iadc -odac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<CsScore>
i 1 0 3600 ; Sense keyboard activity. Start recording - playback.
</CsScore>
</CsoundSynthesizer>
;example written by Iain McCurdy
is mostly meant as an example of how open this field is for different user implementations, and how
easy it is to create one's own applications based on the fundamental functionalities of table reading and
writing.
One way to write compact Csound code is to follow the principle one job per line (of code). For
defining one job of a good size, we will mostly need a UDO which combines some low-level tasks
and also gives this job a memorable name. So often the principle one job per
line results in one UDO per line.
Let us go step by step through this list, before we finally write this instrument in four lines of code.
Step 1 we already did in the previous example; we only wrap the GEN routine in a UDO which gets
the time as input and returns the buffer variable as output. Anything else is hidden.
opcode createBuffer, i, i
ilen xin
ift ftgen 0, 0, ilen*sr, 2, 0
xout ift
endop
Step 2 is the only one which is a normal Csound code line, consisting of the sensekey opcode. Due
to the implementation of sensekey, there should only be one sensekey in a Csound orchestra.
kKey, kDown sensekey
Step 3 consists of two parts. We will write one UDO for each. The first UDO writes to a buffer if it
gets a signal to do so. We choose here a very low-level way of writing an audio signal to a buffer:
instead of creating an index signal, we just increment the index number sample by sample. To continue the process
at the end of the buffer, we apply the modulo operation to the incremented numbers.2
opcode recordBuffer, 0, aik
ain, ift, krec xin
setksmps 1 ;k=a here in this UDO
kndx init 0 ;initialize index
if krec == 1 then
tablew ain, a(kndx), ift
kndx = (kndx+1) % ftlen(ift)
endif
endop
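The wrap-around writing can be modelled in a few lines of Python (a sketch of the same logic, not Csound code): the index is incremented sample by sample and folded back with modulo when it reaches the buffer length.

```python
def record_sample(buf, ndx, sample, rec):
    """Write one sample into the buffer if recording is on;
    wrap the index with modulo at the end of the buffer."""
    if rec:
        buf[ndx] = sample
        ndx = (ndx + 1) % len(buf)
    return ndx
```

Feeding five samples into a four-slot buffer shows the circular behaviour: the fifth sample overwrites the first slot again.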
The second UDO outputs 1 as long as a key is pressed. Its input consists of the ASCII code of the
selected key, and of the output of the sensekey opcode.
opcode keyPressed, k, kki
kKey, kDown, iAscii xin
kPrev init 0 ;previous key value
kOut = (kKey == iAscii || (kKey == -1 && kPrev == iAscii) ? 1 : 0)
kPrev = (kKey > 0 ? kKey : kPrev)
kPrev = (kPrev == kKey && kDown == 0 ? 0 : kPrev)
xout kOut
endop
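The hold/release logic can be simulated in Python to check the expected behaviour. sensekey reports the ASCII code once on key-down, -1 while nothing new happens, and the code again with kDown=0 on release; this sketch mirrors the UDO above, one control cycle per call:

```python
def key_pressed(k_prev, k_key, k_down, ascii_code):
    """One control cycle of the keyPressed logic.
    Returns (out, new_k_prev); out is 1 while the key is held."""
    out = 1 if (k_key == ascii_code or
                (k_key == -1 and k_prev == ascii_code)) else 0
    if k_key > 0:
        k_prev = k_key          # remember the last key that went down
    if k_prev == k_key and k_down == 0:
        k_prev = 0              # release: forget the key
    return out, k_prev
```

Pressing, holding and releasing the r key (ASCII 114) produces 1 for the press, hold and release cycles, then 0.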
2 The symbol for the modulo operation is %. The result is the remainder in a division: 1 % 3 = 1, 4 % 3 = 1, 7 % 3 = 1 etc.
The reading procedure in step 4 is in fact the same as the one used for writing. We only have to replace
the opcode for writing, tablew, with the opcode for reading, table.
opcode playBuffer, a, ik
ift, kplay xin
setksmps 1 ;k=a here in this UDO
kndx init 0 ;initialize index
if kplay == 1 then
aRead table a(kndx), ift
kndx = (kndx+1) % ftlen(ift)
endif
xout aRead
endop
Note that you must disable key repeats on your computer keyboard for the following example
(in CsoundQt, disable “Allow key repeats” in Configuration -> General). Press the r key as long as
you want to record, and the p key for playing back. Both recording and playback are circular.
EXAMPLE 06B04_BufRecPlay_UDO.csd
<CsoundSynthesizer>
<CsOptions>
-i adc -o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr RecPlay
iBuffer = createBuffer(3) ;buffer for 3 seconds of recording
kKey, kDown sensekey
recordBuffer(inch(1), iBuffer, keyPressed(kKey,kDown,114))
out playBuffer(iBuffer, keyPressed(kKey,kDown,112))
endin
</CsInstruments>
<CsScore>
i 1 0 1000
</CsScore>
</CsoundSynthesizer>
;example written by joachim heintz
We mostly use the functional style of writing Csound code here. Instead of the traditional
iBuffer createBuffer 3
we write
iBuffer = createBuffer(3)
To plug the audio signal from channel 1 directly into the recordBuffer UDO, we pass inch(1)
directly as the first input. Similarly, the output of the keyPressed UDO is passed as the third input. For more
information about functional style coding, see chapter 03 I.
For reading multichannel files of more than two channels, the more recent loscilx exists as an
excellent option. It can also be used for mono or stereo, and it can — similar to diskin — write its
output in an audio array.
loscil and loscil3 will only allow looping points to be defined at i-time. lposcil, lposcil3, lposcila,
lposcilsa and lposcilsa2 will allow looping points to be changed at k-rate, while the note is playing.
It is worth not forgetting Csound's more exotic methods of playback of samples stored in function
tables. mincer and temposcal use streaming vocoder techniques to facilitate independent pitch
and time-stretch control during playback (this area is covered more fully in chapter 05 I). sndwarp
and sndwarpst similarly facilitate independent pitch and playback speed control, but through the
technique of granular synthesis; this area is covered in detail in chapter 05 G.
07 A. RECEIVING EVENTS BY MIDIIN
Csound provides a variety of opcodes, such as cpsmidi, ampmidi and ctrl7, which facilitate the
reading of incoming midi data into Csound with minimal fuss. These opcodes allow us to read in
midi information without us having to worry about parsing status bytes and so on. Occasionally
though when more complex midi interaction is required, it might be advantageous for us to scan
all raw midi information that is coming into Csound. The midiin opcode allows us to do this.
In the next example a simple midi monitor is constructed. Incoming midi events are printed to the
terminal with some formatting to make them readable. We can disable Csound’s default instru-
ment triggering mechanism (which in this example we don’t want to use) by writing the line:
massign 0,0
For this example to work you will need to ensure that you have activated live midi input within
Csound by using the -M flag. You will also need to make sure that you have a midi keyboard or
controller connected. You may also want to include the -m128 flag which will disable some of
Csound’s additional messaging output and therefore allow our midi printout to be presented more
clearly.
The status byte tells us what sort of midi information has been received. For example, a value of
144 tells us that a midi note event has been received, a value of 176 tells us that a midi controller
event has been received, a value of 224 tells us that pitch bend has been received, and so on.
The meaning of the two data bytes depends on what sort of status byte has been received. For
example, if a midi note event has been received, then data byte 1 gives us the note number and data
byte 2 gives us the note velocity. If a midi controller event has been received, then data byte 1 gives
us the controller number and data byte 2 gives us the controller value.
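This byte layout can be expressed compactly. The following Python sketch (a hypothetical helper, not a Csound opcode) splits a status byte into message type and channel and passes the data bytes through:

```python
def decode_midi(status, data1, data2):
    """Interpret a raw MIDI channel message (three bytes)."""
    kind = status & 0xF0           # upper nibble: message type
    channel = (status & 0x0F) + 1  # lower nibble: channel 1-16
    names = {0x80: "note_off", 0x90: "note_on",
             0xB0: "controller", 0xE0: "pitch_bend"}
    return names.get(kind, "other"), channel, data1, data2
```

For a note-on, data1 is the note number and data2 the velocity; for a controller message, data1 is the controller number and data2 its value.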
EXAMPLE 07A01_midiin_print.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -m128
; activates all midi devices, suppress note printings
</CsOptions>
<CsInstruments>
; no audio so 'sr' or 'nchnls' aren't relevant
ksmps = 32
massign 0,0
instr 1
kstatus, kchan, kdata1, kdata2 midiin ;read in midi
ktrigger changed kstatus, kchan, kdata1, kdata2 ;trigger if midi data change
if ktrigger=1 && kstatus!=0 then ;if status byte is non-zero...
; -- print midi data to the terminal with formatting --
printks "status:%d%tchannel:%d%tdata1:%d%tdata2:%d%n",
0,kstatus,kchan,kdata1,kdata2
endif
endin
</CsInstruments>
<CsScore>
i 1 0 3600 ; instr 1 plays for 1 hour
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The principal advantage of using the midiin opcode is that, unlike opcodes such as cpsmidi, ampmidi
and ctrl7 which only receive specific midi data types on a specific channel, midiin “listens” to
all incoming data, including system exclusive messages. In situations which require elaborate Csound
instrument triggering mappings that are beyond the capabilities of the default triggering mechanism,
the use of midiin might be beneficial.
07 B. TRIGGERING INSTRUMENT INSTANCES
The following example confirms this default mapping of midi channels to instruments. You will
need a midi keyboard that allows you to change the midi channel on which it is transmitting.
Besides a written confirmation to the console of which instrument is being triggered, there is an
audible confirmation in that instrument 1 plays single pulses, instrument 2 plays sets of two pulses
and instrument 3 plays sets of three pulses. The example does not go beyond three instruments.
If notes are received on midi channel 4 and above, they will be directed to instrument 1, because
corresponding instruments do not exist.
EXAMPLE 07B01_MidiInstrTrigger.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac -m128
;activates all midi devices, real time sound output, suppress note printings
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
</CsInstruments>
<CsScore>
f 0 300
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
EXAMPLE 07B02_massign.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac -m128
; activate all midi devices, real time sound output, suppress note printing
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
</CsInstruments>
<CsScore>
f 0 300
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
massign also has a couple of additional functions that may come in useful. A channel number
of zero is interpreted as meaning any. The following instruction will map notes on any and all
channels to instrument 1.
massign 0,1
An instrument number of zero is interpreted as meaning none so the following instruction will
instruct Csound to ignore triggering for notes received on all channels.
massign 0,0
The above feature is useful when we want to scan midi data from an already active instrument
using the midiin opcode, as we did in EXAMPLE 07A01_midiin_print.csd.
EXAMPLE 07B03_MidiTriggerChain.csd
<CsoundSynthesizer>
<CsOptions>
-Ma
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 2
ichn = int(frac(p1)*100)
inote = round(frac(frac(p1)*100)*1000)
prints "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz, using code of Victor Lazzarini
This example merely demonstrates a technique for passing information about MIDI channel and
note number from the directly triggered instrument to a sub-instrument. A practical application
for this would be for creating keygroups - triggering different instruments by playing in different
regions of the keyboard. In this case you could change just the line:
instrnum = 2 + ichn/100 + inote/100000
to this:
if inote < 48 then
instrnum = 2
elseif inote < 72 then
instrnum = 3
else
instrnum = 4
endif
instrnum = instrnum + ichn/100 + inote/100000
In this case for any key below C3 instrument 2 will be called, for any key between C3 and B4
instrument 3, and for any higher key instrument 4.
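The fractional p1 arithmetic used here can be checked in Python. This sketch mirrors the encoding instrnum = base + ichn/100 + inote/100000 and the int/frac decoding done in instrument 2; the round on the note number protects against floating-point error:

```python
def encode_p1(base, ichn, inote):
    """Pack channel and note number into a fractional instrument number."""
    return base + ichn / 100 + inote / 100000

def decode_p1(p1):
    """Recover (base, channel, note) the way instrument 2 does."""
    frac1 = p1 % 1                            # e.g. 0.0306 for channel 3, note 60
    ichn = int(frac1 * 100)                   # integer part of 3.06
    inote = round((frac1 * 100) % 1 * 1000)   # 0.06 * 1000, rounded
    return int(p1), ichn, inote
```

A round trip through both functions returns the original channel and note number.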
Using this multiple triggering you are also able to trigger more than one instrument at the same
time (which is not possible using the massign opcode). Here is an example using a User Defined
Opcode (see the UDO chapter of this manual):
EXAMPLE 07B04_MidiMultiTrigg.csd
<CsoundSynthesizer>
<CsOptions>
-Ma
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
opcode MidiTrig, 0, io
;triggers the first inum instruments in the function table ifn by a midi event
; with fractional numbers containing channel and note number information
instr 2
ichn = int(frac(p1)*100)
inote = round(frac(frac(p1)*100)*1000)
prints "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
printks "instr %f playing!%n", 1, p1
endin
instr 3
ichn = int(frac(p1)*100)
inote = round(frac(frac(p1)*100)*1000)
prints "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
printks "instr %f playing!%n", 1, p1
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;Example by Joachim Heintz, using code of Victor Lazzarini
07 C. WORKING WITH CONTROLLERS
The following example scans midi controller 1 on channel 1 and prints values received to the con-
sole. The minimum and maximum values are given as 0 and 127 therefore they are not rescaled
at all. Controller 1 is also the modulation wheel on a midi keyboard.
EXAMPLE 07C01_ctrl7_print.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
; activate all MIDI devices
</CsOptions>
<CsInstruments>
; 'sr' and 'nchnls' are irrelevant so are omitted
ksmps = 32
instr 1
kCtrl ctrl7 1,1,0,127 ; read in controller 1 on channel 1
kTrigger changed kCtrl ; if 'kCtrl' changes generate a trigger ('bang')
if kTrigger=1 then
; Print kCtrl to console with formatting, but only when its value changes.
printks "Controller Value: %d%n", 0, kCtrl
endif
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
There are also 14-bit and 21-bit versions of ctrl7 (ctrl14 and ctrl21) which improve upon the 7-bit
resolution of ctrl7, but hardware that outputs 14-bit or 21-bit controller information is rare, so these
opcodes are seldom used.
EXAMPLE 07C02_pchbend_aftouch.csd
<CsoundSynthesizer>
<CsOptions>
-odac -Ma
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
; -- pitch bend --
kPchBnd pchbend 0,4 ;read in pitch bend (range -2 to 2)
kTrig1 changed kPchBnd ;if 'kPchBnd' changes generate a trigger
if kTrig1=1 then
printks "Pitch Bend:%f%n",0,kPchBnd ;print kPchBnd to console when it changes
endif
; -- aftertouch --
kAfttch aftouch 0,0.9 ;read in aftertouch (range 0 to 0.9)
kTrig2 changed kAfttch ;if 'kAfttch' changes generate a trigger
if kTrig2=1 then
printks "Aftertouch:%d%n",0,kAfttch ;print kAfttch to console when it changes
endif
; -- create a sound --
iNum notnum ;read in MIDI note number
; MIDI note number + pitch bend are converted to cycles per seconds
aSig poscil 0.1,cpsmidinn(iNum+kPchBnd),giSine
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
In the following example a simple synthesizer is created. Midi controller 1 controls the output
volume of this instrument, but the initc7 statement near the top of the orchestra ensures that this
control does not default to its minimum setting. The arguments that initc7 takes are midi
channel, controller number and initial value. The initial value is defined within the range 0-1; therefore
a value of 1 will set this controller to its maximum value (midi value 127), a value of 0.5 will set
it to its halfway value (midi value 64), and so on.
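The mapping from the normalised initial value to the 7-bit controller value is simply a scaling by 127. As a quick check (a hypothetical helper, not part of Csound):

```python
def initc7_to_midi(fraction):
    """Convert an initc7 initial value (0-1) to a 7-bit MIDI value (0-127)."""
    return round(fraction * 127)
```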
Additionally this example uses the cpsmidi opcode to scan midi pitch (basically converting midi
note numbers to cycles-per-second) and the ampmidi opcode to scan and rescale key velocity.
EXAMPLE 07C03_cpsmidi_ampmidi.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
; activate all midi inputs and real-time audio output
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
You may hear that this instrument produces clicks as notes begin and end. To find out how
to prevent this, see the section on envelopes with release sensing in chapter 05 A.
- We must be careful not to smooth excessively, otherwise the response of the controller will
become sluggish. Any k-rate compatible lowpass filter can be used for this task, but the
portk opcode is particularly useful as it allows us to define the amount of smoothing as the
time taken to glide half way to a new value, rather than having to specify a cutoff frequency.
Additionally this half time value can be varied at k-rate, an advantage exploited
in the following example.
This example takes the simple synthesizer of the previous example as its starting point. The volume
control, which is assigned to midi controller 1 on channel 1, is passed through a portk filter.
The half time for portk ramps quickly up to its required value of 0.01 through the use of a linseg
statement in the previous line. This ensures that when a new note begins, the volume control
immediately jumps to its required value rather than gliding up from zero, as the portk filter
would otherwise cause it to do. Try this example with the portk half time defined as a constant to hear
the difference. To further smooth the volume control, it is converted to an a-rate variable through
the use of the interp opcode which, as well as performing this conversion, interpolates values in
the gaps between k-cycles.
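portk's half-time behaviour can be sketched as a one-pole smoother. In this Python model (an approximation of the opcode's behaviour, with assumed names), the remaining distance to the target halves every half_time seconds:

```python
def portk_step(current, target, half_time, dt):
    """Advance the smoothed value by one control period dt."""
    if half_time <= 0:
        return target                  # no smoothing: jump immediately
    return target + (current - target) * 0.5 ** (dt / half_time)
```

Ramping half_time up from 0 at note start, as the linseg line does, makes the very first step jump straight to the target value.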
EXAMPLE 07C04_smoothing.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
iCps cpsmidi ;read in midi pitch in cycles-per-second
iAmp ampmidi 1 ;read in note velocity - re-range 0 to 1
kVol ctrl7 1,1,0,1 ;read in CC 1, chan. 1. Re-range from 0 to 1
kPortTime linseg 0,0.001,0.01 ;create a value that quickly ramps up to .01
kVol portk kVol,kPortTime ;create a filtered version of kVol
aVol interp kVol ;create an a-rate version of kVol
aSig poscil iAmp*aVol,iCps,giSine
out aSig
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
All of the techniques introduced in this section are combined in the final example which includes
a 2-semitone pitch bend and tone control which is controlled by aftertouch. For tone generation
this example uses the gbuzz opcode.
EXAMPLE 07C05_MidiControlComplex.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
iNum notnum ; read in midi note number
iAmp ampmidi 0.1 ; read in note velocity - range 0 to 0.2
kVol ctrl7 1,1,0,1 ; read in CC 1, chn 1. Re-range from 0 to 1
kPortTime linseg 0,0.001,0.01 ; create a value that quickly ramps up to 0.01
kVol portk kVol, kPortTime ; create filtered version of kVol
aVol interp kVol ; create an a-rate version of kVol.
iRange = 2 ; pitch bend range in semitones
iMin = 0 ; equilibrium position
kPchBnd pchbend iMin, 2*iRange ; pitch bend in semitones (range -2 to 2)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
A more efficient approach is to store values only when they change and to time-stamp those events
so that they can be replayed later in the right order and at the right speed. In this case data will
be written to a function table in pairs: a time stamp followed by a value for each new event (event
refers to the moment a controller changes). This method does not store the duration of each event, merely
when it happens, so it will not record how long the final event lasts before recording stopped.
This may or may not be critical, depending on how the recorded controller data is used later on, but
in order to get around this, the following example stores the duration of the complete recording at
index location 0 so that we can derive the duration of the last event. Additionally, the first event,
stored at index location 1, is simply a value: the initial value of the controller (the time stamp for
this would always be zero anyway). Thereafter events are stored as time-stamped pairs of data:
index 2 = time stamp, index 3 = associated value, and so on.
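The storage scheme can be prototyped in Python. This sketch (hypothetical helper functions mirroring the table layout just described: total duration at index 0, initial value at index 1, then time/value pairs) records only changes and replays them by time:

```python
def record(values, dt):
    """Build the table from one controller value per control period dt."""
    table = [0.0, values[0]]        # index 0: duration, index 1: initial value
    prev = values[0]
    for i, v in enumerate(values[1:], start=1):
        if v != prev:               # store only changes...
            table += [i * dt, v]    # ...as time-stamp/value pairs
            prev = v
    table[0] = len(values) * dt     # total recording duration
    return table

def replay(table, t):
    """Return the controller value at time t."""
    val, i = table[1], 2
    while i + 1 < len(table) and table[i] <= t:
        val, i = table[i + 1], i + 2
    return val
```

A constant stretch of input produces no new pairs, which is exactly the saving over sampling the controller on every k-cycle.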
To use the following example, activate Record, move the slider around and then deactivate Record.
This gesture can now be replayed using the Play button. As well as moving the GUI slider, a tone
is produced, the pitch of which is controlled by the slider.
Recorded data in the GEN table can also be backed up onto the hard drive using ftsave and recalled
in a later session using ftload. Note that ftsave also has the capability of storing multiple function
tables in a single file.
EXAMPLE 07C06_RecordingController.csd
<CsoundSynthesizer>
<CsOptions>
-odac -dm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8
nchnls = 1
0dbfs = 1
opcode RecordController,0,Ki
kval,ifn xin
i_ ftgen 1,0,ftlen(ifn),-2,0 ; erase table
tableiw i(kval),1,ifn ; write initial value at index 1.
;(Index 0 will be used for storing the complete gesture duration.)
kndx init 2 ; Initialise index
kTime timeinsts ; time since this instrument started in seconds
; Write a data event only when the input value changes
if changed(kval)==1 && kndx<=(ftlen(ifn)-2) && kTime>0 then
; Write timestamp to table location defined by current index.
tablew kTime, kndx, ifn
; Write slider value to table location defined by current index.
tablew kval, kndx + 1, ifn
; Increment index 2 steps (one for time, one for value).
kndx = kndx + 2
endif
; sense note release
krel release
; if we are in the final k-cycle before the note ends
if(krel==1) then
; write total gesture duration into the table at index 0
tablew kTime,0,ifn
endif
endop
opcode PlaybackController,k,i
ifn xin
; read first value
; initial controller value read from index 1
ival table 1,ifn
; initial value for k-rate output
kval init ival
; Initialise index to first non-zero timestamp
kndx init 2
; time in seconds since this note started
kTime timeinsts
; first non-zero timestamp
iTimeStamp tablei 2,ifn
; initialise k-variable for first non-zero timestamp
kTimeStamp init iTimeStamp
; if we have reached the timestamp value...
if kTime>=kTimeStamp && kTimeStamp>0 then
; ...Read value from table defined by current index.
kval table kndx+1,ifn
kTimeStamp table kndx+2,ifn ; Read next timestamp
endin
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
07 D. READING MIDI FILES
Instead of using either the standard Csound score or live midi events as input for an orchestra,
Csound can read a midi file and use the data contained within it as if it were live midi input.
The command line flag to instigate reading from a midi file is -F, followed by the name of the file
or the complete path to the file if it is not in the same directory as the .csd file. Midi channels will
be mapped to instruments according to the rules and options discussed in Triggering Instrument
Instances, and all controllers can be interpreted as desired using the techniques discussed in
Working with Controllers.
The following example plays back a midi file using Csound's fluidsynth family of opcodes to
facilitate playing soundfonts (sample libraries). For more information on these opcodes please
consult the Csound Reference Manual. In order to run the example you will need to download a
midi file and two (ideally contrasting) soundfonts. Adjust the references to these files in the
example accordingly. Free midi files and soundfonts are readily available on the internet. I am
suggesting that you use contrasting soundfonts, such as a marimba and a trumpet, so that you can
easily hear the parsing of midi channels in the midi file to different Csound instruments. In the
example, channels 1, 3, 5, 7, 9, 11, 13 and 15 play back using soundfont 1 and channels 2, 4, 6, 8,
10, 12, 14 and 16 play back using soundfont 2. When using fluidsynth in Csound we normally use
an always-on instrument to gather all the audio from the various soundfonts (in this example
instrument 99), which also conveniently keeps performance going while our midi file plays back.
EXAMPLE 07D01_ReadMidiFile.csd
<CsoundSynthesizer>
<CsOptions>
;'-F' flag reads in a midi file
-F AnyMIDIfile.mid
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; start the fluidsynth engine
giEngine fluidEngine
; load two contrasting soundfonts (adjust the file names accordingly)
giSf1 fluidLoad "soundfont_1.sf2", giEngine, 1
giSf2 fluidLoad "soundfont_2.sf2", giEngine, 1
; odd-numbered midi channels use soundfont 1, even-numbered channels soundfont 2
iChn = 1
while iChn <= 16 do
 fluidProgramSelect giEngine, iChn, (iChn % 2 == 1 ? giSf1 : giSf2), 0, 0
 iChn += 1
od
; all midi channels trigger instrument 1
massign 0, 1

instr 1 ; play a note on the soundfont assigned to this channel
iChn midichn ; midi channel of this note
iKey notnum ; midi note number
iVel veloc 0, 127 ; velocity
fluidNote giEngine, iChn, iKey, iVel
endin

instr 99 ; always on - gathers and outputs the audio from the soundfonts
aL, aR fluidOut giEngine
outs aL, aR
endin
</CsInstruments>
<CsScore>
i 99 0 3600 ; keep performance going while the midi file plays back
</CsScore>
</CsoundSynthesizer>
Midi file input can be combined with other Csound inputs from the score or from live midi. Also
bear in mind that a midi file doesn't need to contain midi note events; it could instead contain,
for example, a sequence of controller data used to automate parameters of effects during a live
performance.
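As a sketch of this idea (the midi channel, controller number and cutoff range here are arbitrary choices), a controller track in the midi file could sweep a filter:

```csound
instr 1
 ; controller 1 on midi channel 1, rescaled to a cutoff range in Hz
 kCutoff ctrl7 1, 1, 50, 5000
 aSig vco2 0.2, 110 ; a static sawtooth drone
 aFlt moogladder aSig, kCutoff, 0.5
 outs aFlt, aFlt
endin
```

With the midi file loaded via -F, the recorded controller curve now automates the filter exactly as it would from a live controller.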
Rather than directly playing back a midi file using Csound instruments, it might be useful to import
midi note events as a standard Csound score. This way events can be edited within the Csound
editor, or several scores can be combined. The following example takes a midi file as input and
outputs standard Csound .sco files of the events contained therein. For convenience each midi
channel is output to a separate .sco file, therefore up to 16 .sco files will be created. Multiple .sco
files can later be recombined by using #include statements or simply by using copy and paste.
The only tricky aspect of this example is that note-ons followed by note-offs need to be sensed and
calculated as p3 duration values. This is implemented by sensing the note-off using the release
opcode and at that moment triggering a note in another instrument with the required score data.
It is this second instrument that is responsible for writing this data to a score file. Midi channels
are rendered as p1 values, midi note numbers as p4 and velocity values as p5.
EXAMPLE 07D02_MidiToScore.csd
<CsoundSynthesizer>
<CsOptions>
; enter name of input midi file
-F InputMidiFile.mid
</CsOptions>
<CsInstruments>
massign 0,1
instr 1
iChan midichn
iCps cpsmidi ; read pitch in frequency from midi notes
iVel veloc 0, 127 ; read in velocity from midi notes
kDur timeinsts ; running total of duration of this note
kRelease release ; sense when note is ending
if kRelease=1 then ; if note is about to end
; p1 p2 p3 p4 p5 p6
event "i", 2, 0, kDur, iChan, iCps, iVel ; send full note data to instr 2
endif
endin
instr 2
iDur = p3
iChan = p4
iCps = p5
iVel = p6
iStartTime times ; read current time since the start of performance
; form file name for this channel (1-16) as a string variable
SFileName sprintf "Channel%d.sco",iChan
; write a line to the score for this channel's .sco file
fprints SFileName, "i%d\\t%f\\t%f\\t%f\\t%d\\n",\
iChan,iStartTime-iDur,iDur,iCps,iVel
endin
</CsInstruments>
<CsScore>
f 0 480 ; ensure this duration is as long as or longer than the duration of the midi file
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The example above ignores continuous controller data, pitch bend and aftertouch. The second
example on the page in the Csound Manual for the opcode fprintks renders all midi data to a score
file.
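Once written, the per-channel .sco files can be recombined in a master score, for example:

```csound
; master score combining the events of midi channels 1 and 2
#include "Channel1.sco"
#include "Channel2.sco"
```

This score can then be rendered by any orchestra whose instrument numbers match the midi channels.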
07 E. MIDI OUTPUT
Csound’s ability to output midi data in real-time can open up many possibilities. We can relay
the Csound score to a hardware synthesizer so that it plays the notes in our score, instead of a
Csound instrument. We can algorithmically generate streams of notes within the orchestra and
have these played by the external device. We could even route midi data internally to another piece
of software. Csound could be used as a device to transform incoming midi data, transposing or
arpeggiating notes before they are output again. Midi output could also
be used to preset faders on a motorized fader box to desired initial locations.
midiout - Outputting Raw MIDI Data
Another thing we need to be aware of is that midi notes do not contain any information about
note duration; instead the device playing the note waits until it receives a corresponding note-off
instruction on the same midi channel and with the same note number before stopping the note.
We must be mindful of this when working with midiout. The status byte for a midi note-off is
128 but it is more common for note-offs to be expressed as a note-on (status byte 144) with zero
velocity. In the following example two notes (and corresponding note-offs) are sent to the midi
output - the first note-off makes use of the zero-velocity convention whereas the second makes
use of the note-off status byte. Hardware and software synths should respond similarly to both.
One advantage of the note-off message using status byte 128 is that we can also send a note-off
velocity, i.e. how forcefully we release the key. Only more expensive midi keyboards actually
sense and send note-off velocity, and it is even rarer for hardware to respond to received note-off
velocities in a meaningful way. Using Csound as a sound engine, however, we could respond to this
data in a creative way.
In order for the following example to work you must connect a midi sound module or keyboard
receiving on channel 1 to the midi output of your computer. You will also need to set the appropriate
device number after the -Q flag.
No use is made of audio so sample rate (sr) and number of channels (nchnls) are left undefined -
nonetheless they will assume default values.
EXAMPLE 07E01_midiout.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
ksmps = 32 ;no audio so sr and nchnls irrelevant
instr 1
; arguments for midiout are read from p-fields
istatus init p4
ichan init p5
idata1 init p6
idata2 init p7
midiout istatus, ichan, idata1, idata2; send raw midi data
turnoff ; turn instrument off to prevent reiterations of midiout
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4 p5 p6 p7
i 1 0 0.01 144 1 60 100 ; note on
i 1 2 0.01 144 1 60 0 ; note off (using velocity zero)
i 1 3 0.01 144 1 60 100 ; note on
i 1 5 0.01 128 1 60 100 ; note off (using note-off status byte)
f 0 7 ; extend performance time
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The use of separate score events for note-ons and note-offs is rather cumbersome. It would be
more sensible to use the Csound note duration (p3) to define when the midi note-off is sent. The
next example does this by utilising a release flag generated by the release opcode whenever a note
ends and sending the note-off then.
EXAMPLE 07E02_score_to_midiout.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
ksmps = 32 ;no audio so sr and nchnls omitted
instr 1
;arguments for midiout are read from p-fields
istatus init p4
ichan init p5
idata1 init p6
idata2 init p7
kskip init 0
if kskip=0 then
midiout istatus, ichan, idata1, idata2; send raw midi data (note on)
kskip = 1; ensure that the note on will only be executed once
endif
krelease release; normally output is zero, on final k pass output is 1
if krelease=1 then; i.e. if we are on the final k pass...
midiout istatus, ichan, idata1, 0; send raw midi data (note off)
endif
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4 p5 p6 p7
i 1 0 4 144 1 60 100
i 1 1 3 144 1 64 100
i 1 2 2 144 1 67 100
f 0 5; extending performance time prevents note-offs from being lost
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Obviously midiout is not limited to sending only midi note information; this information could
include continuous controller information, pitch bend, system exclusive data and so on. The next
example, as well as playing a note, sends controller 1 (modulation) data which rises from zero to
maximum (127) across the duration of the note. To ensure that unnecessary midi data is not sent
out, the output of the line function is first converted into integers, and midiout for the continuous
controller data is only executed whenever this integer value changes. The function that creates
this stream of data goes slightly above the maximum value (it finishes at a value of 127.1) to
ensure that a rounded value of 127 is actually achieved.
In practice it may be necessary to start sending the continuous controller data slightly before the
note-on to allow the hardware time to respond.
EXAMPLE 07E03_midiout_cc.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
ksmps = 32 ; no audio so sr and nchnls irrelevant
instr 1
; play a midi note
; read in values from p-fields
ichan init p4
inote init p5
iveloc init p6
kskip init 0 ; 'skip' flag ensures that note-on is executed just once
if kskip=0 then
midiout 144, ichan, inote, iveloc; send raw midi data (note on)
kskip = 1 ; flip flag to prevent repeating the above line
endif
krelease release ; normally zero, on final k pass this will output 1
if krelease=1 then ; if we are on the final k pass...
midiout 144, ichan, inote, 0 ; send a note off
endif
imod init p7 ; controller number (1 = modulation)
; send controller data rising from 0 to just above 127 across the note
kCC line 0, p3, 127.1
kCC = int(kCC) ; convert to integers to avoid unnecessary midi data
ktrig changed kCC ; outputs 1 whenever the integer value changes
if ktrig==1 then
midiout 176, ichan, imod, kCC ; status byte 176 = control change
endif
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4 p5 p6 p7
i 1 0 5 1 60 100 1
f 0 7 ; extending performance time prevents note-offs from being lost
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
midion - Outputting MIDI Notes Made Easier
The midion opcode makes the generation of midi notes much easier: it sends a note-on when the
Csound note begins and automatically sends the corresponding note-off when the Csound note
ends, as the following example demonstrates.
EXAMPLE 07E04_midion.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
instr 1
; read values in from p-fields
kchn = p4
knum = p5
kvel = p6
midion kchn, knum, kvel ; send a midi note
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4 p5 p6
i 1 0 2.5 1 60 100
i 1 0.5 2 1 64 100
i 1 1 1.5 1 67 100
i 1 1.5 1 1 72 100
f 0 30 ; extending performance time prevents note-offs from being missed
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Changing any of midion’s k-rate input arguments in realtime will force it to stop the current midi
note and send out a new one with the new parameters.
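A minimal sketch of this behaviour (channel, pitches and rate here are arbitrary choices): stepping a k-rate note number through a pattern makes midion retrigger on every change.

```csound
instr 1
 ; four-note pattern; every change of knum stops the sounding
 ; midi note and sends a new one
 kArp[] fillarray 60, 64, 67, 72
 knum = kArp[int(phasor:k(2)*4)]
 midion 1, knum, 100 ; channel 1, velocity 100
endin
```

Because phasor cycles twice per second and the index steps through four array positions, this produces a continuous eight-notes-per-second arpeggio with no gaps between notes.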
midion2 allows us to control when new notes are sent (and the current note is stopped) through
the use of a trigger input. The next example uses midion2 to algorithmically generate a melodic
line. New note generation is controlled by a metro, the rate of which undulates slowly through the
use of a randomi function.
EXAMPLE 07E05_midion2.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
instr 1
; read values in from p-fields
kchn = p4
knum random 48,72.99 ; note numbers chosen randomly across a 2-octave range
kvel random 40, 115 ; velocities are chosen randomly
krate randomi 1,2,1 ; rate at which new notes will be output
ktrig metro krate^2 ; 'new note' trigger
midion2 kchn, int(knum), int(kvel), ktrig ; send midi note if ktrig=1
endin
</CsInstruments>
<CsScore>
i 1 0 20 1
f 0 21 ; extending performance time prevents the final note-off being lost
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
midion and midion2 generate monophonic melody lines with no gaps between notes.
moscil works in a slightly different way and allows us to explicitly define note durations as well as
the pauses between notes thereby permitting the generation of more staccato melodic lines. Like
midion and midion2, moscil will not generate overlapping notes (unless two or more instances of
it are concurrent). The next example algorithmically generates a melodic line using moscil.
EXAMPLE 07E06_moscil.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
instr 1
; read value in from p-field
kchn = p4
knum random 48,72.99 ; note numbers chosen randomly across a 2-octave range
kvel random 40, 115 ; velocities are chosen randomly
kdur random 0.2, 1 ; note durations chosen randomly from 0.2 to 1
kpause random 0, 0.4 ; pauses betw. notes chosen randomly from 0 to 0.4
moscil kchn, knum, kvel, kdur, kpause ; send a stream of midi notes
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4
i 1 0 20 1
f 0 21 ; extending performance time prevents final note-off from being lost
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
MIDI File Output
As well as (or instead of) sending realtime midi to a device, Csound can record its midi output to
a midi file using the --midioutfile flag. For example, the flags

-Q2 --midioutfile=midiout.mid

will simultaneously stream realtime midi to midi output device number 2 and render to a file named
midiout.mid which will be saved in our home directory.
08 A. OPEN SOUND CONTROL
Open Sound Control (OSC) offers a flexible and dynamic alternative to MIDI. It uses modern
network communications, usually based on the user datagram protocol (UDP), and allows not only
communication between synthesizers but also between applications and remote computers.
An OSC message must specify the type(s) of its argument(s). This is a list of all types which are
available in Csound, and the signifier which Csound uses for each type:
audio                  a
character              c
double                 d
float                  f
long integer (64-bit)  h
integer (32-bit)       i
string                 s
array (scalar)         A
table                  G
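As a quick sketch of how these specifiers are used (the port number and address below are arbitrary choices), a double could be sent and received like this:

```csound
giPortHandle OSCinit 47130

instr Send
 kTrig init 1 ; constant trigger: exactly one message is sent
 OSCsend kTrig, "", 47130, "/exmp/double", "d", 1.23456789
endin

instr Receive
 kVal init 0
 kGotIt OSClisten giPortHandle, "/exmp/double", "d", kVal
 if kGotIt == 1 then
  printks "received %f\n", 0, kVal
 endif
endin
```

The same pattern applies to every type in the table: only the specifier string and the matching variable type change.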
Once data types are declared, messages can be sent and received. In OSC terminology, anything
that sends a message is a client, and anything that receives a message is a server. Csound can
be both. Usually it will communicate with another application either as client or as server. It can,
for instance, receive data from Processing, or it can send data to Inscore.
Sending and Receiving Different Data Types
Send/Receive an integer
EXAMPLE 08A01_OSC_send_recv_int.csd
<CsoundSynthesizer>
<CsOptions>
-m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giPortHandle OSCinit 47120
instr Send
kSendTrigger = 1
kSendValue = 17
OSCsend kSendTrigger, "", 47120, "/exmp_1/int", "i", kSendValue
endin
instr Receive
kReceiveValue init 0
kGotIt OSClisten giPortHandle, "/exmp_1/int", "i", kReceiveValue
if kGotIt == 1 then
printf "Message Received for '%s' at time %f: kReceiveValue = %d\n", 1, "/exmp_1/int", times:k(), kReceiveValue
endif
endin
</CsInstruments>
<CsScore>
i "Receive" 0 3 ;start listening process first
i "Send" 1 1 ;then after one second send message
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
To understand the main functionalities of OSC in Csound, we will look more closely at what
happens in the code.
OSCinit
giPortHandle OSCinit 47120
The OSCinit statement is necessary for the OSClisten opcode. It takes a port number as input
argument and returns a handle, called giPortHandle in this case. This statement is usually placed
in the global space.
OSCsend
kSendTrigger = 1
kSendValue = 17
OSCsend kSendTrigger, "", 47120, "/exmp_1/int", "i", kSendValue
The OSCsend opcode will send a message whenever kSendTrigger changes its value. As this
variable is set here to a fixed number, only one message will be sent. The second input for OSCsend
is the host to which the message is being sent; an empty string means "localhost" ("127.0.0.1").
The third argument is the port number, here 47120, followed by the destination address string, here
"/exmp_1/int". As we are sending an integer here, the type specifier "i" follows as fifth argument,
followed by the value itself.
OSClisten
kReceiveValue init 0
kGotIt OSClisten giPortHandle, "/exmp_1/int", "i", kReceiveValue
On the receiver side, we find the giPortHandle which was returned by OSCinit, and the address
string again, as well as the expected type, here "i" for integer. Note that the received value appears
on the input side of the opcode, so kReceiveValue must be initialized beforehand. Whenever
OSClisten receives a message, the kGotIt output variable becomes 1 (otherwise it is zero).
if kGotIt == 1 then
printf "Message Received for '/exmp_1/int' at time %f: \
kReceiveValue = %d\n", 1, times:k(), kReceiveValue
endif
Here we catch this point, and get a printout with the time at which the message has been received.
As our listening instrument starts first, and the sending instrument after one second, we will see
a message like this one in the console:
Message Received for '/exmp_1/int' at time 1.002086: kReceiveValue = 17
Note that the time at which the message is received is necessarily slightly later than the time at
which it was sent. The difference is usually a few milliseconds, depending on the UDP transmission.
Send/Receive a float and a string
EXAMPLE 08A02_OSC_more_data.csd
<CsoundSynthesizer>
<CsOptions>
-m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giPortHandle OSCinit 47120
instr Send
kSendTrigger = 1
kFloat = 1.23456789
Sstring = "bla bla"
OSCsend kSendTrigger, "", 47120, "/exmp_2/more", "fs", kFloat, Sstring
endin
instr Receive
kReceiveFloat init 0
SReceiveString init ""
kGotIt OSClisten giPortHandle, "/exmp_2/more", "fs", kReceiveFloat, SReceiveString
if kGotIt == 1 then
printf "kReceiveFloat = %f\nSReceiveString = '%s'\n", 1, kReceiveFloat, SReceiveString
endif
endin
</CsInstruments>
<CsScore>
i "Receive" 0 3 ;start listening process first
i "Send" 1 1 ;then after one second send message
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Send/Receive arrays
Instead of single values, OSC can also send and receive collections of data. The next example
shows how an array is sent once a second and transformed at each metro tick.
EXAMPLE 08A03_Send_receive_array.csd
<CsoundSynthesizer>
<CsOptions>
-m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giPortHandle OSCinit 47120
instr Send
kSendTrigger init 0
kArray[] fillarray 1, 2, 3, 4, 5, 6, 7
if metro(1)==1 then
kSendTrigger += 1
kArray *= 2
endif
OSCsend kSendTrigger, "", 47120, "/exmp_3/array", "A", kArray
endin
instr Receive
kReceiveArray[] init 7
kGotIt OSClisten giPortHandle, "/exmp_3/array", "A", kReceiveArray
if kGotIt == 1 then
printarray kReceiveArray
endif
endin
</CsInstruments>
<CsScore>
i "Receive" 0 3
i "Send" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Each time the metro ticks, the array values are multiplied by two. So the printout is:
2.0000 4.0000 6.0000 8.0000 10.0000 12.0000 14.0000
4.0000 8.0000 12.0000 16.0000 20.0000 24.0000 28.0000
8.0000 16.0000 24.0000 32.0000 40.0000 48.0000 56.0000
Send/Receive tables
Function tables can be sent and received in a similar way, using the "G" type specifier.
EXAMPLE 08A04_Send_receive_table.csd
<CsoundSynthesizer>
<CsOptions>
-m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giPortHandle OSCinit 47120
; three tables with different waveforms, numbered consecutively
giTable_1 ftgen 1, 0, 1024, 10, 1
giTable_2 ftgen 2, 0, 1024, 10, 1, 0, 1/3, 0, 1/5
giTable_3 ftgen 3, 0, 1024, 10, 1, 1/2, 1/3, 1/4
instr Send
kSendTrigger init 1
kTable init giTable_1
kTime init 0
OSCsend kSendTrigger, "", 47120, "/exmp_4/table", "G", kTable
if timeinsts() >= kTime+1 then
kSendTrigger += 1
kTable += 1
kTime = timeinsts()
endif
endin
instr Receive
iReceiveTable ftgen 0, 0, 1024, 2, 0
kGotIt OSClisten giPortHandle, "/exmp_4/table", "G", iReceiveTable
aOut poscil .2, 400, iReceiveTable
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Receive" 0 3
i "Send" 0 3
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Send/Receive audio
It is also possible to send and receive an audio signal via OSC. In this case, an OSC message must
be sent on each k-cycle. Remember though that OSC is not optimized for this task. Most probably
you will hear some dropouts in the next example. (Larger ksmps values should give better results.)
EXAMPLE 08A05_send_receive_audio.csd
<CsoundSynthesizer>
<CsOptions>
-m 128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
giPortHandle OSCinit 47120
instr Send
kSendTrigger init 1
aSend poscil .2, 400
OSCsend kSendTrigger, "", 47120, "/exmp_5/audio", "a", aSend
kSendTrigger += 1
endin
instr Receive
aReceive init 0
kGotIt OSClisten giPortHandle, "/exmp_5/audio", "a", aReceive
out aReceive, aReceive
endin
</CsInstruments>
<CsScore>
i "Receive" 1 3
i "Send" 0 5
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Practical Examples with Processing
To start a video in Processing from Csound, we can send a simple OSC message from a Csound
instrument:
instr StartVideo
OSCsend(1,"",12000,"/launch2/start","i",1)
endin
This means that we send the integer 1 to the address "/launch2/start" on port 12000.
To receive this message in Processing, we import the oscP5 library and create a new OscP5
instance which listens to port 12000:
import oscP5.*;
OscP5 oscP5;
oscP5 = new OscP5(this,12000);
Then we use the plug() method, which passes the OSC messages of a certain address (here
"/launch2/start") to a method with a user-defined name. We call it startVideo here:
oscP5.plug(this,"startVideo","/launch2/start");
All we have to do now is wrap Processing's video play() message in this startVideo method.
This is the full example code, referring to the video "launch2.mp4" which can be found in the
Processing examples:
//import the video and osc library
import processing.video.*;
import oscP5.*;
//create objects
OscP5 oscP5;
Movie movie;
void setup() {
size(560, 406);
background(0);
// load the video
movie = new Movie(this,"launch2.mp4");
//receive OSC
oscP5 = new OscP5(this,12000);
//pass the message to the startVideo method
oscP5.plug(this,"startVideo","/launch2/start");
}
//show it
void draw() {
image(movie, 0, 0, width, height);
}
//play the video when the OSC message arrives
void startVideo(int theValue) {
movie.play();
}
//read new frames from the video
void movieEvent(Movie m) {
m.read();
}
To start the video by hitting any key we can write something like this on the Csound side:
instr ReceiveKey
kKey, kPressed sensekey
if kPressed==1 then
schedulek("StartVideo",0,1)
endif
endin
schedule("ReceiveKey",0,-1)
instr StartVideo
OSCsend(1,"",12000,"/launch2/start","i",1)
endin
To send OSC messages from Processing to Csound, we use the send() method of the oscP5 object,
as oscP5.send(message, myRemoteLocation), where myRemoteLocation is a NetAddress which is
created by the netP5 library. This is the code for sending an integer count via OSC whenever the
mouse is pressed. The message is sent on port 12002 to the address "/P5/pressed".
//import oscP5 and netP5 libraries
import oscP5.*;
import netP5.*;
//initialize objects
OscP5 oscP5;
NetAddress myRemoteLocation;
int count;
void setup(){
size(600, 400);
//create OSC object, listening at port 12001
oscP5 = new OscP5(this,12001);
//create NetAddress for sending: localhost at port 12002
myRemoteLocation = new NetAddress("127.0.0.1",12002);
//initialize variable for count
count = 0;
}
void mousePressed(){
//create new OSC message when mouse is pressed
OscMessage pressed = new OscMessage("/P5/pressed");
//add count to the message
count += 1;
pressed.add(count);
//send it
oscP5.send(pressed, myRemoteLocation);
}
void draw(){
}
The following Csound code receives the messages and prints the count numbers:
EXAMPLE 08A06_receive_mouse_pressed.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr ReceiveOSC
iPort = OSCinit(12002)
kAns, kMess[] OSClisten iPort, "/P5/pressed", "i"
printf "Mouse in Processing pressed %d times!\n", kAns, kMess[0]
endin
schedule("ReceiveOSC",0,-1)
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
To move the four lines depending on the mouse position, we write:
xpos1 += mx/16;
xpos2 += mx/60;
xpos3 -= mx/18;
xpos4 -= mx/64;
And we send the x-positions of the four lines via the address “/P5/xpos” in this way:
//create OSC message and add the four x-positions
OscMessage xposMessage = new OscMessage("/P5/xpos");
xposMessage.add(xpos1);
xposMessage.add(xpos2);
xposMessage.add(xpos3);
xposMessage.add(xpos4);
//send it
oscP5.send(xposMessage, myRemoteLocation);
void setup(){
size(640, 360);
noStroke();
xpos1 = width/2;
xpos2 = width/2;
xpos3 = width/2;
xpos4 = width/2;
//create OSC object, listening at port 12001
oscP5 = new OscP5(this,12001);
//create NetAddress for sending: localhost at port 12002
myRemoteLocation = new NetAddress("127.0.0.1",12002);
}
void draw(){
//create movement depending on mouse position
background(0);
float mx = mouseX * 0.4 - width/5.0;
fill(102);
rect(xpos2, 0, thick, height/2);
rect(xpos4, height/2, thick, height/2);
fill(204);
rect(xpos1, 0, thin, height/2);
rect(xpos3, height/2, thin, height/2);
xpos1 += mx/16;
xpos2 += mx/60;
xpos3 -= mx/18;
xpos4 -= mx/64;
if(xpos1 < -thin) { xpos1 = width; }
if(xpos1 > width) { xpos1 = -thin; }
if(xpos2 < -thick) { xpos2 = width; }
if(xpos2 > width) { xpos2 = -thick; }
if(xpos3 < -thin) { xpos3 = width; }
if(xpos3 > width) { xpos3 = -thin; }
if(xpos4 < -thick) { xpos4 = width; }
if(xpos4 > width) { xpos4 = -thick; }
On the Csound side, we receive the four x-positions on "/P5/xpos" as floating point numbers and
write them into the global array gkPos.
Each line in the Processing sketch is played by one instance of instrument “Line” in this way:
- the thick lines have a lower pitch than the thin lines
- in the middle of the canvas the sounds are louder (-20 dB compared to -40 dB at the borders)
- in the middle of the canvas the pitch is higher (one octave compared to the left/right)
To achieve this, we build a function table iTriangle with 641 points (one per x-pixel of the
640-pixel-wide Processing canvas, plus one), containing a straight line rising from zero at the
left edge to one in the middle, and falling back to zero at the right edge.
The incoming x-position is used for both the volume (dB) and the pitch (MIDI). A vibrato is added
to the sine waves (smaller but faster for higher pitches), and the panning reflects the position of
the lines between left and right.
EXAMPLE 08A07_P5_Csound_OSC.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr GetLinePositions
//initialize port to receive OSC messages
iPort = OSCinit(12002)
//write the four x-positions in the global array gkPos
kAns,gkPos[] OSClisten iPort, "/P5/xpos", "ffff"
//call four instances of the instrument Line
indx = 0
while indx < 4 do
schedule("Line",0,9999,indx+1)
indx += 1
od
endin
schedule("GetLinePositions",0,-1)
instr Line
iLine = p4 //line 1-4 in processing
kPos = gkPos[iLine-1]
//table for volume (db) and pitch (midi)
iTriangle = ftgen(0,0,641,-7,0,320,1,320,0)
//volume is -40 db left/right and -20 in the middle of the screen
kVolDb = tablei:k(kPos,iTriangle)*30 - 40
//reduce by 4 db for the higher pitches (line 1 and 3)
kVolDb = (iLine % 2 != 0) ? kVolDb-4 : kVolDb
//base pitch is midi 50 for thick and 62 for thin lines
iMidiBasePitch = (iLine % 2 == 0) ? 50 : 62
//pitch is one octave plus iLine higher in the middle
kMidiPitch = table:k(kPos,iTriangle)*12 + iMidiBasePitch+iLine
//vibrato depending on line number
kVibr = randi:k(iLine/4, 100/iLine,2)
//generate sound and apply gentle fade in
aSnd = poscil:a(ampdb(kVolDb),mtof:k(kMidiPitch+kVibr))
aSnd *= linseg:a(0,1,1)
//panning follows position on screen
aL, aR pan2 aSnd, kPos/640
out(aL,aR)
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Open Sound Control is the way to communicate between Processing and Csound as independent
applications. Another way of connecting these extensive libraries for image and audio processing
is to use JavaScript and run both in a browser. Have a look at chapters 10 F and 12 G for more
information.
08 B. CSOUND AND ARDUINO
This chapter suggests a number of ways in which Csound can be paired with an Arduino
prototyping circuit board. It is not the intention of this chapter to go into any detail about how to
use an Arduino; there is already a wealth of information available elsewhere online. It is common
to use an Arduino and Csound with another program functioning as an interpreter, so some time is
spent discussing these other programs.
An Arduino is a simple microcontroller circuit board that has become enormously popular as a
component in multidisciplinary and interactive projects for musicians and artists since its
introduction in 2005. An Arduino board can be programmed to do many things and to send and
receive data to and from a wide variety of other components and devices. As such it is impossible
to specifically define its function here. An Arduino is normally programmed using its own
development environment (IDE). A program is written on a computer and is then uploaded to the
Arduino; the Arduino then runs this program, independent of the computer if necessary. Arduino's
IDE is based on that used by Processing and Wiring. Arduino programs are often referred to as
sketches. There now exists a plethora of Arduino variants and even a number of derivatives and
clones, but all function in more or less the same way.
Arduino - Pd - Csound
First we will consider communication between an Arduino (running a Standard Firmata) and Pd.
Later we can consider the options for further communication from Pd to Csound.
Assuming that the Arduino IDE (integrated development environment) has been installed and that
the Arduino has been connected, we should then open and upload a Firmata sketch. One can
normally be found by going to File -> Examples -> Firmata -> … There will be a variety of flavours
from which to choose but StandardFirmata should be a good place to start. Choose the appropriate
Arduino board type under Tools -> Board -> … and then choose the relevant serial port under Tools ->
Serial Port -> … Choosing the appropriate serial port may require some trial and error but if you have
chosen the wrong one this will become apparent when you attempt to upload the sketch. Once
you have established the correct serial port to use, it is worth taking a note of which number on the
list (counting from zero) this corresponds to as this number will be used by Pd to communicate
with the Arduino. Finally upload the sketch by clicking on the right-pointing arrow button.
Assuming that Pd is already installed, it will also be necessary to install an add-on library for Pd
called Pduino. Follow its included instructions about where to place this library on your platform
and then reopen Pd. You will now have access to a set of Pd objects for communicating with your
Arduino. The Pduino download will also have included a number of examples Pd. arduino-test.pd
will probably be the best patch to start. First set the appropriate serial port number to establish
communication and then set Arduino pins as input, output etc. as you desire. It is beyond the scope
of this chapter to go into further detail regarding setting up an Arduino with sensors and auxiliary
components, suffice to say that communication to an Arduino is normally tested by blinking digital
pin 13 and communication from an Arduino is normally tested by connecting a 10 kilo-ohm (10k)
potentiometer to analog pin zero. For the sake of argument, we shall assume in this tutorial that
we are setting the Arduino as a hardware controller and have a potentiometer connected to pin 0.
The picture below demonstrates a simple Pd patch that uses Pduino's objects to receive
communication from the Arduino's analog and digital inputs. (Note that digital pins 0 and 1 are normally
reserved for serial communication if the USB serial communication is unavailable.) In this exam-
ple serial port 5 has been chosen. Once the analogIns enable box for pin 0 is checked, moving the
potentiometer will change the values in the left-most number box (and move the slider connected
to it). Arduino’s analog inputs generate integers with 10-bit resolution (0 - 1023) but these values
will often be rescaled as floats within the range 0 - 1 in the host program for convenience.
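As a sketch of such a rescaling on the Csound side (the channel name "arduinoA0" and the frequency mapping are assumptions for illustration), the raw 10-bit value can be normalised and mapped to a musically useful range:

```csound
instr 1
 ; hypothetical raw value 0-1023, received from the host program
 kRaw chnget "arduinoA0"
 kNorm = kRaw / 1023 ; rescale to the range 0 - 1
 kCps = 220 * 2 ^ (kNorm * 2) ; map to a two-octave frequency range
 aSig poscil 0.2, kCps
 outs aSig, aSig
endin
```

The exponential mapping ensures that equal movements of the potentiometer correspond to equal musical intervals.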
Having established communication between the Arduino and Pd we can now consider the options
available to us for communicating between Pd and Csound. The most obvious (but not necessarily
the best or most flexible) method is to use Pd's csound6~ object. The above example could be
modified to employ csound6~ as shown below.
The outputs from the first two Arduino analog controls are passed into Csound using its API. Note
that we should use the unpegged (not quantised in time) values directly from the route object. The
Csound file control.csd is called upon by Pd and it should reside in the same directory as the Pd
patch. Establishing communication to and from Pd could employ code such as that shown below.
Data from controller one (Arduino analog 0) is used to modulate the amplitude of an oscillator and
data from controller two (Arduino analog 1) varies its pitch across a four octave range.
EXAMPLE 08B01_Pd_to_Csound.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
instr 1
; read in controller data from Pd via the API using 'invalue'
kctrl1 invalue "ctrl1"
kctrl2 invalue "ctrl2"
; re-range controller values from 0 - 1 to 7 - 11
koct = (kctrl2*4)+7
; create an oscillator
a1 vco2 kctrl1,cpsoct(koct),4,0.1
outs a1,a1
endin
</CsInstruments>
<CsScore>
i 1 0 10000
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Communication from Pd into Csound is established using the invalue opcodes and audio is passed
back to Pd from Csound using outs. Note that Csound does not address the computer’s audio
hardware itself but merely passes audio signals back to Pd. Greater detail about using Csound
within Pd can be found in the chapter Csound in Pd.
A disadvantage of this method is that in order to modify the Csound patch it must be edited in an
external editor, re-saved, and then the Pd patch must be reloaded to reflect these changes. This
workflow might be considered rather inefficient.
Another method of data communication between PD and Csound could be to use MIDI. In this
case some sort of MIDI connection node or virtual patchbay will need to be employed. On Mac
this could be the IAC driver, on Windows this could be MIDI Yoke and on Linux this could be Jack.
This method will have the disadvantage that the Arduino’s signal might have to be quantised in
order to match the 7-bit MIDI controller format but the advantage is that Csound’s audio engine
will be used (not Pd’s; in fact audio can be disabled in Pd) so that making modifications to the
Csound file and hearing the changes should require fewer steps.
A final method for communication between Pd and Csound is to use OSC. This method would have
the advantage that the Arduino's 10-bit analog signal would not have to be quantised. Again workflow should
be good with this method as Pd’s interaction will effectively be transparent to the user and once
started it can reside in the background during working. Communication using OSC is also used
between Processing and Csound so is described in greater detail below.
Arduino - Processing - Csound

The following method makes use of the Arduino and P5 (glove) libraries for Processing. Again
these need to be copied into the appropriate directory for your chosen platform in order for Processing to be able to use them. Once again there is no requirement to actually know very much
about Processing beyond installing it and running a patch (sketch). The following sketch will read
all Arduino inputs and output them as OSC.
Start the Processing sketch by simply clicking the triangle button at the top-left of the GUI. Pro-
cessing is now reading serial data from the Arduino and transmitting this as OSC data within the
computer.
The OSC data sent by Processing can be read by Csound using its own OSC opcodes. The follow-
ing example simply reads in data transmitted by Arduino’s analog pin 0 and prints changed values
to the terminal. To read in data from all analog and digital inputs you can use Iain McCurdy’s
Arduino_Processing_OSC_Csound.csd.
EXAMPLE 08B02_Processing_to_Csound.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8
nchnls = 1
0dbfs = 1
; initialise OSC listening; the port number must match
; the one used by the Processing sketch
gihandle OSCinit 12000
instr 1
; initialise variable used for analog values
gkana0 init 0
; read in OSC channel '/analog/0'
gktrigana0 OSClisten gihandle, "/analog/0", "i", gkana0
; print changed values to terminal
printk2 gkana0
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Also worth investigating is Jacob Joaquin's Csoundo - a Csound library for Processing. This
library will allow closer interaction between Processing and Csound in the manner of the csound6~
object in Pd. This project has more recently been developed by Rory Walsh.
Arduino as a MIDI Device

Programming an Arduino to generate a MIDI controller signal from analog pin 0 could be done
using code such as the following, which sends the value as controller 1 on MIDI channel 1 (both
arbitrary choices):
// example written by Iain McCurdy
// import MIDI library
#include <MIDI.h>

// create a default MIDI instance on the primary serial port
MIDI_CREATE_DEFAULT_INSTANCE();

const int analogInPin = A0; // potentiometer on analog pin 0

void setup()
{
MIDI.begin(1);
}

void loop()
{
int sensorValue = analogRead(analogInPin);
// scale the 10-bit reading (0-1023) to the 7-bit MIDI range (0-127)
// and send it as controller 1 on MIDI channel 1
MIDI.sendControlChange(1, sensorValue >> 3, 1);
delay(10);
}
Data from the Arduino can now be read using Csound's ctrl7 opcodes for reading MIDI controller
data.

The Serial Opcodes
void setup() {
// enable serial communication
Serial.begin(9600);
}

void loop()
{
// only do something if we received something
// (this should be at csound's k-rate)
if (Serial.available())
{
// while we are here, get our knob value and send it to csound
int sensorValue = analogRead(A0);
Serial.write(sensorValue/4); // scale to 1-byte range (0-255)
}
}
It will be necessary to provide the correct address of the serial port to which the Arduino is con-
nected (in the given example the Windows platform was being used and the port address was
/COM4).
It will be necessary to scale the value to correspond to the range provided by a single byte (0-255)
so therefore the Arduino’s 10 bit analog input range (0-1023) will have to be divided by four.
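The division by four is what the Serial.write(sensorValue/4) line in the sketch above performs: an integer division that maps the 10-bit range onto one byte. In Python terms:

```python
def to_byte(raw):
    """Scale a 10-bit analog reading (0-1023) into a single byte (0-255)."""
    return raw // 4  # same as raw >> 2

print(to_byte(0), to_byte(512), to_byte(1023))  # 0 128 255
```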
EXAMPLE 08B03_Serial_Read.csd
<CsoundSynthesizer>
; Example written by Matt Ingalls
; run with a commandline something like:
; --opcode-lib=serialOpcodes.dylib -odac
<CsOptions>
</CsOptions>
<CsInstruments>
ksmps = 500 ; the default krate can be too fast for the arduino to handle
0dbfs = 1
instr 1
iPort serialBegin "/COM4", 9600
kVal serialRead iPort
printk2 kVal
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
This example will read serial data from the Arduino and print it to the terminal. Reading output
streams from several of Arduino’s sensor inputs simultaneously will require more complex pars-
ing of data within Csound as well as more complex packaging of data from the Arduino. This is
demonstrated in the following example which also shows how to handle serial transmission of
integers larger than 255 (the Arduino analog inputs have 10 bit resolution).
First the Arduino sketch, in this case reading and transmitting two analog and one digital input:
// Example written by Sigurd Saue
// ARDUINO CODE:
// Analog pins
int potPin = 0;
int lightPin = 1;
// Digital pin
int buttonPin = 2;
/*
** Two functions that handle serial send of numbers of varying length
*/
void setup() {
// enable serial communication
Serial.begin(9600);
pinMode(buttonPin, INPUT);
}
void loop()
{
// Only do something if we received something (at csound's k-rate)
if (Serial.available())
{
// Read the value (to empty the buffer)
int csound_val = Serial.read();
The solution is similar to MIDI messages. You have to define an ID (a unique number >= 128)
for every sensor. The ID behaves as a status byte that clearly marks the beginning of a message
received by Csound. The remaining bytes of the message will all have a most significant bit equal
to zero (value < 128). The sensor values are transmitted as ID, length (number of data bytes),
and the data itself. The recursive function serial_send_recursive counts the number of data bytes
necessary and sends the bytes in the correct order. Only one sensor value is transmitted for each
run through the Arduino loop.
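The packing scheme can be sketched outside of Arduino and Csound as well; the following Python sketch mirrors the logic (the encoder plays the role of serial_send_recursive, the decoder the Csound receive loop; the ID value is an arbitrary example):

```python
SENSOR_POT = 128  # example ID; any unique value >= 128, matching both sides

def encode(sensor_id, value):
    """Pack a sensor value as [ID, length, data...], 7 bits per data byte."""
    data = []
    while True:
        data.insert(0, value & 0x7F)  # keep every data byte below 128
        value >>= 7
        if value == 0:
            break
    return [sensor_id, len(data)] + data

def decode(message):
    """Mirror of the Csound loop: kValue = (kValue << 7) + kByte."""
    sensor_id, length = message[0], message[1]
    value = 0
    for byte in message[2:2 + length]:
        value = (value << 7) + byte
    return sensor_id, value

msg = encode(SENSOR_POT, 1023)  # a full-scale 10-bit reading needs two bytes
print(msg)          # [128, 2, 7, 127]
print(decode(msg))  # (128, 1023)
```

Because only the ID byte ever has its most significant bit set, a receiver that loses sync can simply wait for the next byte >= 128.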
The Csound code receives the values with the ID first. Of course you have to make sure that the
IDs in the Csound code match the ones in the Arduino sketch. Here's an example of a Csound
orchestra that handles the messages sent from the Arduino sketch:
EXAMPLE 08B04_Serial_Read_multiple.csd
<CsoundSynthesizer>
<CsOptions>
-d -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 500 ; the default krate can be too fast for the arduino to handle
nchnls = 2
0dbfs = 1
giSaw ftgen 0, 0, 4096, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8
instr 1
iPort serialBegin "/COM5", 9600 ;connect to the arduino with baudrate = 9600
serialWrite iPort, 1 ;Triggering the Arduino (k-rate)
kValue = 0
kType serialRead iPort ; Read type of data (pot, light, button)
kIndex = 0
kSize serialRead iPort
loopStart:
kValue = kValue << 7
kByte serialRead iPort
kValue = kValue + kByte
loop_lt kIndex, 1, kSize, loopStart
endif
continue:
; Here you can do something with the variables kPot, kLight and kButton
; printks "Pot %f\n", 1, kPot
; printks "Light %f\n", 1, kLight
; printks "Button %d\n", 1, kButton
if (kButton == 0) then
out aOut
endif
endin
</CsInstruments>
<CsScore>
i 1 0 60 ; Duration one minute
e
</CsScore>
</CsoundSynthesizer>
;example written by Sigurd Saue
Remember to provide the correct address of the serial port to which the Arduino is connected (the
example uses /COM5).
HID
Another option for communication has been made available by a new Arduino board called
Leonardo. It pairs with a computer as if it were an HID (Human Interface Device) such as a
mouse, keyboard or a gamepad. Sensor data can therefore be used to imitate the actions of
a mouse connected to the computer or keystrokes on a keyboard. Csound is already equipped
with opcodes to make use of this data. Gamepad-like data is perhaps the most useful option
though and there exist opcodes (at least in the Linux version) for reading gamepad data. It is also
possible to read in data from a gamepad using pygame and Csound’s python opcodes.
08 C. CSOUND VIA UDP
Alternatively, if using a frontend such as CsoundQt, it is possible to run an empty CSD, with the
--port option in its CsOptions field:
EXAMPLE 10F01_csound_udp.csd
<CsoundSynthesizer>
<CsOptions>
--port=1234
</CsOptions>
<CsInstruments>
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
This will start Csound in daemon mode, waiting for any UDP messages on port 1234. Now with
netcat, orchestra code can be sent to Csound. A basic option is to use it interactively in the terminal, with a here-document (<<) to indicate the end of the orchestra we are sending:
$ nc -u 127.0.0.1 1234 << EOF
> instr 1
> a1 oscili p4*0dbfs,p5
> out a1
> endin
> schedule 1,0,1,0.5,440
> EOF
Csound will respond with a 440Hz sinewave. The ctl-c key combination can be used to close nc
and go back to the shell prompt. Alternatively, we could write our orchestra code to a file and then
send it to Csound via the following command (orch is the name of our file):
$ nc -u 127.0.0.1 1234 < orch
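The same can be done from any language that can send UDP datagrams; here is a minimal Python sketch, assuming a Csound daemon started with --port=1234 is listening on the local machine:

```python
import socket

def send_to_csound(code, host="127.0.0.1", port=1234):
    """Send a block of orchestra code to a Csound UDP server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return sock.sendto(code.encode(), (host, port))
    finally:
        sock.close()

orc = """
instr 1
 a1 oscili p4*0dbfs, p5
 out a1
endin
schedule 1, 0, 1, 0.5, 440
"""

send_to_csound(orc)  # Csound compiles the orchestra on the fly
```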
Csound performance can be stopped in the usual way via ctl-c in the terminal, or through the dedicated transport controls in a frontend. We can also stop the UDP server via a special UDP message:

$ echo "##close##" | nc -u 127.0.0.1 1234

However, this will not close Csound, but just stop the UDP server.
09 A. CSOUND IN PD
Installing
You can embed Csound in PD via the external object csound6~ which has been written by Victor
Lazzarini. This external is either part of the Csound distribution or can be built from the sources
at https://github.com/csound/csound_pd. In the examples folder of this repository you
can also find all the .csd and .pd files of this chapter.
On Ubuntu Linux, you can install the csound6~ via the Synaptic Package Manager. Just look for
csound6~ or pd-csound, check install, and your system will install the library at the appropriate
location. If you build Csound from sources, go to the csound_pd repository and follow the build
instructions. Once it is compiled, the object will appear as csound6~.pd_linux and should be copied
(together with csound6~-help.pd) to /usr/lib/pd/extra, so that PD can find it. If not, add it to PD’s
search path (File->Path…).
On Mac OSX, you find the csound6~ external, help file and examples in the release directory
of the csound_pd repository. (Prior to 6.11, the csound6~ was in /Library/Frameworks/C-
soundLib64.framework/Versions/6.0/Resources/PD after installing Csound.)
Put these files in a folder which is in PD’s search path. For PD-extended, it is by default ~/Li-
brary/Pd. But you can put it anywhere. Just make sure that the location is specified in PD’s
Preferences-> Path… menu.
On Windows, you find the csound6~ external, help file and examples in the release directory of the
csound_pd repository, too.
Control Data
You can send control data from PD to your Csound instrument via the keyword control in a mes-
sage box. In your Csound code, you must receive the data via invalue or chnget. This is a simple
example:
EXAMPLE 09A01_pdcs_control_in.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 8
instr 1
kFreq invalue "freq"
kAmp invalue "amp"
aSin poscil kAmp, kFreq
out aSin, aSin
endin
</CsInstruments>
<CsScore>
i 1 0 10000
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Save this file under the name control.csd. Open a new PD window, save it in the same folder, and
create the following patch:
Note that for invalue channels, you first must register these channels by a set message. The usage
of chnget is easier; a simple example can be found in this example in the csound6~ repository.
As you see, the first two outlets of the csound6~ object are the signal outlets for the audio channels
1 and 2. The third outlet is an outlet for control data (not used here, see below). The rightmost
outlet sends a bang when the score has been finished.
Live Input
Audio streams from PD can be received in Csound via the inch opcode. The number of audio inlets
created in the csound6~ object will depend on the number of input channels used in the Csound
orchestra. The following .csd uses two audio inputs:
EXAMPLE 09A02_pdcs_live_in.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
0dbfs = 1
ksmps = 8
nchnls = 2
instr 1
aL inch 1
aR inch 2
kcfL randomi 100, 1000, 1; center frequency
kcfR randomi 100, 1000, 1; for band pass filter
aFiltL butterbp aL, kcfL, kcfL/10
aoutL balance aFiltL, aL
aFiltR butterbp aR, kcfR, kcfR/10
aoutR balance aFiltR, aR
outch 1, aoutL
outch 2, aoutR
endin
</CsInstruments>
<CsScore>
i 1 0 10000
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
MIDI
The csound6~ object receives MIDI data via the keyword midi. Csound is able to trigger instrument
instances on receiving a note-on message, and to turn them off on receiving a note-off message
(or a note-on message with velocity=0). So this is a very simple way to build a synthesizer with
arbitrary polyphonic output:
This is the corresponding midi.csd. It must contain the options -+rtmidi=null -M0 in the <CsOptions>
tag. It is an FM synth in which the modulation index is defined according to the note velocity. The
harder a key is struck, the higher the index of modulation will be, and therefore a greater
number of stronger partials will be created. The ratio is calculated randomly between two limits,
which can be adjusted.
EXAMPLE 09A03_pdcs_midi.csd
<CsoundSynthesizer>
<CsOptions>
-+rtmidi=null -M0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine table used by foscili below
instr 1
iFreq cpsmidi ;gets frequency of a pressed key
iAmp ampmidi 8;gets amplitude and scales 0-8
iRatio random .9, 1.1; ratio randomly between 0.9 and 1.1
aTone foscili .1, iFreq, 1, iRatio/5, iAmp+1, giSine; fm
aEnv linenr aTone, 0, .01, .01; avoiding clicks at the end of a note
outs aEnv, aEnv
endin
</CsInstruments>
<CsScore>
f 0 36000; play for 10 hours
e
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Score Events
Score events can be sent from PD to Csound by a message with the keyword event. You can send
any kind of score events, like instrument calls or function table statements. The following example
triggers Csound’s instrument 1 whenever you press the message box on the top. Different sounds
can be selected by sending f events (building/replacing a function table) to Csound.
EXAMPLE 09A04_pdcs_events.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8
nchnls = 2
0dbfs = 1
seed 0; each time different seed
instr 1
iDur random 0.5, 3
p3 = iDur
iFreq1 random 400, 1200
iFreq2 random 400, 1200
idB random -18, -6
kFreq linseg iFreq1, iDur, iFreq2
kEnv transeg ampdb(idB), p3, -10, 0
aTone poscil kEnv, kFreq
outs aTone, aTone
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Control Output
If you want Csound to pass any control data to PD, you can use the opcode outvalue. You will
receive this data at the second outlet from the right of the csound6~ object. The data are sent as
a list with two elements. The name of the control channel is the first element, and the value is the
second element. You can get the values by a route object or by a send/receive chain. This is a
simple example:
EXAMPLE 09A05_pdcs_control_out.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 8
instr 1
ktim times
kphas phasor 1
outvalue "time", ktim
outvalue "phas", kphas*127
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Send/Receive Buffers from PD to Csound and back

EXAMPLE 09A06_pdcs_tabset_tabget.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8
nchnls = 1
0dbfs = 1
instr 1
itable = p4
aout BufPlay1 itable
out aout
endin
</CsInstruments>
<CsScore>
f 0 99999
</CsScore>
</CsoundSynthesizer>
Settings
Make sure that the Csound vector size given by the ksmps value is not larger than the internal PD
vector size. It should be a power of 2. I would recommend starting with ksmps=8. If there are
performance problems, try to increase this value to 16, 32, or 64, i.e. ascending powers of 2.
The csound6~ object runs by default if you turn on audio in PD. You can stop it by sending a run 0
message, and start it again with a run 1 message.
You can recompile the csd file of a csound6~ object by sending a reset message.
By default, you see all the messages of Csound in the PD window. If you do not want to see them,
send a message 0 message. message 1 re-enables message printing.
If you want to open a new .csd file in the csound6~ object, send the message open, followed by
the path of the .csd file you want to load.
A rewind message rewinds the score without recompilation. The message offset, followed by a
number, offsets the score playback by that number of seconds.
09 B. CSOUND IN MAXMSP
Csound can be embedded in a Max patch using the csound~ object. This allows you to synthesize
and process audio, MIDI, or control data with Csound.
Note: Most of the descriptions below have been written years ago by Davis Pyon. They may be
outdated and will need to be updated.
Installing
The csound~ requires an installation of Csound. The external can be downloaded on Csound’s
download page (under Other).
3. Create a text file called helloworld.csd within the same folder as your patch.
EXAMPLE 09B01_maxcs_helloworld.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
aNoise noise .1, 0
outch 1, aNoise, 2, aNoise
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
5. Open the patch, press the bang button, then press the speaker icon.
At this point, you should hear some noise. Congratulations! You created your first csound~ patch.
You may be wondering why we had to save, close, and reopen the patch. This is needed in order
for csound~ to find the csd file. In effect, saving and opening the patch allows csound~ to “know”
where the patch is. Using this information, csound~ can then find csd files specified using a rel-
ative pathname (e.g. helloworld.csd). Keep in mind that this is only necessary for newly created
patches that have not been saved yet. By the way, had we specified an absolute pathname (e.g.
C:/Mystuff/helloworld.csd), the process of saving and reopening would have been unnecessary.
The @scale 0 argument tells csound~ not to scale audio data between Max and Csound. By de-
fault, csound~ will scale audio to match 0dB levels. Max uses a 0dB level equal to one, while
Csound uses a 0dB level equal to 32768. Using @scale 0 and adding the statement 0dbfs = 1
within the csd file allows you to work with a 0dB level equal to one everywhere. This is highly
recommended.
Audio I/O
All csound~ inlets accept an audio signal and some outlets send an audio signal. The number of
audio outlets is determined by the arguments to the csound~ object. Here are four ways to specify
the number of inlets and outlets:
- [csound~ @io 3]
- [csound~ @i 4 @o 7]
- [csound~ 3]
- [csound~ 4 7]
@io 3 creates 3 audio inlets and 3 audio outlets. @i 4 @o 7 creates 4 audio inlets and 7 audio
outlets. The third and fourth lines accomplish the same thing as the first two. If you don’t specify
the number of audio inlets or outlets, then csound~ will have two audio inlets and two audio outlets.
By the way, audio outlets always appear to the left of non-audio outlets. Let’s create a patch called
audio_io.maxpat that demonstrates audio i/o:
EXAMPLE 09B02_maxcs_audio_io.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 3
0dbfs = 1
instr 1
aTri1 inch 1
aTri2 inch 2
aTri3 inch 3
aMix = (aTri1 + aTri2 + aTri3) * .2
outch 1, aMix, 2, aMix
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
;example by Davis Pyon
In audio_io.maxpat, we are mixing three triangle waves into a stereo pair of outlets. In audio_io.csd,
we use inch and outch to receive and send audio from and to csound~. inch and outch both use a
numbering system that starts with one (the left-most inlet or outlet).
Notice the statement nchnls = 3 in the orchestra header. This tells the Csound compiler to cre-
ate three audio input channels and three audio output channels. Naturally, this means that our
csound~ object should have no more than three audio inlets or outlets.
Control Messages
Control messages allow you to send numbers to Csound. It is the primary way to control Csound
parameters at i-rate or k-rate. To control a-rate (audio) parameters, you must use an audio inlet.
Here are two examples:
Notice that you can use either control or c to indicate a control message. The second argument
specifies the name of the channel you want to control and the third argument specifies the value.
EXAMPLE 09B03_maxcs_control_in.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine table used by foscil below
instr 1
kPitch chnget "pitch"
kMod invalue "mod"
aFM foscil .2, cpsmidinn(kPitch), 2, kMod, 1.5, giSine
outch 1, aFM, 2, aFM
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
;example by Davis Pyon
In the patch, notice that we use two different methods to construct control messages. The pak
method is a little faster than the message box method, but do whatever looks best to you. You
may be wondering how we can send messages to an audio inlet (remember, all inlets are audio
inlets). Don’t worry about it. In fact, we can send a message to any inlet and it will work.
In the Csound file, notice that we use two different opcodes to receive the values sent in the control
messages: chnget and invalue. chnget is more versatile (it works at i-rate and k-rate, and it accepts
strings) and is a tiny bit faster than invalue. On the other hand, the limited nature of invalue (only
works at k-rate, never requires any declarations in the header section of the orchestra) may be
easier for newcomers to Csound.
MIDI
csound~ accepts raw MIDI numbers in its first inlet. This allows you to create Csound instrument
instances with MIDI notes and also control parameters using MIDI Control Change. csound~ ac-
cepts all types of MIDI messages, except for: sysex, time code, and sync. Let’s look at a patch and
text file that uses MIDI:
EXAMPLE 09B04_maxcs_midi.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
massign 0, 0 ; clear all channel/instrument mappings
massign 1, 1 ; map MIDI channel 1 to instrument 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine table used by foscil below
instr 1
iPitch cpsmidi
kMod midic7 1, 0, 10
aFM foscil .2, iPitch, 2, kMod, 1.5, giSine
outch 1, aFM, 2, aFM
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Davis Pyon
In the patch, notice how we’re using midiformat to format note and control change lists into raw
MIDI bytes. The 1 argument for midiformat specifies that all MIDI messages will be on channel
one.
In the Csound file, notice the massign statements in the header of the orchestra. massign 0, 0
tells Csound to clear all mappings between MIDI channels and Csound instrument numbers. This
is highly recommended because forgetting to add this statement may cause confusion some-
where down the road. The next statement massign 1,1 tells Csound to map MIDI channel one to
instrument one.
To get the MIDI pitch, we use the opcode cpsmidi. To get the FM modulation factor, we use midic7
in order to read the last known value of MIDI CC number one (mapped to the range [0,10]).
(Csound's MIDI options and opcodes are described in detail in section 7 of this manual.)
Notice that in the score section of the Csound file, we no longer have the statement i1 0 86400 as
we had in earlier examples. The score section is left empty here, so that instrument 1 is compiled
but not activated. Activation is done via MIDI here.
Events
To send Csound events (i.e. score statements), use the event or e message. You can send any
type of event that Csound understands. The following patch and text file demonstrate how to
send events:
EXAMPLE 09B05_maxcs_events.csd
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iDur = p3
iCps = cpsmidinn(p4)
iMeth = 1
print iDur, iCps, iMeth
aPluck pluck .2, iCps, iCps, 0, iMeth
outch 1, aPluck, 2, aPluck
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
;example by Davis Pyon
In the patch, notice how the arguments to the pack object are declared. The i1 statement tells
Csound that we want to create an instance of instrument one. There is no space between i and
1 because pack considers i as a special symbol signifying an integer. The next number specifies
the start time. Here, we use 0 because we want the event to start right now. The duration 3. is
specified as a floating point number so that we can have non-integer durations. Finally, the number
64 determines the MIDI pitch. You might be wondering why the pack object output is being sent
to a message box. This is good practice as it will reveal any mistakes you made in constructing
an event message.
In the Csound file, we access the event parameters using p-statements. We never access p1 (in-
strument number) or p2 (start time) because they are not important within the context of our in-
strument. Although p3 (duration) is not used for anything here, it is often used to create audio
envelopes. Finally, p4 (MIDI pitch) is converted to cycles-per-second. The print statement is there
so that we can verify the parameter values.
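The conversion performed by cpsmidinn follows the standard equal-tempered tuning formula, with A4 = MIDI note 69 = 440 Hz; as a Python sketch:

```python
def cpsmidinn(note):
    """Convert a MIDI note number to cycles per second (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(cpsmidinn(69))            # 440.0
print(round(cpsmidinn(64), 2))  # 329.63, the E above middle C
```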
09 C. CSOUND AS A VST PLUGIN
Csound can be built into a VST or AU plugin through the use of the Csound host API. Refer to the
section on using the Csound API for more details.
The best choice currently is to use Cabbage to create Csound based plugins. See the Cabbage
chapter in part 10 of this manual.
10 A. CSOUNDQT
CsoundQt (named QuteCsound until autumn 2011) is a free, cross-platform graphical frontend to
Csound. It has been written by Andrés Cabrera and is maintained since 2016 by Tarmo Johannes. It
features syntax highlighting, code completion and a graphical widget editor for realtime control of
Csound. It comes with many useful code examples, from basic tutorials to complex synthesizers
and pieces written in Csound. It also features an integrated Csound language help display.
Installing
CsoundQt is a frontend for Csound, so Csound needs to be installed first. Make sure you have
installed Csound before you install CsoundQt; otherwise it will not work at all.
CsoundQt is included in the Csound installers for Mac OSX and Windows. It is recommended to
use the CsoundQt version which is shipped with the installer for compatibility between CsoundQt
and Csound. The Windows installer will probably install CsoundQt automatically. For OSX, first in-
stall the Csound package, then open the CsoundQt disk image and copy the CsoundQt Application
into the Applications folder.
For Linux there is a Debian/Ubuntu package. Unfortunately it is built without RtMidi support, so you
will not be able to connect CsoundQt’s widgets directly with your midi controllers. The alternative
is to build CsoundQt with QtCreator which is not too hard and gives you all options, including the
PythonQt connection. You will find instructions how to build in the CsoundQt Wiki.
General Usage and Configuration
In the widget editor panel, you can create a variety of widgets to control Csound. To link the value
from a widget, you first need to set its channel, and then use the Csound opcodes invalue or chnget.
To send values to widgets, e.g. for data display, you need to use the outvalue or chnset opcode.
CsoundQt also implements the use of HTML and JavaScript code embedded in the optional <html>
element of the CSD file. If this element is detected, CsoundQt will parse it out as a Web page,
compile it, and display it in the HTML5 Gui window. HTML code in this window can control Csound
via a selected part of the Csound API that is exposed in JavaScript. This can be used to define
custom user interfaces, display video and 3D graphics, generate Csound scores, and much more.
See chapter Csound and Html for more information.
CsoundQt also offers convenient facilities for score editing in a spreadsheet like environment
which can be transformed using Python scripting (see also the chapter about Python in CsoundQt).
Configuring CsoundQt
CsoundQt gives easy access to the most important Csound options and to many specific
CsoundQt settings via its Configuration Panel. In particular the Run tab offers many choices
which have to be understood and set carefully.
To open the configuration panel simply push the Configure button. The configuration panel
comprises seven tabs. The available configurable parameters in each tab are described below.
The single options, their meaning and tips of how to set them are listed at the Configuring
CsoundQt page of CsoundQt’s website.
10 B. CABBAGE
Cabbage is software for prototyping and developing audio instruments with the Csound audio
synthesis language. Instrument development and prototyping is carried out with the main Cabbage IDE. Users write and compile Csound code in a code editor. If one wishes, they can also
create a graphical frontend, although this is not essential. Any Csound file can be run with Cabbage, regardless of whether or not it has a graphical interface. Cabbage is designed with realtime
processing in mind. While it is possible to use Cabbage to run Csound in the more traditional
score-driven way, your success may vary.
Cabbage is a 'host' application. It treats each and every Csound instrument as a unique native
plugin, which gets added to a digital audio graph (DAG) once it is compiled. The graph can be
opened and edited at any point during a performance. If one wishes to use one of their Csound
instruments in another audio plugin host, such as Reaper, Live, Bitwig, Ardour, or QTractor, they
can export the instrument through the 'Export instrument' option.
If you already have Csound installed, you can skip this step. Note that you will need to have
Csound installed one way or another in order to run Cabbage.
Using Cabbage
Instrument development and prototyping is carried out with the main Cabbage IDE. Users write
and compile their Csound code in a code editor; each Csound file opened will have a correspond-
ing editor. If one wishes, one can also create a graphical frontend, although this is no longer a
requirement for Cabbage. Any Csound file can be run with Cabbage, regardless of whether or not
it has a graphical interface. Each Csound file that is compiled by Cabbage will be added to an
underlying digital audio graph. Through this graph, users can manage and configure instruments
to create patches of complex processing chains.
Opening files
Users can open any .csd file by clicking on the Open File menu command, or the corresponding
toolbar button. Users can also browse the Examples menu from the main File menu. Cabbage
ships with over 100 high-end instruments that can be modified, hacked, and adapted in any way
you wish. Note that if you wish to modify the examples, use the Save-as option first. Although this
is only required on Windows, it’s a good habit to form: you don’t want to constantly overwrite the
examples with your own code. Cabbage can load and perform non-Cabbage score-driven .csd
files. However, it also uses its own audio IO, so it will overwrite any -odac options set in the
CsOptions section of a .csd file.
ä A new synth.
When this option is selected, Cabbage will generate a simple synthesiser with an ADSR en-
velope and a MIDI keyboard widget. In the world of VST, these instruments are referred to as
VSTi’s.
ä A new effect.
When this option is selected Cabbage will create a simple audio effect. It will generate a sim-
ple Csound instrument that provides access to an incoming audio stream. It also generates
code that will control the gain of the output.
ä A new Csound file.
This will generate a basic Csound file without any graphical frontend.
Note that these templates are provided for quickly creating new instruments. One can modify any
of the template code to convert it from a synth to an effect or vice versa.
Building/exporting instruments
To run an instrument, users can use the controls at the top of the file’s editor. Alternatively, one can
go to the ‘Tools’ menu and hit ‘Build Instrument’. If you wish to export a plugin, go to ‘Export’ and
choose the type of plugin you wish to export. To use the plugin in a host, you will need to let the
host know where your plugin file is located. On Windows and Linux, the corresponding .csd should
be located in the same directory as the plugin dll. The situation is different on MacOS, as the .csd
file is automatically packaged into the plugin bundle.
Closing a file will not stop it from performing. To stop a file from performing you must hit the Stop
button.
You can move widgets around whilst in edit mode. You can also right-click and insert new widgets,
as well as modify their appearance using the GUI properties editor on the right-hand side of the
screen.
You will notice that when you select a widget whilst in edit mode, Cabbage will highlight the corre-
sponding line of text in your source code. When updating GUI properties, hit ‘Enter’ when you are
editing single line text or numeric properties, ‘Tab’ when you are editing multi-line text properties,
and ‘Escape’ when you are editing colours.
Instruments can also be added directly through the graph by right-clicking and adding them from
the context menu. The context menu will show all the examples files, along with a selection of
files from a user-defined folder. See the section below on Settings to learn how to set this folder.
Instruments can also be deleted by right-clicking the instrument node. Users can delete or modify
connections by clicking on the connections themselves. They can also connect nodes by clicking
and dragging from an output to an input.
Once an instrument node has been added, Cabbage will automatically open the corresponding
code. Each time you update the corresponding source code, the node will also be updated.
As mentioned above, closing a file will not stop it from performing. It is possible to have many
instruments running even though their code is not showing. To stop an instrument you must hit
the Stop button at the top of its editor, or delete the plugin from the graph.
It can become quite tricky to navigate very long text files. For this reason Cabbage provides a
means of quickly jumping to instrument definitions. It is also possible to create a special
;- Region: tag. Any text that appears after this tag will be appended to the drop-down combo
box in the Cabbage tool bar.
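As an illustration (the instrument content here is invented, not from the original text), a region tag placed above a block of code might look like this:

```csound
;- Region: Oscillators
instr 1
aOut vco2 0.5, 220  ; simple sawtooth at 220 Hz
outs aOut, aOut
endin
```

The text Oscillators would then appear in the drop-down combo box in the toolbar, letting you jump straight to this part of the file.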
Code can be modified, edited or deleted at a later stage in the Settings dialogue.
Settings
The settings dialogue can be opened via the Edit->Settings menu command, or by pressing
the Settings cog in the main toolbar.
Editor
The following settings provide control for various aspects of Cabbage and how it runs its instru-
ments.
ä Auto-load: Enabling this will cause Cabbage to automatically load the last files that were
open.
ä Plugin Window: Enable this checkbox to ensure that the plugin window is always on top and
does not disappear behind the main editor when it loses focus.
ä Graph Window: Same as above only for the Cabbage patcher window.
ä Auto-complete: provides a rough auto-complete of variable names.
ä Editor lines to scroll with MouseWheel: Sets the number of lines to jump on each movement
of the mouse wheel.
Directories
These directory fields are given default directories that rarely, if ever, need to be changed.
ä Csound manual directory: Sets the path to index.html in the Csound help manual. The default
directories will be the standard location of the Csound help manual after installing Csound.
ä Cabbage manual directory: Sets the path to index.html in the Cabbage help manual.
ä Cabbage examples directory: Sets the path to the Cabbage examples folder. This should
never need to be modified.
ä User files directory: Sets the path to a folder containing user files that can be inserted by right-
clicking in the patcher. Only files stored in this path and the examples path will be accessible in
the Cabbage patcher context menu.
Colours
ä Interface: Allows users to set custom colours for various elements of the main graphical
interface.
ä Editor: Allows users to modify the syntax highlighting in the Csound editor.
ä Console: Allows users to change various colours in the Csound output console.
Code Repository
This tab shows the various blocks of code that have been saved to the repository. You can edit or
delete any of the code blocks. Hit Save/Update to update any changes.
First Synth
As mentioned in the previous section, each Cabbage instrument is defined in a simple text file
with a .csd extension. The syntax used to create GUI widgets is quite straightforward and should
be provided within the special xml-style tags <Cabbage> and </Cabbage>, which can appear either
above or below Csound’s own <CsoundSynthesizer> tags. Each line of Cabbage-specific code
relates to one GUI widget only. The attributes of each widget are set using different identifiers
such as colour(), channel(), size(), etc. Where identifiers are not used, Cabbage will use their
default values. Long lines can be broken up with a \ placed at the end of a line.
Each and every Cabbage widget has four common parameters: position on screen (x, y) and
size (width, height). Apart from position and size, all other parameters are optional and, if left out,
default values will be assigned. To set widget parameters you will need to use an appropriate
identifier after the widget name. More information on the various widgets and identifiers available
in Cabbage can be found in the Widget reference section of these docs.
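For instance, a single widget line using a few identifiers, together with the line-continuation character, might look like this (channel name and values chosen for illustration):

```csound
rslider bounds(10, 10, 70, 70), \
        channel("gain"), \
        range(0, 1, 0.5), \
        text("Gain")
```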
Getting started
Now that the basics of the Csound language have been outlined, let’s create a simple instrument.
The opcodes used in this simple walk through are vco2, madsr, moogladder and outs.
The vco2 opcode models a voltage-controlled oscillator. It provides users with an effective way
of generating band-limited waveforms and can be the building block of many a synthesiser. Its
syntax, taken from the Csound reference manual, is given below. It is important to become au
fait with the way opcodes are presented in the Csound reference manual. It, along with the
Cabbage widget reference are two documents that you will end up referencing time and time again
as you start developing Cabbage instruments.
ares vco2 kamp, kcps [, imode] [, kpw] [, kphs] [, inyx]
vco2 outputs an a-rate signal and accepts several different input arguments. The types of input
parameters are given by the first letter of their names. We see above that the kamp argument
needs to be k-rate. Square brackets around an input argument mean that the argument is optional
and can be left out. Although not seen above, whenever an input argument starts with x, it can be
an i-, k- or a-rate variable.
kamp determines the amplitude of the signal, while kcps sets its frequency. The default type of
waveform created by vco2 is a sawtooth. The simplest instrument that can be written using
vco2 is given below. The out opcode is used to output an a-rate signal as audio.
instr 1
aOut vco2 1, 440
out aOut
endin
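The optional arguments can be filled in to select other waveforms. For instance (values chosen for illustration, not from the original text), imode=2 selects a pulse waveform whose width is set by the optional kpw argument:

```csound
instr 1
; imode=2: square/pulse wave with variable pulse width;
; kpw=0.5 gives a symmetrical square wave
aOut vco2 1, 440, 2, 0.5
out aOut
endin
```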
In the traditional Csound context, we would start this instrument using a score statement. We’ll
learn about score statements later, but because we are building a synthesiser that will be played
with a MIDI keyboard, our score section will not be very complex. In fact, it will only contain one
line of code. f0 z is a special score statement that instructs Csound to listen for events for an
extremely long time. Below is the entire source code, including a simple Cabbage section for the
instrument presented above.
EXAMPLE 10B01_cabbage_1.csd
<Cabbage>
form caption("Untitled") size(400, 300), \
colour(58, 110, 182), \
pluginID("def1")
keyboard bounds(8, 158, 381, 95)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-+rtmidi=NULL -M0 -m0d --midi-key-cps=4 --midi-velocity-amp=5
</CsOptions>
<CsInstruments>
; Initialize the global variables.
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iFreq = p4  ; frequency from MIDI note (see --midi-key-cps=4)
iAmp = p5   ; amplitude from MIDI velocity (see --midi-velocity-amp=5)
aOut vco2 iAmp, iFreq
outs aOut, aOut
endin
</CsInstruments>
<CsScore>
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>
You’ll notice that the amplitude and frequency for the vco2 opcode have been replaced with two
i-rate variables, iAmp and iFreq, which in turn get their values from p5 and p4. p5 and p4 are
p-variables; their values will be assigned based on incoming MIDI data. If you look at the code in
the <CsOptions> section you’ll see the text --midi-key-cps=4 --midi-velocity-amp=5. This instructs
Csound to pass the current note’s velocity to p5, and the current note’s frequency, in cycles per
second (Hz), to p4. p4 and p5 were chosen arbitrarily; p7, p8, p-whatever could have been used,
so long as we accessed those same p variables in our instrument.
Another important piece of text from the <CsOptions> section is the ‘-M0’. This tells Csound to
read MIDI from device 0 and send the MIDI data to our instruments. Whenever a note is pressed
using the on-screen MIDI keyboard, or a hardware keyboard if one is connected, it will trigger
instrument 1 to play. Not only will it trigger instrument 1 to play, it will also pass the current note’s
amplitude and frequency to our two p-variables.
Csound offers several ADSR envelopes. The simplest one to use, and the one that works out of
the box with MIDI-based instruments, is madsr. Its syntax, as listed in the Csound reference
manual, is given as:
kres madsr iatt, idec, islev, irel
Note that the inputs to madsr are i-rate. They cannot change over the duration of a note. There
are several places in the instrument code where the output of this opcode can be used. It could be
applied directly to the first input argument of the vco2 opcode, or it can be placed in the line with
the out opcode. Both are valid approaches.
EXAMPLE 10B02_cabbage_2.csd
<Cabbage>
form caption("Untitled") size(400, 300), \
colour(58, 110, 182), \
pluginID("def1")
keyboard bounds(8, 158, 381, 95)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d --midi-key-cps=4 --midi-velocity-amp=5
</CsOptions>
<CsInstruments>
; Initialize the global variables.
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iFreq = p4
iAmp = p5
; envelope times here are illustrative
kEnv madsr 0.1, 0.4, 0.6, 0.8
aOut vco2 iAmp, iFreq
outs aOut*kEnv, aOut*kEnv
endin
</CsInstruments>
<CsScore>
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>
It can’t be stated enough that each widget responsible for controlling an aspect of your instrument
MUST have a channel set using the channel() identifier. Why? Because Csound can access
these channels using its chnget opcode. The syntax for chnget is very simple:
kRes chnget "channel"
The chnget opcode will create a variable containing the current value of the named channel.
The rate at which the chnget opcode operates is determined by the first letter of its output
variable. The simple instrument shown in the complete example above can now be modified so
that it accesses the values of each of the sliders.
instr 1
iFreq = p4
iAmp = p5
iAtt chnget "att"
iDec chnget "dec"
iSus chnget "sus"
iRel chnget "rel"
kEnv madsr iAtt, iDec, iSus, iRel
aOut vco2 iAmp, iFreq
outs aOut*kEnv, aOut*kEnv
endin
Every time a user plays a note, the instrument will grab the current value of each slider and use
that value to set its ADSR envelope. Note that the chnget opcodes listed above all operate at i-time
only. This is important because the madsr opcode expects i-rate variables.
The moogladder filter’s first input argument is an a-rate variable. The next two arguments set the
filter cut-off frequency and the amount of resonance to be added to the signal. Both of these can
be k-rate variables, thus allowing them to be changed during the note. Cut-off and resonance
controls can easily be added to our instrument. To do so we need to add two more sliders to our
Cabbage section of code. We’ll also need to add two more chnget opcodes and a moogladder to
our Csound code. One thing to note about the cut-off slider is that it should be exponential: as the
user increases the slider, it should increment in larger and larger steps. We can do this by setting
the slider’s skew value to 0.5. More details about this can be found in the slider widget reference
page.
EXAMPLE 10B03_cabbage_3.csd
<Cabbage>
form caption("Simple Synth") size(450, 220), \
colour(58, 110, 182), \
pluginID("def1")
keyboard bounds(14, 88, 413, 95)
rslider bounds(12, 14, 70, 70), \
channel("att"), \
range(0, 1, 0.01, 1, .01), \
text("Attack")
rslider bounds(82, 14, 70, 70), \
channel("dec"), \
range(0, 1, 0.5, 1, .01), \
text("Decay")
rslider bounds(152, 14, 70, 70), \
channel("sus"), \
range(0, 1, 0.5, 1, .01), \
text("Sustain")
rslider bounds(222, 14, 70, 70), \
channel("rel"), \
range(0, 1, 0.7, 1, .01), \
text("Release")
rslider bounds(292, 14, 70, 70), \
channel("cutoff"), \
range(0, 22000, 2000, .5, .01), \
text("Cut-Off")
rslider bounds(360, 14, 70, 70), \
channel("res"), range(0, 1, 0.7, 1, .01), \
text("Resonance")
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d --midi-key-cps=4 --midi-velocity-amp=5
</CsOptions>
<CsInstruments>
; Initialize the global variables.
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iFreq = p4
iAmp = p5
iAtt chnget "att"
iDec chnget "dec"
iSus chnget "sus"
iRel chnget "rel"
kCutOff chnget "cutoff"
kRes chnget "res"
kEnv madsr iAtt, iDec, iSus, iRel
aOut vco2 iAmp, iFreq
aOut moogladder aOut, kCutOff, kRes
outs aOut*kEnv, aOut*kEnv
endin
</CsInstruments>
<CsScore>
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>
Sightings of LFOs!
Many synths use some kind of automation to control filter parameters. Sometimes an ADSR is
used to control the cut-off frequency of a filter, and in other cases low frequency oscillators, or
LFOs, are used. As we have already seen how ADSRs work, let’s look at implementing an LFO to
control the filter’s cut-off frequency. Csound comes with a standard lfo opcode that provides
several different types of waveform. Its syntax, as listed in the Csound reference manual, is
given as:
kres lfo kamp, kcps [, itype]
ä itype = 0 sine
ä itype = 1 triangle
ä itype = 2 square (bipolar)
ä itype = 3 square (unipolar)
ä itype = 4 saw-tooth
ä itype = 5 saw-tooth(down)
In our example we will use a downward moving saw-tooth waveform. A basic implementation is described below.
The output of the LFO is multiplied by the value of kCutOff. The frequency of the LFO is set to 1,
which means the cut-off frequency will move from kCutOff to 0 once every second. This will create
a simple rhythmical effect. Of course it doesn’t make much sense to have the frequency fixed at
1. Instead, it is better to give the user control over the frequency using another slider. Finally, an
amplitude control slider will also be added, allowing users to control the overall amplitude of their
synth.
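Putting these pieces together, a minimal sketch of such an instrument might look as follows. This is an assumption based on the description above, using the slider channels att, dec, sus, rel, cutoff, res and LFOFreq from the example; it is not the original example code:

```csound
instr 1
iFreq = p4
iAmp = p5
iAtt chnget "att"
iDec chnget "dec"
iSus chnget "sus"
iRel chnget "rel"
kCutOff chnget "cutoff"
kRes chnget "res"
kLFOFreq chnget "LFOFreq"
; unipolar downward saw-tooth (itype 5), moving between 1 and 0
kLFO lfo 1, kLFOFreq, 5
kEnv madsr iAtt, iDec, iSus, iRel
aOut vco2 iAmp, iFreq
; the LFO scales the cut-off: it sweeps from kCutOff down to 0
aOut moogladder aOut, kCutOff*kLFO, kRes
outs aOut*kEnv, aOut*kEnv
endin
```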
There are many further improvements that could be made to this simple instrument. For example,
a second vco2 could be added to create a detune effect, which would add some depth to the synth’s
sound. One could also add an ADSR to control the filter envelope, allowing the user an option to
switch between modes. If you do end up with something special, why not share it on the Cabbage
recipes forum!
EXAMPLE 10B04_cabbage_4.csd
<Cabbage>
form caption("Simple Synth") size(310, 310), \
colour(58, 110, 182), \
pluginID("def1")
keyboard bounds(12, 164, 281, 95)
rslider bounds(12, 14, 70, 70), \
channel("att"), \
range(0, 1, 0.01, 1, .01), \
text("Attack")
rslider bounds(82, 14, 70, 70), \
channel("dec"), \
range(0, 1, 0.5, 1, .01), \
text("Decay")
rslider bounds(152, 14, 70, 70), \
channel("sus"), \
range(0, 1, 0.5, 1, .01), \
text("Sustain")
rslider bounds(222, 14, 70, 70), \
channel("rel"), \
range(0, 1, 0.7, 1, .01), \
text("Release")
rslider bounds(12, 84, 70, 70), \
channel("cutoff"), \
range(0, 22000, 2000, .5, .01), \
text("Cut-Off")
rslider bounds(82, 84, 70, 70), \
channel("res"), \
range(0, 1, 0.7, 1, .01), \
text("Resonance")
rslider bounds(152, 84, 70, 70), \
channel("LFOFreq"), \
range(0, 10, 0, 1, .01), \
text("LFO Freq")
</CsInstruments>
<CsScore>
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>
After you have named the new effect, Cabbage will generate a very simple instrument that takes an
incoming stream of audio and outputs it directly, without any modification or further processing. In
order to do some processing we can add some Csound code to the instrument. The code presented
below is for a simple reverb unit. We assign the incoming sample data to two variables, i.e. aInL
and aInR. We then process the incoming signal through the reverbsc opcode. Some GUI widgets
have also been added to provide users with access to various parameters. See the previous section
on creating your first synth if you are not sure about how to add GUI widgets.
Example
EXAMPLE 10B05_cabbage_5.csd
<Cabbage>
form size(280, 160), \
caption("Simple Reverb"), \
pluginID("plu1")
groupbox bounds(20, 12, 233, 112), text("groupbox")
rslider bounds(32, 40, 68, 70), \
channel("size"), \
range(0, 1, .2, 1, 0.001), \
text("Size"), \
colour(2, 132, 0, 255),
rslider bounds(102, 40, 68, 70), \
channel("fco"), \
range(1, 22000, 10000, 1, 0.001), \
text("Cut-Off"), \
colour(2, 132, 0, 255),
rslider bounds(172, 40, 68, 70), \
channel("gain"), \
range(0, 1, .5, 1, 0.001), \
text("Gain"), \
colour(2, 132, 0, 255),
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs=1
instr 1
kFdBack chnget "size"
kFco chnget "fco"
kGain chnget "gain"
aInL inch 1
aInR inch 2
aOutL, aOutR reverbsc aInL, aInR, kFdBack, kFco
outs aOutL*kGain, aOutR*kGain
endin
</CsInstruments>
<CsScore>
f1 0 1024 10 1
i1 0 z
</CsScore>
</CsoundSynthesizer>
The above instrument uses three rsliders to control the reverb size (feedback level), the cut-off fre-
quency, and the overall gain. The range() identifier is used with each slider to specify the minimum,
maximum and starting value of the slider.
If you compare the two score sections in the synth and effect instruments, you’ll notice that the
synth instrument doesn’t use any i-statement. Instead it uses f0 z, which tells Csound to wait
for incoming events until the user stops it. Because the instrument is to be controlled via MIDI we
don’t need to use an i-statement in the score. In the second example we use an i-statement with
a long duration so that the instrument runs without stopping for a long time.
Learning More
To learn more about Cabbage, please visit the Cabbage website. There you will find links to more
tutorials, video links, and the user forum.
10 C. BLUE
General Overview
Blue is a graphical computer music environment for composition and a versatile front-end to Csound.
It is written in Java, is platform-independent, and uses Csound as its audio engine. It provides higher-
level abstractions such as a graphical timeline for composition, GUI-based instruments, score-gen-
erating SoundObjects like PianoRolls, Python scripting, Cmask, Jmask and more. It is available at:
http://blue.kunstmusik.com
In several places you will find lists and trees: all of the instruments used in a composition are
numbered, named and listed in the Orchestra window. You will find the same for UDOs (User
Defined Opcodes). From these lists you may export instruments and UDOs from the piece to a
library, or import them from a library into the piece. You may also bind several UDOs to a particular
instrument and export this instrument along with the UDOs it needs.
Editor
Blue provides several windows where you can enter code in an editor-like view. These editor-like
windows are found, for example, in the Orchestra window, the window for entering global score,
and the Tables window which collects all the function tables. There you may type in, import or
paste text-based information, which is displayed with syntax highlighting of Csound code.
The Score timeline allows for visual organization of all the SoundObjects used in a composition.
In the score window, which is the main graphical window that represents the composition, you may
arrange the composition by arranging the various SoundObjects on the timeline. A SoundObject
is an object that holds or even generates a certain amount of score events; SoundObjects are
the building blocks within Blue’s score timeline. SoundObjects can be lists of notes, algorithmic
generators, Python script code, Csound instrument definitions, PianoRolls, Pattern Editors, Tracker
interfaces, and more. These SoundObjects may be text-based or GUI-based, depending on their
facilities and purposes.
Figure 60.2: Timeline holding several Sound Objects, one selected and opened in the SoundObject
editor window
SoundObjects
To enable every kind of music production style and thus every kind of electronic music, blue holds
a set of different SoundObjects. SoundObjects in blue can represent many things, whether it is
a single sound, a melody, a rhythm, a phrase, a section involving phrases and multiple lines, a
gesture, or anything else that is a perceived sound idea.
Just as there are many ways to think about music, each with their own model for describing
sound and vocabulary for explaining music, there are a number of different SoundObjects in blue.
Each SoundObject in blue is useful for different purposes, with some being more appropriate
for expressing certain musical ideas than others. For example, using a scripting object like the
PythonObject or RhinoObject would serve a user who is trying to express a musical idea that may
require an algorithmic basis, while the PianoRoll would be useful for those interested in notating
melodic and harmonic ideas. The variety of different SoundObjects allows for users to choose
what tool will be the most appropriate to express their musical ideas.
Since there are many ways to express musical ideas, to fully allow the range of expression that
Csound offers, Blue’s SoundObjects are capable of generating different things that Csound will
use. Although most often they are used for generating Csound SCO text, SoundObjects may also
generate ftables, instruments, user-defined opcodes, and everything else that would be needed to
express a musical idea in Csound.
Modifying a SoundObject
First, you may set the start time and duration of every SoundObject by hand by typing in precise
numbers or drag it more intuitively back and fourth on the timeline. This modifies the position in
time of a SoundObject, while stretching it modifies the outer boundaries of it and may even change
the density of events it generates inside.
If you want to enter information into a SoundObject, you can open and edit it in a SoundObject
editor-window. But there is also a way to modify the “output” of a SoundObject, without having to
change its content. The way to do this is using NoteProcessors.
By using NoteProcessors, several operations may be applied to the parameters of a SoundOb-
ject. NoteProcessors allow for modifying a SoundObject’s score results, e.g. adding 2 to all p4
values, multiplying all p5 values by 6, etc. These NoteProcessors can be chained together to ma-
nipulate and modify objects to achieve things like transposition, serial processing of scores, and
more.
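The effect of such a NoteProcessor can be pictured on a small score fragment (values invented for illustration):

```csound
; score generated by a SoundObject:
i1 0 1 60 0.5
i1 1 1 64 0.5
; the same score after an "add 2 to p4" NoteProcessor (a transposition):
i1 0 1 62 0.5
i1 1 1 66 0.5
```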
Finally, SoundObjects may be grouped together and organized in a larger-scale hierarchy by com-
bining them into PolyObjects. PolyObjects are objects which hold other SoundObjects and have
timelines in themselves. Working within them on their own timelines, and outside of them on the
parent timeline, helps organize and understand the concepts of objective time and relative time
between different objects.
BlueSynthBuilder (BSB)-Instruments
The BlueSynthBuilder (BSB)-Instruments and the BlueEffects work like conventional Csound in-
struments, but there is an additional opportunity to add and design a GUI that may contain sliders,
knobs, textfields, pull-down menus and more. You may convert any conventional Csound instru-
ment automatically to a BSB-Instrument and then add and design a GUI.
Blue Mixer
Blue’s graphical mixer system allows signals generated by instruments to be mixed together and
further processed by Blue Effects. The GUI follows a paradigm commonly found in music se-
quencers and digital audio workstations.
The mixer UI is divided into channels, sub-channels, and the master channel. Each channel has a
fader for applying level adjustments to the channel’s signal, as well as bins pre- and post-fader for
adding effects. Effects can be created on the mixer, or added from the Effects Library.
Users can modify the values of widgets by manipulating them in real-time, but they can also draw
automation curves to compose value changes over time.
Automation
For BSB-Instruments, the blueMixer and blueEffects, it is possible to use lines and graphs within
the score timeline to enter and edit parameters. In Blue, most widgets in BlueSynthBuilder and
Effects can have automation enabled; faders in the Mixer can also be automated.
Editing automation is done in the Score timeline. This is done by first selecting a parameter for
automation from the SoundLayer’s “A” button’s popup menu, then selecting the Single Line mode
in the Score for editing individual line values.
Using Multi-Line mode in the Score allows the user to select blocks of SoundObjects and automa-
tions and move them as a whole to other parts of the Score.
Thus the parameters of these instruments with a GUI may be automated and controlled via an
editable graph in the Score window.
Libraries
blue also features libraries for instruments, SoundObjects, UDOs, Effects (for the blueMixer) and
the CodeRepository for code snippets. All these libraries are organized as lists or trees. Items of
a library may be imported into the current composition, or exported from it to be used later in other
pieces.
The SoundObject library allows for instantiating multiple copies of a SoundObject, which makes it
possible to edit the original object and update all copies at once. If NoteProcessors are applied to
the instances in the composition, representing the general structure of the composition, you may
edit the content of a SoundObject in the library while the structure of the composition remains
unchanged. That way you may work on a SoundObject while all occurrences of that very SoundOb-
ject in the composition are updated automatically according to the changes made in the library.
The Orchestra manager organizes instruments and functions as an instrument librarian. There is
also an Effects Library and a Library for the UDOs.
Other Features
ä blueLive - work with SoundObjects in realtime to experiment with musical ideas or perfor-
mance.
ä SoundObject freezing - frees up CPU cycles by pre-rendering SoundObjects
ä Microtonal support using scales defined in the Scala scale format, including a microtonal
PianoRoll, Tracker, NoteProcessors, and more.
10 D. WINXOUND
It seems that WinXound is not developed any more, but it might still be useful to run older Csound
versions; so we keep this chapter for now.
WinXound is a free and open-source front-end GUI editor for Csound 6, CsoundAV, and CsoundAC,
with Python and Lua support, developed by Stefano Bonetti. It runs on Microsoft Windows, Apple
Mac OSX and Linux. WinXound is optimized to work with the Csound 6 compiler.
10 E. CSOUND VIA TERMINAL
Whilst many of us now interact with Csound through one of its many front-ends, which provide
us with an experience more akin to that of mainstream software, newcomers to Csound should
bear in mind that there was a time when the only way of running Csound was from the command line
using the Csound command. In fact we still run Csound in this way, but front-ends do it
for us, usually via some toolbar button or widget. Many people still prefer to interact with Csound
from a terminal window and feel this provides a more “naked” and honest interfacing with the
program. Very often these people come from the group of users who have been using Csound for
many years, from the time before front-ends. It is still important for all users to be aware of how to
run Csound from the terminal, as it provides a useful backup if problems develop with a preferred
front-end.
Executing csound with no additional arguments will run the program, but after a variety of con-
figuration information is printed to the terminal we will be informed that we provided “insufficient
arguments” for Csound to do anything useful. Running it this way is still useful for testing whether
Csound is installed and configured for terminal use, for checking what version is installed, and for
finding out what performance flags are available without having to refer to the manual.
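For instance, assuming the csound binary is on your PATH (a sketch of a typical terminal session, not from the original text):

```shell
# Running with no arguments prints configuration information,
# the usage/flag summary, and the "insufficient arguments" message:
csound
# Most builds also accept --version to print version details and exit:
csound --version
```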
Performance flags are controls that can be used to define how Csound will run. All of these flags
have defaults, but we can explicitly use flags and change these defaults to do useful things
like controlling the amount of information that Csound displays while running, activating a
MIDI device for input, or altering buffer sizes for fine-tuning realtime audio performance. Even if
you are using a front-end, command line flags can be manipulated in a familiar format, usually in
a settings or preferences menu. Adding flags there will have the same effect as adding them as part
of the Csound command. To learn more about Csound’s command line flags it is best to start on
the page in the reference manual where they are listed and described by category.
Command line flags can also be defined within the <CsOptions> … </CsOptions> part of a .csd file
and also in a file called .csoundrc, which can be located in the Csound home program directory
and/or in the current working directory. Having all these different options for where essentially the
same information is stored might seem excessive, but it is really just to allow flexibility in how users
can make changes to how Csound runs, depending on the situation and in the most efficient way
possible. This does however bring up one issue: if a particular command line flag has
been set in two different places, how does Csound know which one to choose? There is an order
of precedence that allows us to find out.
Beginning from its own defaults, the first place Csound looks for additional flag options is the
.csoundrc file in Csound’s home directory, the next is a .csoundrc file in the current working direc-
tory (if it exists), the next is the <CsOptions> of the .csd, and finally the Csound command itself.
Flags that are read later in this list overwrite earlier ones. Where flags have been set within
a front-end’s options, these will normally overwrite any previous instructions for that flag, as they
form part of the Csound command. Often a front-end will incorporate a check-box for disabling
its own inclusion of flags (without actually having to delete them from the dialogue window).
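The precedence order just described can be sketched in a few lines of Python. This is only an illustration of the merge order; the flag names and values below are made up and are not read from real files.

```python
# Sketch of Csound's flag precedence: later sources overwrite earlier ones.
defaults = {"-o": "test.wav", "-d": False}   # Csound's own defaults
csoundrc_home = {"-o": "home.wav"}           # .csoundrc in Csound's home directory
csoundrc_cwd = {}                            # .csoundrc in the working directory
cs_options = {"-o": "dac"}                   # <CsOptions> of the .csd
command_line = {"-d": True}                  # the Csound command itself

effective = {}
for source in (defaults, csoundrc_home, csoundrc_cwd, cs_options, command_line):
    effective.update(source)                 # each later source wins

# effective is now {"-o": "dac", "-d": True}
```

Here the `-o` flag set in `<CsOptions>` wins over both .csoundrc files, and the `-d` flag from the command itself wins over the default.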
After the command line flags (if any) have been declared in the Csound command, we provide the
name(s) of our input file(s). Originally this would have been the orchestra (.orc) and score (.sco) file,
but this arrangement has now all but been replaced by the more recently introduced .csd (unified
orchestra and score) file. The facility to use a separate orchestra and score file remains, however.
For example:
csound -d -W -o soundoutput.wav inputfile.csd
will run Csound and render the input .csd inputfile.csd as a wav file (-W flag) to the file
soundoutput.wav (-o flag). Additionally, displays will be suppressed, as dictated by the -d flag. The input .csd
file will need to be in the current working directory as no full path has been provided. The output
file will be written to the current working directory, or to SFDIR if specified.
10 F. WEB BASED CSOUND
Using Csound in a browser via JavaScript is currently one of the most exciting developments.
This chapter will be rewritten soon. For now, have a look at these introductions and tutorials:
1. Victor Lazzarini has written an extensive tutorial as Vanilla Guide to Webaudio Csound
2. Steven Yi and Hlöðver Sigurðsson gave an extended tutorial at the ICSC 2022: Csound
on the Web
3. Rory Walsh has contributed a tutorial about his p5.csound, a wrapper to use Csound inside
p5.js sketches (the JavaScript version of Processing): https://rorywalsh.github.io/p5.Csound/#/
11 A. ANALYSIS
Csound comes bundled with a variety of additional utility applications. These are small programs
that perform a single function, very often on a sound file, that might be useful just before or
just after working with the main Csound program. Originally these were programs that were run
from the command line, but many of Csound’s front-ends now offer direct access to many of these
utilities through their own utilities menus. It is still useful to have access to these programs via
the command line though, if all else fails.
The standard syntax for using these programs from the command line is to type the name of the
utility, followed optionally by one or more command line flags which control various
options of the program (all of these have usable defaults), and finally the name of
the sound file upon which the utility will operate.
utility_name [flag(s)] [file_name(s)]
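The same invocation pattern can of course be scripted. The helper below is a hypothetical Python sketch (run_utility is our own name, not part of Csound) that shells out to any utility following the syntax above, assuming the utilities are installed and on the PATH.

```python
# Hypothetical helper: invoke a Csound utility as "utility_name [flags] [files]".
import subprocess

def run_utility(name, flags=None, files=None):
    cmd = [name] + (flags or []) + (files or [])
    # capture stdout/stderr as text so the output can be inspected or parsed
    return subprocess.run(cmd, capture_output=True, text=True)

# e.g. run_utility("sndinfo", files=["fox.wav"]) would print info about fox.wav
```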
If we require some help or information about a utility and don’t want to be bothered hunting through
the Csound Manual, we can just type the utility’s name with no additional arguments, hit enter,
and the command line response will give us some information about that utility and what command
line flags it offers. We can also run the utility through Csound (perhaps useful if there are
problems running the utility directly) by calling Csound with the -U flag. The -U flag instructs
Csound to run the utility and to interpret subsequent flags as those of the utility and not its own.
csound -U utility_name [flag(s)] [file_name(s)]
Analysis Utilities
Although many of Csound’s opcodes already operate upon commonly encountered sound file formats
such as wav and aiff, a number of them require sound information in more specialised and
pre-analysed formats, and for this Csound provides the sound analysis utilities atsa, cvanal, hetro,
lpanal and pvanal.
We will explain in the following paragraphs the background and usage of these five different sound
analysis utilities.
atsa
Chapter 05 K gives some background about the Analysis-Transformation-Synthesis (ATS) method
of spectral resynthesis. It requires the preceding analysis of a sound file. This is the job of the
atsa utility:
atsa [flags] infilename outfilename
where infilename is the sound file to be analyzed, and outfilename is the .ats file which is written
as a result by the atsa utility.
The default values of the various flags are reasonable for a first try. For a
refinement of the analysis, the atsa manual page provides all necessary information.
cvanal
The cvanal utility analyses an impulse response for usage in the old convolve opcode. Nowadays,
convolution in Csound is mostly done with other opcodes, which are described in the Convolution
chapter of this book. More information about the cvanal utility can be found here in the Csound
Manual.
hetro
The heterodyne filter analysis can be understood as one way of applying the Fourier Transform.1
It attempts to reconstruct a number of partial tracks in a time-breakpoint manner. The breakpoints
are measured in milliseconds. Although this utility was originally designed for use with the adsyn
opcode, it can be used to get data from any harmonic sound for additive synthesis.
1 Cf. Curtis Roads, The Computer Music Tutorial, Cambridge MA: MIT Press 1996, 548-549; James Beauchamp, Analysis,
Synthesis and Perception of Musical Sounds, New York: Springer 2007, 5-12
But the adjustment of some flags is crucial here, depending on the desired usage of the analysis:
- -f begfreq: This is the estimated frequency of the fundamental. The default is 100 Hz, but
it should be adjusted as closely as possible to the real fundamental frequency of the input
sound.
- -h partials: This is the number of partials the utility will analyze and write to the output file.
The default number of 10 is quite low and will usually result in a dull sound in the resynthesis.
- -n brkpts: This is the number of breakpoints for the analysis. These breakpoints are initially
evenly spread over the duration, and then reduced and adjusted by the algorithm. The default
number of 256 is reasonable for most usage, but can be massively reduced for some sounds
and usages.
- -m minamp: The hetro utility uses the old Csound amplitude convention where 0 dB is set to
32767. This has to be considered in this option, which sets a minimal amplitude below which
a partial is considered dormant. The default 64 corresponds to -54 dB;
other common values are 128 (-48 dB), 32 (-60 dB) or 0 (no thresholding).
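The dB figures quoted for -m can be checked with plain decibel arithmetic relative to the old 0 dB = 32767 convention. Nothing here is Csound-specific; amp_to_db is just a helper for the check.

```python
# Verify the quoted -m values against the 0 dB = 32767 convention.
import math

def amp_to_db(amp, full_scale=32767):
    """Amplitude ratio expressed in decibels relative to full scale."""
    return 20 * math.log10(amp / full_scale)

print(round(amp_to_db(64)))    # -54
print(round(amp_to_db(128)))   # -48
print(round(amp_to_db(32)))    # -60
```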
The file BratscheMono.het starts with HETRO 10 as its first line, showing that 10 partial track data
will follow. The amplitude data lines begin with -1, the frequency data lines begin with -2. Here are
the start and end of the first two lines, slightly formatted to show the breakpoints:
-1, 0,0, 815,3409, 1631,11614, 2447,12857, ... , 7343,0, 32767
-2, 0,220, 815,217, 1631,218, 2447,219, ... , 7343,217, 32767
After the starting -1 or -2, the time-value pairs are written. Here we have at 0 ms an amplitude of 0
and a frequency of 220. At 815 ms we have an amplitude of 3409 and a frequency of 217. At 7343 ms,
near the end of this file, we have an amplitude of 0 and a frequency of 217, followed in both cases by
32767 (as an additional line-ending signifier).
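A line in this layout can be parsed with a few lines of Python. The helper below is hypothetical and relies only on the comma-separated layout just described (track marker, time-value pairs, 32767 terminator); the elided middle pairs are left out here.

```python
def parse_het_line(line):
    """Parse one hetro track line: leading -1 marks amplitude data,
    -2 marks frequency data; then (time_ms, value) pairs, terminated by 32767."""
    fields = [int(x) for x in line.replace(" ", "").split(",")]
    kind = "amp" if fields[0] == -1 else "freq"
    body = fields[1:-1]                          # drop track marker and terminator
    return kind, list(zip(body[0::2], body[1::2]))

kind, pairs = parse_het_line("-1, 0,0, 815,3409, 1631,11614, 2447,12857, 7343,0, 32767")
# kind == "amp"
# pairs == [(0, 0), (815, 3409), (1631, 11614), (2447, 12857), (7343, 0)]
```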
lpanal
Linear Predictive Coding has been developed for the analysis and resynthesis of speech.2 The
lpanal utility performs the analysis, which will then be used by the LPC Resynthesis Opcodes. The
defaults can be seen in the following screenshot:
2 Cf. Curtis Roads, The Computer Music Tutorial, Cambridge MA: MIT Press 1996, 200-210
It should be mentioned that in 2020 Victor Lazzarini wrote a set of opcodes which apply real-
time (streaming) linear prediction analysis. The complement of the old lpanal utility is the lpcanal
opcode.
pvanal
The pvanal utility performs a Short-Time Fourier Transform over a sound file. It produces a .pvx
file which can be used by the old pv-opcodes. Nowadays the pvs-opcodes are mostly in use; see
chapter 05 I of this book. Nevertheless, the pvanal utility provides a simple option to perform an FFT
and write the result to a file.
The main parameters are few; the defaults can be seen here:
The binary data of a .pvx file can be converted into a text file via the pvlook utility.
11 B. FILE INFO AND CONVERSION
sndinfo
The utility sndinfo (sound information) provides the user with some information about one or more
sound files. sndinfo is invoked and provided with a file name:
sndinfo ../SourceMaterials/fox.wav
If you are unsure of the file address of your sound file you can always just drag and drop it into the
terminal window. The output should be something like:
util sndinfo:
../SourceMaterials/fox.wav:
srate 44100, monaural, 16 bit WAV, 2.757 seconds
(121569 sample frames)
sndinfo will accept a list of file names and provide information on all of them in one go, so it may
prove more efficient than gleaning the same information from a GUI-based sample editor. We also have
the advantage of being able to copy and paste from the terminal window into a .csd file.
het_import / het_export
The utilities het_import and het_export are marked as deprecated because the files generated by
hetro are text files nowadays.
pvlook
The pvlook utility shows the content of STFT analysis files created with pvanal. The invocation is:
pvlook [flags] infilename
As these files contain a large amount of information, the flags offer some options to select a
range of bins and frames:
- -bb and -eb set the first and last bin number for the output (defaulting to the lowest and
highest bin)
- -bf and -ef set the first and last analysis frame to be printed (defaulting to the first and
last frame).
If we want to look at the fifth bin only, in the frames 100-110 of the file fox.pvx, we run:
pvlook -bb 5 -eb 5 -bf 100 -ef 110 fox.pvx
Bin 5 Freqs.
131.728 134.213 135.257 133.603 133.640 131.737 135.581 147.809 176.199
211.347 149.678
Bin 5 Amps.
0.018 0.020 0.020 0.019 0.018 0.017 0.020 0.016 0.002 0.011 0.010
pv_export / pv_import
Another method of transforming a .pvx analysis file created by pvanal is the pv_export
utility. It converts the binary file to a text file. After some general information about the source file
in the header, each line contains the amp-freq pairs of the bins.
The text file can be re-converted to a binary .pvx file with the pv_import utility.
sdif2ad
The hetro utility will create an sdif file if the extension .sdif is given for the outfile. This file can be
converted by the sdif2ad utility to a file which can be used by the adsyn opcode.
src_conv
Sample rate conversion is an everyday situation in electronic music production. The src_conv
utility is based on Erik de Castro Lopo’s libsamplerate. It offers five quality levels, where 1 is the
worst and 5 the best. The general syntax is:
src_conv [flags] infile
To convert the sample rate of fox.wav in best quality to 48 kHz and write a 32 bit output file as
best_fox.wav, we run:
src_conv -r 48000 -o best_fox.wav -W -Q5 -f fox.wav
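The frame-count arithmetic behind such a conversion can be sketched in Python. This only illustrates the numbers involved (using the fox.wav figures reported by sndinfo above); the actual resampling is done by libsamplerate.

```python
# Frame-count arithmetic for a 44100 -> 48000 Hz conversion of fox.wav.
old_sr, new_sr = 44100, 48000
old_frames = 121569                      # sample frames, as reported by sndinfo
new_frames = round(old_frames * new_sr / old_sr)
duration = old_frames / old_sr           # the duration is preserved: ~2.757 s

print(new_frames)  # 132320
```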
11 C. MISCELLANEOUS
A final group gathers together various unsorted utilities: cs, csb64enc, envext, extractor, makecsd,
mixer, scale and mkdb.
The most interesting of these are perhaps extractor, which extracts a user-defined fragment of a
sound file and writes it to a new file; mixer, which mixes together any number of sound
files with gain control over each file; and scale, which scales the amplitude of an individual
sound file.
12 A. THE CSOUND API
Though it is written in C, the Csound API uses an object structure. This is achieved through an
opaque pointer representing a Csound instance. This opaque pointer is passed as the first argument
when an API function is called from the host program.
To use the Csound C API, you have to include csound.h in your source file and link your code
with libcsound64 (or libcsound if using the 32 bit version of the library). Here is an example of the
csound command line application written in C, using the Csound C API:
#include <csound/csound.h>

int main(int argc, char **argv)
{
  CSOUND *csound = csoundCreate(NULL);
  int result = csoundCompile(csound, argc, argv);
  if (result == 0) {
    result = csoundPerform(csound);
  }
  csoundDestroy(csound);
  return (result >= 0 ? 0 : result);
}
On a Linux system, using libcsound64 (the double version of the Csound library), supposing that all
include and library paths are set correctly, we would build the above example with the following
command (notice the use of the -DUSE_DOUBLE flag to signify that we compile against the double,
i.e. 64 bit float, version of the Csound library):
gcc -DUSE_DOUBLE -o csoundCommand csoundCommand.c -lcsound64
The command for building with a 32 bit version of the library would be:
gcc -o csoundCommand csoundCommand.c -lcsound
Within the C or C++ examples of this chapter, we will use the MYFLT type for the audio samples.
In this way, the same source files can be used with both versions of the library (32 bit float or
64 bit double), the compiler interpreting MYFLT as double if the macro USE_DOUBLE is defined,
or as float if the macro is not defined.
The C API has been wrapped in a C++ class for convenience. This gives the Csound basic C++
API. With this API, the above example would become:
#include <csound/csound.hpp>

int main(int argc, char **argv)
{
  Csound *cs = new Csound();
  int result = cs->Compile(argc, argv);
  if (result == 0) {
    result = cs->Perform();
  }
  delete cs;
  return (result >= 0 ? 0 : result);
}
Here, we get a pointer to a Csound object instead of the csound opaque pointer. We call methods
of this object instead of C functions, and we don’t need to call csoundDestroy() at the end of the
program, because the C++ object destruction mechanism takes care of this. On our Linux system,
the example would be built with the following command:
g++ -DUSE_DOUBLE -o csoundCommandCpp csoundCommand.cpp -lcsound64
Threading
Before we begin to look at how to control Csound in real time we need to look at threads. Threads
are used so that a program can split itself into two or more simultaneously running tasks. Multiple
threads can be executed in parallel on many computer systems. The advantage of running threads
is that you do not have to wait for one part of your software to finish executing before you start
another.
In order to control aspects of your instruments in real time you will need to employ the use of
threads. If you run the first example found on this page, you will see that the host will run for
as long as csoundPerform() returns 0. As soon as it returns non-zero it will exit the loop and
cause the application to quit. Once called, csoundPerform() will cause the program to hang
until it is finished. In order to interact with Csound while it is performing, you will need to call
csoundPerform() in a separate, unique thread.
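The same idea can be sketched in Python before diving into the C details below: the blocking perform loop runs in its own thread, leaving the main thread free for user interaction. The perform_loop function merely stands in for the csoundPerform()/csoundPerformKsmps() loop.

```python
# Sketch: run a blocking "perform" loop in a separate thread.
import threading
import time

status = {"running": True}

def perform_loop():
    # stands in for calling csoundPerformKsmps() until it returns non-zero
    while status["running"]:
        time.sleep(0.001)

t = threading.Thread(target=perform_loop)
t.start()
# ... the main thread stays free to interact with the user here ...
status["running"] = False   # signal the performance thread to stop
t.join()
```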
When implementing threads using the Csound API, we must define a special performance-thread
function. We then pass the name of this performance function to csoundCreateThread(), thus
registering our performance-thread function with Csound. When defining a Csound performance-
thread routine you must declare it to have a return type of uintptr_t, hence it will need to return a value
when called. The thread function will take only one parameter, a pointer to void. This pointer to void
is quite important as it allows us to pass important data from the main thread to the performance
thread. As several variables are needed in our thread function the best approach is to create a
user defined data structure that will hold all the information your performance thread will need.
For example:
typedef struct {
  int result;        /* result of csoundCompile() */
  CSOUND *csound;    /* instance of csound */
  bool PERF_STATUS;  /* performance status */
} userData;
Below is a basic performance-thread routine. *data is cast to a userData pointer so that we
can access its members.
uintptr_t csThread(void *data)
{
  userData *udata = (userData *)data;
  if (!udata->result) {
    while ((csoundPerformKsmps(udata->csound) == 0) &&
           (udata->PERF_STATUS == 1));
    csoundDestroy(udata->csound);
  }
  udata->PERF_STATUS = 0;
  return 1;
}
In order to start this thread we must call the csoundCreateThread() API function which is
declared in csound.h as:
void *csoundCreateThread(uintptr_t (*threadRoutine)(void *),
                         void *userdata);
If you are building a command line program you will need to use some kind of mechanism to
prevent int main() from returning until after the performance has taken place. A simple while
loop will suffice.
The first example presented above can now be rewritten to include a unique performance thread:
#include <stdio.h>
#include <csound/csound.h>
typedef struct {
int result;
CSOUND *csound;
int PERF_STATUS;
} userData;
if (!ud->result) {
ud->PERF_STATUS = 1;
ThreadID = csoundCreateThread(csThread, (void *)ud);
}
else {
return 1;
}
The application above might not appear all that interesting. In fact it is almost exactly the same as
the first example presented, except that users can now stop Csound by hitting ‘enter’. The real
worth of threads can only be appreciated when you start to control your instrument in real time.
Channel I/O
The big advantage of using the API is that it allows a host to control your Csound instruments in
real time. There are several mechanisms provided by the API that allow us to do this. The simplest
mechanism makes use of a ‘software bus’.
The term bus is usually used to describe a means of communication between hardware compo-
nents. Buses are used in mixing consoles to route signals out of the mixing desk into external de-
vices. Signals get sent through the sends and are taken back into the console through the returns.
The same thing happens in a software bus, only instead of sending analog signals to different
hardware devices we send data to and from different software.
Using one of the software bus opcodes in Csound we can provide an interface for communication
with a host application. An example of one such opcode is chnget. The chnget opcode reads
data that is being sent from a host Csound API application on a particular named channel, and
assigns it to an output variable. In the following example instrument 1 retrieves any data the host
may be sending on a channel named “pitch”:
instr 1
kfreq chnget "pitch"
asig oscil 10000, kfreq, 1
out asig
endin
One way in which data can be sent from a host application to an instance of Csound is through
the use of the csoundGetChannelPtr() API function which is defined in csound.h as:
int csoundGetChannelPtr(CSOUND *, MYFLT **p, const char *name, int type);
csoundGetChannelPtr() stores a pointer to the specified channel of the bus in p. The channel
pointer p is of type MYFLT *. The argument name is the name of the channel and the argument
type is a bitwise OR of exactly one of the following values:
CSOUND_CONTROL_CHANNEL /* control data (one MYFLT value) */
CSOUND_AUDIO_CHANNEL /* audio data (ksmps MYFLT values) */
CSOUND_STRING_CHANNEL /* string data */
combined with at least one of CSOUND_INPUT_CHANNEL and CSOUND_OUTPUT_CHANNEL to
set the direction of the channel.
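Sketched in Python, composing the type argument is a plain bitmask combination. The numeric values below follow csound.h to the best of our knowledge, but treat them as assumptions and prefer the symbolic names in real code.

```python
# Channel type constants (values assumed to match csound.h).
CSOUND_CONTROL_CHANNEL = 1
CSOUND_AUDIO_CHANNEL   = 2
CSOUND_STRING_CHANNEL  = 3
CSOUND_INPUT_CHANNEL   = 16
CSOUND_OUTPUT_CHANNEL  = 32

# A control channel that the host writes and Csound reads:
chan_type = CSOUND_CONTROL_CHANNEL | CSOUND_INPUT_CHANNEL   # == 17
```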
If the call to csoundGetChannelPtr() is successful the function will return zero. If not, it will
return a negative error code. We can now modify our previous code in order to send data from our
application on a named software bus to an instance of Csound using csoundGetChannelPtr().
#include <stdio.h>
#include <stdlib.h>
#include <csound/csound.h>

/* userData struct as defined earlier */
uintptr_t csThread(void *data);  /* performance thread, defined below */

/*-----------------------------------------------------------
 * main function
 *-----------------------------------------------------------*/
int main(int argc, char *argv[])
{
int userInput = 200;
void *ThreadID;
userData *ud;
ud = (userData *)malloc(sizeof(userData));
MYFLT *pvalue;
ud->csound = csoundCreate(NULL);
ud->result = csoundCompile(ud->csound, argc, argv);
if (csoundGetChannelPtr(ud->csound, &pvalue, "pitch",
CSOUND_INPUT_CHANNEL | CSOUND_CONTROL_CHANNEL) != 0) {
printf("csoundGetChannelPtr could not get the \"pitch\" channel");
return 1;
}
if (!ud->result) {
ud->PERF_STATUS = 1;
ThreadID = csoundCreateThread(csThread, (void*)ud);
}
else {
printf("csoundCompile returned an error");
return 1;
}
printf("\nEnter a pitch in Hz(0 to Exit) and type return\n");
while (userInput != 0) {
*pvalue = (MYFLT)userInput;
scanf("%d", &userInput);
}
ud->PERF_STATUS = 0;
csoundDestroy(ud->csound);
free(ud);
return 0;
}
/*-----------------------------------------------------------
* definition of our performance thread function
*-----------------------------------------------------------*/
uintptr_t csThread(void *data)
{
  userData *udata = (userData *)data;
  if (!udata->result) {
    while ((csoundPerformKsmps(udata->csound) == 0) &&
           (udata->PERF_STATUS == 1));
    csoundDestroy(udata->csound);
  }
  udata->PERF_STATUS = 0;
  return 1;
}
There are several ways of sending data to and from Csound through software buses. They are
divided into two categories: numbered channels and named channels.
The opcodes concerned are chani, chano, chnget and chnset. When using numbered channels
with chani and chano, the API sees those channels as named channels, the name being derived
from the channel number (i.e. 1 gives “1”, 17 gives “17”, etc).
There is also a helper function returning the data size of a named channel:
int csoundGetChannelDatasize(CSOUND *csound, const char *name);
int csoundSetControlChannelHints(
CSOUND *csound, const char *name, controlChannelHints_t hints
);
int csoundGetControlChannelHints(
CSOUND *csound, const char *name, controlChannelHints_t *hints
);
int csoundRegisterKeyboardCallback(
CSOUND *csound,
int (*func)(void *userData, void *p, unsigned int type),
void *userData, unsigned int type
);
/* these replace csoundSetCallback() and csoundRemoveCallback() */
void csoundRemoveKeyboardCallback(
CSOUND *csound,
int (*func)(void *, void *, unsigned int)
);
Score Events
Adding score events to the Csound instance is easy to do. It requires that Csound is already running
in its own thread (see the paragraph above on threading). To enter a score event into Csound, one
calls csoundInputMessage(), here wrapped in a small helper function:
void myInputMessageFunction(void *data, const char *message)
{
userData *udata = (userData *)data;
csoundInputMessage(udata->csound, message );
}
Now we can call that function to insert score events into a running Csound instance. The
formatting of the message should be the same as one would normally have in the score part of the
.csd file. The example shows the format for the message. Note that if you are allowing Csound to
print its error messages, it will warn you if you send a malformed message, which is good for
debugging. There is an example with the Csound source code that allows you to type in a message,
which it will then send.
/* instrNum start duration p4 p5 p6 ... pN */
const char *message = "i1 0 1 0.5 0.3 0.1";
myInputMessageFunction((void*)udata, message);
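Since the message is just a string in score syntax, it can be assembled programmatically. The helper below is a hypothetical Python sketch (score_event is our own name) that builds an event string in the "i instrNum start duration p4 ... pN" format shown above.

```python
# Hypothetical helper: build an "i" score event string.
def score_event(instr, start, dur, *pfields):
    parts = [f"i{instr}", str(start), str(dur)] + [str(p) for p in pfields]
    return " ".join(parts)

print(score_event(1, 0, 1, 0.5, 0.3, 0.1))  # i1 0 1 0.5 0.3 0.1
```

The resulting string could then be passed to csoundInputMessage() (e.g. via ctcsound) exactly like the hand-written message in the C example.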
Callbacks
Csound can call subroutines declared in the host program when certain special events occur. This
is done through the callback mechanism. One has to declare to Csound the existence of a callback
routine using an API setter function. Then, when a corresponding event occurs during performance,
Csound will call the host callback routine, possibly passing some arguments to it.
The example below shows a very simple command line application allowing the user to rewind the
score or to abort the performance. This is achieved by reading characters from the keyboard: ‘r’
for rewind and ‘q’ for quit. During performance, Csound executes a loop. Each pass in the loop
yields ksmps audio frames. Using the API csoundSetYieldCallback() function, we can tell
Csound to call our own routine after each pass in its internal loop.
The yieldCallback routine must be non-blocking. That is why it is a bit tricky to force the C getc
function to be non-blocking. To enter a character, you have to type the character and then hit the
return key.
#include <csound/csound.h>
#include <stdio.h>   /* for fileno() */
#include <fcntl.h>   /* for fcntl() and F_GETFL */

/* fragment: making stdin non-blocking in the host program */
fd = fileno(stdin);
oldstat = fcntl(fd, F_GETFL, dummy);
The user can also set callback routines for file open events, real-time audio events, real-time MIDI
events, message events, keyboard events, graph events, and channel invalue and outvalue events.
CsoundPerformanceThread: A Swiss Knife for the API
The example below is equivalent to the example in the callback section. But this time, as the
characters are read in a different thread, there is no need to have a non-blocking character reading
routine.
#include <csound/csound.hpp>
#include <csound/csPerfThread.hpp>
#include <iostream>
using namespace std;
Because CsoundPerformanceThread is not part of the API, we have to link to libcsnd6 to get it
working:
g++ -DUSE_DOUBLE -o perfThread perfThread.cpp -lcsound64 -lcsnd6
When using this class from Python or Java, this is not an issue, because the ctcsound.py module
and the csnd6.jar package include the API functions and classes, as well as the
CsoundPerformanceThread class (see below).
Here is a more complete example which could be the base of a front-end application to run Csound.
The host application is modeled through the CsoundSession class, which has its own event loop
(mainLoop). CsoundSession inherits from the API Csound class and it embeds an object of
type CsoundPerformanceThread. Most of the CsoundPerformanceThread class methods
are used.
#include <csound/csound.hpp>
#include <csound/csPerfThread.hpp>
#include <iostream>
#include <string>
void startThread() {
if (Compile((char *)m_csd.c_str()) == 0) {
m_pt = new CsoundPerformanceThread(this);
m_pt->Play();
}
};
if (!m_csd.empty()) {
stopPerformance();
startThread();
}
};
void stopPerformance() {
if (m_pt) {
if (m_pt->GetStatus() == 0)
m_pt->Stop();
m_pt->Join();
m_pt = NULL;
}
Reset();
};
void mainLoop() {
string s;
bool loop = true;
while (loop) {
cout
<< endl
<< "l)oad csd; "
   "e(vent; "
   "r(ewind; "
   "t(oggle pause; "
   "s(top; "
   "p(lay; "
   "q(uit: ";
char c = cin.get();
switch (c) {
case 'l':
cout << "Enter the name of csd file:";
cin >> s;
resetSession(s);
break;
case 'e':
cout << "Enter a score event:";
cin.ignore(1000, '\n'); // a bit tricky, but well, this is C++!
getline(cin, s);
m_pt->InputMessage(s.c_str());
break;
case 'r':
RewindScore();
break;
case 't':
if (m_pt)
m_pt->TogglePause();
break;
case 's':
stopPerformance();
break;
case 'p':
resetSession("");
break;
case 'q':
if (m_pt) {
m_pt->Stop();
m_pt->Join();
}
loop = false;
break;
}
private:
string m_csd;
CsoundPerformanceThread *m_pt;
};
There are also methods in CsoundPerformanceThread for sending score events (ScoreEvent),
for moving the time pointer (SetScoreOffsetSeconds), for setting a callback function
(SetProcessCallback) to be called at the end of each pass in the process loop, and for
flushing the message queue (FlushMessageQueue).
As an exercise, the user should complete this example using the methods above and then try to
rewrite the example in Python and/or in Java (see below).
#include <iostream>
#include <string>
#include <vector>
using namespace std;
"endin\n";
void mainLoop() {
SetMessageCallback(noMessageCallback);
SetOutput((char *)"dac", NULL, NULL);
GetParams(&m_csParams);
m_csParams.sample_rate_override = 48000;
m_csParams.control_rate_override = 480;
m_csParams.e0dbfs_override = 1.0;
// Note that setParams is called before first compilation
SetParams(&m_csParams);
if (CompileOrc(orc1.c_str()) == 0) {
Start(this->GetCsound());
// Just to be sure...
cout << GetSr() << ", " << GetKr() << ", ";
cout << GetNchnls() << ", " << Get0dBFS() << endl;
string s;
TREE *tree;
bool loop = true;
while (loop) {
cout << endl << "1) 2) 3): orchestras, 4) 5) 6): scores; q(uit: ";
char c = cin.get();
cin.ignore(1, '\n');
switch (c) {
case '1':
tree = ParseOrc(m_orc[0].c_str());
CompileTree(tree);
DeleteTree(tree);
break;
case '2':
CompileOrc(m_orc[1].c_str());
break;
case '3':
EvalCode(m_orc[2].c_str());
break;
case '4':
ReadScore((char *)m_sco[0].c_str());
break;
case '5':
ReadScore((char *)m_sco[1].c_str());
break;
case '6':
ReadScore((char *)m_sco[2].c_str());
break;
case 'q':
if (m_pt) {
m_pt->Stop();
m_pt->Join();
}
loop = false;
break;
}
}
};
private:
CsoundPerformanceThread *m_pt;
CSOUND_PARAMS m_csParams;
vector<string> m_orc;
vector<string> m_sco;
};
Deprecated Functions
csoundQueryInterface()
csoundSetInputValueCallback()
csoundSetOutputValueCallback()
csoundSetChannelIOCallback()
csoundPerformKsmpsAbsolute()
Builtin Wrappers
The Csound API has also been wrapped for other languages. Usually Csound is built and distributed
including a wrapper for Python and a wrapper for Java.
To use the Python Csound API wrapper, you have to import the ctcsound module. The ctcsound
module is normally installed in the site-packages or dist-packages directory of your Python
distribution as a ctcsound.py file. Our csound command example becomes:
import sys
import ctcsound
cs = ctcsound.Csound()
result = cs.compile_(sys.argv)
if result == 0:
result = cs.perform()
cs.cleanup()
del cs
sys.exit(result)
We use a Csound object (remember Python has OOP features). Note the use of the sys.argv list
to get the program input arguments.
To use the Java Csound API wrapper, you have to import the csnd6 package. The csnd6 package is
located in the csnd6.jar archive, which has to be in your Java classpath. Our csound command
example becomes:
import csnd6.*;
Note the “dummy” string as first argument in the arguments list. C, C++ and Python expect the
first argument in a program’s argv input array to be implicitly the name of the calling program. This is not
the case in Java: the first location in the program’s argv input array contains the first command line
argument, if any. So we have to add this “dummy” string value in the first location of the arguments
array so that the C API function called by our csound.Compile method is happy. This illustrates a
fundamental point about the Csound API. Whichever API wrapper is used (C++, Python, Java, etc),
it is the C API which is working under the hood. So a thorough knowledge of the Csound C API is
highly recommended if you plan to use the Csound API in any of its different flavours.
On our Linux system, with csnd6.jar located in /usr/local/lib/, our Java program would be compiled
and run with the following commands:
javac -cp /usr/local/lib/csnd6.jar CsoundCommand.java
java -cp /usr/local/lib/csnd6.jar:. CsoundCommand
There is a drawback to using the Java wrappers: as they are built during the Csound build, the host
system on which Csound will be used must have the same version of Java as the one on the system
used to build Csound. The mechanism presented in the next section can solve this problem.
Foreign Function Interfaces

Python provides the ctypes module, which is used by the ctcsound.py module.
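Under the hood, ctcsound.py uses ctypes to load the Csound shared library and declare the argument and return types of each API function it calls. The mechanism itself can be sketched with the C math library standing in for csound64.so, so the example runs even without Csound installed:

```python
import ctypes
import ctypes.util

# Load a shared library (here libm, a stand-in for csound64.so) and
# declare the signature of one function, exactly as ctcsound.py does
# for every Csound API function it wraps.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.pow.restype = ctypes.c_double
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]

print(libm.pow(2.0, 10.0))  # 1024.0
```

Without the argtypes/restype declarations, ctypes would default to int conversions and silently corrupt floating-point values, which is why a wrapper like ctcsound.py must declare every signature.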
Lua offers the same functionality through the LuaJIT project. Here is a version of the csound
command using the LuaJIT FFI:
-- This is the wrapper part defining our LuaJIT interface to
-- the Csound API functions that we will use, and a helper function
-- called csoundCompile, which makes a pair of C argc, argv arguments from
-- the script input args and calls the API csoundCompile function
-- This wrapper could be written in a separate file and imported
csoundAPI = ffi.load("csound64.so")
The FFI package of the Google Go programming language is called cgo. Here is a version of the
csound command using cgo:
package main
/*
#cgo CFLAGS: -DUSE_DOUBLE=1
#cgo CFLAGS: -I /usr/local/include
#cgo linux CFLAGS: -DLINUX=1
#cgo LDFLAGS: -lcsound64
#include <csound/csound.h>
*/
import "C"
import (
"os"
"unsafe"
)
A complete wrapper to the Csound API written in Go is available at the Go-Csnd project on github.
The different examples in this section are written for Linux. For other operating systems, some
adaptations are needed: for example, for Windows the library name suffix is .dll instead of .so.
The advantage of FFI over builtin wrappers is that as long as the signatures of the functions in
the interface are the same as the ones in the API, it will work regardless of the version of the
foreign programming language used to write the host program. Moreover, one needs to include
in the interface only the functions actually used in the host program. However, a good
understanding of the C language's low-level features is needed to write the helper functions that
adapt the foreign language's data structures to the C pointer system.
References & Links
ctcsound Docs
Rory Walsh 2006, Developing standalone applications using the Csound Host API and wxWidgets,
Csound Journal Volume 1 Issue 4 - Summer 2006
Rory Walsh 2010, Developing Audio Software with the Csound Host API, The Audio Programming
Book, DVD Chapter 35, The MIT Press
François Pinot 2011, Real-time Coding Using the Python API: Score Events, Csound Journal Issue
14 - Winter 2011
12 B. PYTHON AND CSOUND
The connection between Csound and Python has a long history. As early as 2002, Maurizio
Umberto Puxeddu contributed the Python opcodes, which allow the execution of Python code
inside Csound. Because of Csound's commitment to backwards compatibility, this possibility to
run Python inside Csound will remain as long as the Python code can be executed.
With the Csound API, however, which has been explained in the previous chapter, a more flexible
and versatile communication between Python and Csound can be established. Now it is Csound
which runs inside Python. This Csound Python API was first generated by SWIG from Csound's
C API. This version was called csnd6.py. In 2015, François Pinot wrote a new version of the
Csound Python API. It is based on Python's ctypes, from which its name ctcsound (as ctypes
csound) originates. This version is better adapted to native Python code and has some useful
additional features, such as the integration into Jupyter Notebooks and the new implementation
of Andrés Cabrera's iCsound.
We will describe in the first part of this chapter some features of using Csound inside Python
via ctcsound. In the second part we will describe some use cases of the old Python Opcodes in
Csound. The possibility to use Python in the score section of a .csd file is described in chapter 14
A.
Installing
Install ctcsound.py
The file ctcsound.py is distributed with the Csound installer. This version must be used to avoid
incompatibilities between the installed Csound version and the ctcsound version. In case it cannot
be found, it can be installed from the Csound sources.
To make ctcsound.py work in Python, it must be copied to a directory which Python uses to load
external libraries. This folder is usually called site-packages. In case there is more than one
version of Python on your computer, make sure you copy it to the one which you use to launch
the Jupyter Notebooks. On macOS, for instance, when using Anaconda Python, copy ctcsound.py
from /Library/Frameworks/CsoundLib64.framework/Versions/6.0/Resources/Python/Current to
anaconda3/lib/python3.X/site-packages.
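If you are unsure where your interpreter's site-packages directory actually is, Python itself can report it; a small standard-library sketch:

```python
import site
import sysconfig

# Directories this interpreter searches for third-party modules;
# ctcsound.py must be copied into one of them.
print(sysconfig.get_paths()["purelib"])
print(site.getsitepackages())
```

Run this with the same Python you use to launch the Jupyter Notebooks, so that the reported paths belong to the right installation.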
Install csoundmagics
The csoundmagics offer some nice features for working with ctcsound in Jupyter Notebooks,
including syntax highlighting. They also contain the iCsound class. To install the csoundmagics,
first download the files at https://github.com/csound/ctcsound. They contain the
cookbook with a lot of Jupyter Notebook files. The files to be installed can be found in the
csoundmagics folder in the cookbook directory. A description of how to install them can be found
in the fifth example of the cookbook.
iCsound
The iCsound class is loaded as part of ctcsound with the same command we just used to check
the installation:
%load_ext csoundmagics
After this command has loaded all the libraries, we create an instance of iCsound:
cs = ICsound()
Usually we will get the message: Csound engine started at slot#: 1. Now we can write
some simple code and send it to this instance of Csound:
orc = """
instr 1
aOut poscil .2, 400
out aOut, aOut
endin
"""
cs.sendCode(orc)
cs.sendScore('i 1 0 -1')
Csound now runs and plays a sine tone. To turn off the instrument and delete this instance of
iCsound, we use:
cs.sendScore('i -1 0 1')
del cs
Some features
As a short survey of some csoundmagics and iCsound features, we start again with loading the
library and creating an instance:
%load_ext csoundmagics
cs = ICsound()
Now we can use the %%csound magics to communicate directly with the running csound instance:
%%csound
iSine1 ftgen 1, 0, 1024, 10, 1
iSine2 ftgen 2, 0, 1024, 10, 0, 1
The plotTable() method now displays both tables via an internal call to matplotlib:
cs.plotTable(1)
cs.plotTable(2)
If we want to see both tables in the same plot, we use the option reuse=True:
cs.plotTable(1)
cs.plotTable(2,reuse=True)
Generally speaking, a GUI can serve two functions. It can control Csound, for instance to start/stop
Csound, browse files or change control values. The second function is to display Csound values
in a GUI. We will give one simple example for each case.
The following code creates a GUI which lets the user browse for an audio file, start and stop
looped playback with a volume slider, and deletes the Csound instance on closing. Comments are
below.
import PySimpleGUI as sg
%load_ext csoundmagics
cs = ICsound()
orc = """
instr 1
Sfile chnget "file"
kVol chnget "vol"
aSound[] diskin Sfile, 1, 0, 1
kFadeOut linenr 1, .01, 1, .01
out aSound[0]*kFadeOut*ampdb(kVol), aSound[1]*kFadeOut*ampdb(kVol)
endin
"""
cs.sendCode(orc)
layout = [
[sg.Text('Select File, then Start/Stop')],
[sg.FileBrowse(key='FILE', enable_events=True),
sg.Button('Start'),
sg.Button('Stop')],
[sg.Slider(key='VOL',
range=(-20,6),
default_value=0,
orientation='h',
enable_events=True)]]
window = sg.Window('Csound Player', layout)  # create the window (title arbitrary)
while True:
    event, values = window.read()
    if event is None:
        cs.sendScore('i -1 0 1')
        del cs
        break
    cs.setStringChannel('file',values['FILE'])
    cs.setControlChannel('vol',values['VOL'])
    if event == 'Start':
        cs.sendScore('i 1 0 -1')
    if event == 'Stop':
        cs.sendScore('i -1 0 1')
window.close()
In the first section, we see the usual way to load the modules, create a Csound instance and send
an instrument to it. The layout section defines the widgets which will be present in the GUI. The
key parameter is particularly important here, as this is the way a widget can be identified.
The interaction between the GUI and Csound happens in the while loop. Here we send the values
of the browse button and the slider to Csound:
cs.setStringChannel('file',values['FILE'])
cs.setControlChannel('vol',values['VOL'])
Also we start and stop the Csound instrument when the Start/Stop buttons are pressed:
if event == 'Start':
    cs.sendScore('i 1 0 -1')
if event == 'Stop':
    cs.sendScore('i -1 0 1')
And finally, if the window is being closed, we turn off the instrument, delete the Csound instance
and leave the while-loop:
if event is None:
    cs.sendScore('i -1 0 1')
    del cs
    break
In the previous example, Csound received values from the GUI via chnget, and the Python code
sent these values via setStringChannel and setControlChannel. Considering now the
other direction, we find chnset on the Csound side and channel on the Python side. The
following code shows a moving line in Csound which is displayed by a slider and a text box.
import PySimpleGUI as sg
%load_ext csoundmagics
cs = ICsound()
orc = """
seed 0
instr 1
kLine randomi -1,1,1,3
chnset kLine, "line"
endin
"""
cs.sendCode(orc)
cs.sendScore('i 1 0 -1')
layout = [[sg.Slider(range=(-1,1),
orientation='h',
key='LINE',
resolution=.01)],
[sg.Text(size=(6,1),
key='LINET',
text_color='black',
background_color='white',
justification = 'right',
font=('Courier',16,'bold'))]
]
window = sg.Window('Line Display', layout)  # create the window (title arbitrary)
while True:
    event, values = window.read(timeout=100)
    if event is None:
        cs.sendScore('i -1 0 1')
        del cs
        break
    window['LINE'].update(cs.channel('line')[0])
    window['LINET'].update('%+.3f' % cs.channel('line')[0])
window.close()
Python in Csound using the Python Opcodes

Starting the Python Interpreter and Running Python Code at i-Time: pyinit and pyruni
To use the Python opcodes inside Csound, you must first start the Python interpreter. This is done
using the pyinit opcode. The pyinit opcode must be put in the header before any other Python
opcode is used, otherwise, since the interpreter is not running, all Python opcodes will return an
error. You can run any Python code by placing it within quotes as argument to the opcode pyruni.
This opcode executes the Python code at init time and can be put in the header. The example
below shows a simple csd file which prints the text “Hello Csound world!” to the terminal.
EXAMPLE 12B01_pyinit.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
pyinit
pyruni "print 'Hello Csound world!'"
</CsInstruments>
<CsScore>
e 0
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera and Joachim Heintz
EXAMPLE 12B02_python_global.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
pyinit
instr 2 ;calculate d
prints "Instrument %d calculates the value of d!\n", p1
pyruni "d = c**2"
endin
</CsInstruments>
<CsScore>
i 1 1 0
i 2 3 0
i 3 5 0
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera and Joachim Heintz
Prints:
Instrument 1 reports:
a + b = c = 5
Instrument 2 calculates the value of d!
Instrument 3 reports:
c squared = d = 25
EXAMPLE 12B03_pyrun.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
kr=100
pyinit
;set variable a to zero at init-time
pyruni "a = 0"
instr 1
;increment variable a by one in each k-cycle
pyrun "a = a + 1"
endin
instr 2
;print out the state of a at this instrument's initialization
pyruni "print 'instr 2: a = %d' % a"
endin
instr 3
;perform two more increments and print out immediately
kCount timeinstk
pyrun "a += 1"
pyrun "print 'instr 3: a = %d' % a"
;;turnoff after k-cycle number two
if kCount == 2 then
turnoff
endif
endin
</CsInstruments>
<CsScore>
i 1 0 1 ;Adds to a for 1 second
i 2 1 0 ;Prints a
i 1 2 2 ;Adds to a for another two seconds
i 3 4 1 ;Prints a again
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera and Joachim Heintz
Prints:
instr 2: a = 100
instr 3: a = 301
instr 3: a = 302
A Python script file can also be executed, using the pyexec opcode, for instance: pyexec
"myscript.py". In this case, the script myscript.py will be executed at k-rate. You can give full
or relative path names.
There are other versions of the pyexec opcode, which run at initialization only (pyexeci) and others
that include an additional trigger argument (pyexect).
pyinit
pyruni "a = 1"
pyruni "b = 2"
instr 1
ival pyevali "a + b"
prints "a + b = %d\n", ival
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
What happens is that Python has delivered an integer to Csound, which expects a floating-point
number. Csound always works with floating-point numbers (to represent a 1, Csound actually
uses 1.0). Mathematically this is equivalent, but in computer memory these two numbers are
stored in different ways. So what you need to do is tell Python to deliver a floating-point number
to Csound. This can be done with Python's float() function. So this code should work:
EXAMPLE 12B04_pyevali.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
pyinit
pyruni "a = 1"
pyruni "b = 2"
instr 1
ival pyevali "float(a + b)"
prints "a + b = %d\n", ival
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera and Joachim Heintz
Prints:
a + b = 3
EXAMPLE 12B05_pyassigni.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
pyinit
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Unfortunately, you can neither pass strings from Csound to Python via pyassign, nor from Python
to Csound via pyeval. So the interchange between both worlds is actually limited to numbers.
The pycall opcodes pass the return values of a Python function (only numbers are allowed)
directly to Csound i- or k-rate variables. You must choose the appropriate opcode
depending on the number of return values from the function, and the Csound rate (i- or k-rate) at
which you want to run the Python function. Just add a number from 1 to 8 after pycall to select
the number of outputs for the opcode. If you just want to execute a function without return value,
simply use pycall. For example, the function average defined above can be called directly from
Csound using:
kave pycall1 "average", ka, kb
The output variable kave will hold the average of the variables ka and kb, computed at k-rate.
As you may have noticed, the Python opcodes run at k-rate, but also have i-rate versions if an i is
added to the opcode name. This is also true for pycall. You can use pycall1i, pycall2i, etc. if you
want the function to be evaluated at instrument initialization, or in the header. The following csd
shows a simple usage of the pycall opcodes:
EXAMPLE 12B06_pycall.csd
<CsoundSynthesizer>
<CsOptions>
-dnm0
</CsOptions>
<CsInstruments>
pyinit
pyruni {{
def average(a,b):
ave = (a + b)/2
return ave
}} ;Define function "average"
instr 1 ;call it
iave pycall1i "average", p4, p5
prints "a = %i\n", iave
endin
</CsInstruments>
<CsScore>
i 1 0 1 100 200
i 1 1 1 1000 2000
</CsScore>
</CsoundSynthesizer>
;example by andrés cabrera and joachim heintz
Local Instrument Scope

The Python opcodes also exist in local versions, like pylruni, pylcall1t and pylassigni, which
behave just like their global counterparts, but affect local Python variables only. It is important to
bear in mind that this locality applies to instrument instances, not instrument numbers. The next
example shows both local and global behaviour.
EXAMPLE 12B07_local_vs_global.csd
<CsoundSynthesizer>
<CsOptions>
-dnm0
</CsOptions>
<CsInstruments>
ksmps=32
pyinit
</CsInstruments>
<CsScore>
; p4
i 1.1 0.0 1 100
i 1.2 0.1 1 200
i 1.3 0.2 1 300
i 1.4 0.3 1 400
Prints:
Both instruments pass the value of the score parameter field p4 to the Python variable value. The
only difference is that instrument 1 does this locally (with pylassign and pyleval) and instrument
2 does it globally (with pyassign and pyeval). Four instances of each instrument are called with
0.1 seconds time offset, for the duration of one second. Printout is done in the first and the last
k-cycle of the instrument.

At start, all instances show that they have set the Python variable value correctly to the p4 value.
This does not change in instrument 1, because the settings are local here. In instrument 2,
however, the now global Python variable value is reset by each of the four instances. At the start
of the first instance (Csound time 2.0), it is 100. At the start of instance 2 (time 2.1), it is 200. It is
set to 400 at Csound time 2.3. So at time 2.999, when the first instance finishes its performance,
the value is no longer 100, but 400. This is reported in the printout at the end.
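The namespace behaviour described above can be sketched in plain Python, with dictionaries as hypothetical stand-ins for the interpreter's global and per-instance namespaces (this is an illustration of the scoping rule, not the actual implementation):

```python
# pyassign writes into one shared namespace; pylassign writes into a
# namespace owned by each instrument instance.
global_ns = {}
instances = [{} for _ in range(4)]  # four instrument instances

for k, inst in enumerate(instances, start=1):
    inst["value"] = k * 100       # local: each instance keeps its own value
    global_ns["value"] = k * 100  # global: overwritten by every new instance

print([inst["value"] for inst in instances])  # [100, 200, 300, 400]
print(global_ns["value"])                     # 400
```

The local dictionaries still hold 100, 200, 300 and 400 at the end, while the shared one only remembers the last write, which is exactly the difference the csd's printout shows.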
EXAMPLE 12B08_markov.csd
<CsoundSynthesizer>
<CsOptions>
-odac -dm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
pyinit
; Python script to define probabilities for each note as lists within a list.
; Definition of the get_new_note function which randomly generates a new
; note based on the probability of each note occurring.
; Each note list must total 1, or there will be problems!
pyruni {{
from random import random, seed
c = [0.1, 0.2, 0.05, 0.4, 0.25]
d = [0.4, 0.1, 0.1, 0.2, 0.2]
e = [0.2, 0.35, 0.05, 0.4, 0]
g = [0.7, 0.1, 0.2, 0, 0]
a = [0.1, 0.2, 0.05, 0.4, 0.25]
markov = [c, d, e, g, a]
seed()
def get_new_note(previous_note):
number = random()
accum = 0
i = 0
while accum < number:
accum = accum + markov[int(previous_note)][int(i)]
i = i + 1
return i - 1.0
}}
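The weighted random selection in get_new_note can be tried standalone in Python (Python 3 syntax here, with an explicit import of the random module):

```python
import random

# The same transition table as in the csd above:
# row = previous note, column = next note.
markov = [
    [0.1, 0.2, 0.05, 0.4, 0.25],  # c
    [0.4, 0.1, 0.1, 0.2, 0.2],    # d
    [0.2, 0.35, 0.05, 0.4, 0.0],  # e
    [0.7, 0.1, 0.2, 0.0, 0.0],    # g
    [0.1, 0.2, 0.05, 0.4, 0.25],  # a
]

def get_new_note(previous_note):
    """Walk the cumulative distribution of the previous note's row."""
    number = random.random()
    accum = 0.0
    i = 0
    while accum < number:
        accum += markov[int(previous_note)][i]
        i += 1
    return i - 1

print(get_new_note(0))  # some index between 0 and 4
```

Because each row sums to 1, the cumulative walk always terminates; a row that sums to less than 1 could run past the end of the list, which is what the comment in the csd warns about.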
</CsInstruments>
<CsScore>
; frequency of Octave of
; note generation melody
i 1 0 30 3 7
i 1 5 25 6 9
i 1 10 20 7.5 10
i 1 15 15 1 8
</CsScore>
</CsoundSynthesizer>
;Example by Andrés Cabrera
12 C. LUA AND CSOUND
The Lua programming language originated in Brazil in 1993. It became a flexible and popular
scripting language in the 2000s, especially for people working in game and app design. The key
characteristics of Lua derive from its simplicity, including a fairly small size and good performance.
Compared to Python and similar languages, Lua is faster.

So running Csound in Lua is a good option if someone is building an app which needs to be both
fast and simple to code (compared to the potential complexity of writing an application in C or
C++). Throughout the 2010s, Csounders from all over the world built interesting and rich audio
applications with Lua and Csound. For indie developers working with the LÖVE 2D engine, which
is based on Lua, Csound can be a sophisticated option for creating and controlling the sounds.
Installing
In order to run Csound code in Lua, the luaCsnd6 shared object is needed. Currently (Csound 6.14)
it is not available in the Windows and Mac installers. In other words, it requires a custom build of
Csound on these platforms.

On Linux, luaCsnd6.so should be found in /usr/lib if you install Csound via the package manager.
For custom builds of Csound, it should be found in /usr/local/lib or in your build directory.
Running Csound in Lua
LUA_CPATH="/usr/lib/luaCsnd6.so"
local luaCsnd6 = require("luaCsnd6")
local c = luaCsnd6.Csound()
c:SetOption("-odac") -- Using SetOption() to configure Csound
-- Note: use only one command line flag at a time
12 D. CSOUND IN iOS
The first part of this chapter is a guide which aims to introduce and illustrate some of the power
that the Csound language offers to iOS Developers. It assumes that the reader has a rudimentary
background in Csound, and some experience and understanding of iOS development with either
Swift or Objective-C. The most recent Csound iOS SDK can be downloaded on Csound’s download
page. Older versions can be found here. The Csound for iOS Manual (Lazzarini, Yi, Boulanger)
that ships with the Csound for iOS API is intended to serve as a lighter reference for developers.
This guide is distinct from it in that it is intended to be a more thorough, step-by-step approach to
learning the API for the first time.
The second part of this chapter is a detailed discussion of the full integration of Csound into the
iOS Core Audio system.
I. Features of Csound in iOS

Getting Started
There are a number of ways in which one might begin to learn to work with the Csound for iOS API.
Here, to aid in exploring it, we first describe how the project of examples that ships with the API is
structured. We then talk about how to go about configuring a new iOS Xcode project to work with
Csound from scratch.
The Csound for iOS Examples project contains a number of simple examples (in both Objective-C
and Swift) of how one might use Csound’s synthesis and signal processing capabilities, and the
communicative functionality of the API. It is available both in the download bundle or online in the
Csound sources.
In the ViewControllers group, a number of subgroups exist to organize the various individual
examples into a single application. This is done using the master-detail application layout
paradigm, wherein each option listed in a master table corresponds to a single detail
ViewController. Familiar examples of this design model, employed by Apple and provided with every
iOS device, are the Settings app, and the Notes app – each of these contains a master table upon
which the detail ViewController’s content is predicated.
In each of these folders, you will find a unique example showcasing how one might use some
of the features of the Csound for iOS API to communicate with Csound to produce and process
sounds and make and play music. These are designed to introduce you to these features in a
practical setting, and each of these has a unifying theme that informs its content, interactions, and
structure.
If you are working in Objective-C, adding Csound for iOS to your project is as simple as dragging
the csound-iOS folder into your project. You should select Groups rather than Folder References,
and it is recommended that you elect to copy the csound-iOS folder into your project folder (“Copy
Items if Needed”).
Once you have successfully added this folder, including the CsoundObj class (the class that man-
ages Csound on iOS) is as simple as adding an import statement to the class. For example:
//
// ViewController.h
//
#import "CsoundObj.h"
Note that this only makes the CsoundObj class available, which provides an interface for Csound.
There are other objects containing UI and CoreMotion bindings, as well as MIDI handling. These
are discussed later in this document, and other files will need to be imported in order to access
them.
For Swift users, the process is slightly different: you will need to first create a bridging header, a
.h header file that can import the Objective-C API for access in Swift. The naming convention
is [YourProjectName]-Bridging-Header.h, and this file can easily be created manually in Xcode by
choosing File > New > File > Header File (under Source), and using the naming convention described
above. After this, you will need to navigate to your project build settings and add the path to this
file (relative to your project's .xcodeproj project file).
Once this is done, navigate to the bridging header in Xcode and add your Objective-C #import
statements here. For example:
//
// CsoundiOS_ExampleSwift-Bridging-Header.h
// CsoundiOS_ExampleSwift
//
#ifndef CsoundiOS_ExampleSwift_Bridging_Header_h
#define CsoundiOS_ExampleSwift_Bridging_Header_h
#import "CsoundObj.h"
#endif /* CsoundiOS_ExampleSwift_Bridging_Header_h */
You do not need to add any individual import statements to Swift files; CsoundObj's functionality
should be accessible in your .swift files after this process is complete.
The first thing we will do so that we can play a .csd file is add the .csd file to our project. Here we
will add a simple .csd (named test.csd) that plays a sine tone with a frequency of 440 Hz for ten
seconds. Sample Csound code for this is:
EXAMPLE 12D01_iOS_simple.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
instr 1
asig poscil 0.5, 440
outs asig, asig
endin
</CsInstruments>
<CsScore>
i1 0 10
</CsScore>
</CsoundSynthesizer>
We will add this to our Xcode project by dragging and dropping it into our project’s main folder,
making sure to select Copy items if needed and to add it to our main target.
In order to play this .csd file, we must first create an instance of the CsoundObj class. We can do
this by creating a property of our class as follows, in our .h file (for example, in ViewController.h):
//
// ViewController.h
// CsoundiOS_ExampleProject
//
#import <UIKit/UIKit.h>
#import "CsoundObj.h"

@interface ViewController : UIViewController
@property (nonatomic, strong) CsoundObj *csound;
@end
Once we've done this, we can move over to the corresponding .m file (in this case, ViewCon-
troller.m) and instantiate our Csound object. Here we will do this in our viewDidLoad method,
which is called when our ViewController's view loads.
//
// ViewController.m
// CsoundiOS_ExampleProject
//
@interface ViewController()
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Allocate memory for and initialize a CsoundObj
self.csound = [[CsoundObj alloc] init];
}
Note: in order to play our .csd file, we must first get a path to it that we can give to Csound. Because
part of this path can vary depending on certain factors (for example, the user's native language
setting), we cannot pass a static or “hard-coded” path. Instead, we will access the file using the
NSBundle class (or Bundle in Swift).
The .csd file is copied as a resource (you can see this under the Build Phases tab in your target’s
settings), and so we will access it and tell Csound to play it as follows:
- (void) viewDidLoad {
[super viewDidLoad];
self.csound = [[CsoundObj alloc] init];
// CsoundObj *csound is declared as a property in .h
NSString *pathToCsd =
[[NSBundle mainBundle] pathForResource:@"test" ofType:@"csd"];
[self.csound play:pathToCsd];
}
Note that in Swift, this is a little easier and we can simply use:
import UIKit

class ViewController: UIViewController {
    var csound = CsoundObj()

    override func viewDidLoad() {
        super.viewDidLoad()
        let pathToCsd = Bundle.main.path(forResource: "test", ofType: "csd")
        csound.play(pathToCsd)
    }
}
With this, the test.csd file should load and play, and we should hear a ten-second long sine tone
shortly after the application runs (i.e. when the main ViewController’s main view loads).
To record the output of Csound in real-time, instead of the play method, use:
// Objective-C
NSURL *docsDirURL = [[[NSFileManager defaultManager]
URLsForDirectory:NSDocumentDirectory
inDomains:NSUserDomainMask] lastObject];
NSURL *file = [docsDirURL URLByAppendingPathComponent:@"outputFile.aif"];
NSString *csdPath =
[[NSBundle mainBundle] pathForResource:@"csdToRecord" ofType:@"csd"];
[self.csound record:csdPath toURL:file];
// Swift
let docsDirURL =
FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let file = docsDirURL.appendingPathComponent("outFile.aif")
let csdPath = Bundle.main.path(forResource: "csdFile", ofType: "csd")
csound.record(csdPath, to: file)
Alternatively, the recordToURL method can be used while Csound is already running to begin
recording:
// Objective-C
NSURL *docsDirURL = [[[NSFileManager defaultManager]
URLsForDirectory:NSDocumentDirectory
inDomains:NSUserDomainMask] lastObject];
NSURL *file = [docsDirURL URLByAppendingPathComponent:@"outputFile.aif"];
[self.csound recordToURL:file];
// Swift
let docsDirURL =
FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let file = docsDirURL.appendingPathComponent("outFile.aif")
csound.record(to: file)
Note: the stopRecording method is used to stop recording without also stopping Csound’s real-
time rendering.
Rendering (Offline)
You can also render a .csd to an audio file offline. To render Csound offline to disk, use the
record:toFile: method, which takes a path rather than a URL as its second argument. For example:
// Objective-C
NSString *docsDir = NSSearchPathForDirectoriesInDomains(
NSDocumentDirectory, NSUserDomainMask, YES)[0];
NSString *file = [docsDir stringByAppendingPathComponent:@"outFile.aif"];
NSString *csdPath =
[[NSBundle mainBundle] pathForResource:@"csdFile" ofType:@"csd"];
[self record:csdPath toFile:file];
// Swift
let docsDir = NSSearchPathForDirectoriesInDomains(
.documentDirectory, .userDomainMask, true
)[0]
let file = docsDir.appending("/outFile.aif")
let csdPath = Bundle.main.path(forResource: "csdFile", ofType: "csd")
csound.record(csdPath, toFile: file)
The demonstrations above save the audio files in the app's documents directory, which allows
write access for file and subdirectory storage on iOS. Note that the -W and -A flags behave as
usual on iOS: they decide whether the rendered file is a WAV or an AIFF file. If neither is provided,
the latter is used as a default.
Because the CsoundUI class manages this communication between iOS and Csound, it can, in
many cases, abstract the process of setting up a UI object binding into a single line of code. To
initialize a CsoundUI object, we must give it a reference to our Csound object:
//Objective-C
CsoundUI *csoundUI = [[CsoundUI alloc] initWithCsoundObj: self.csound];
// Swift
var csoundUI = CsoundUI(csoundObj: csound)
Normally, however, these objects are declared as properties rather than locally in methods. As
mentioned, CsoundUI uses named channels for communicating to and from Csound. Once set
up, values passed to these named channels are normally accessed through the chnget opcode,
for example:
instr 1
kfreq chnget "frequency"
asig oscil 0.5, kfreq
outs asig, asig
endin
Conversely, in order to pass values from Csound, the chnset opcode is normally used with two
arguments. The first is the variable, and it is followed by the channel name:
instr 1
krand randomi 300, 2000, 1, 3
asig poscil 0.5, krand
outs asig, asig
chnset krand, "randFreq"
endin
UIButton Binding
The UIButton binding is predominantly contained within the CsoundButtonBinding class, which
CsoundUI uses to create individual button bindings. To add a button binding, use:
//Objective-C
[self.csoundUI addButton:self.button forChannelName:"channelName"];
// Swift
csoundUI.add(button, forChannelName: "channelName")
Where self.button is the button you would like to bind to, and the string channelName contains
the name of the channel referenced by chnget in Csound.
The corresponding value in Csound will be equal to 1 while the button is touched, and reset to 0
when it is released. A simple example of how this might be used in Csound, based on the pvscross
example by Joachim Heintz, is shown below:
instr 1
kpermut chnget "crossToggle"
ain1 soundin "fox.wav"
ain2 soundin "wave.wav"
if kpermut == 1 then
fcross pvscross fftin2, fftin1, .5, .5
else
fcross pvscross fftin1, fftin2, .5, .5
endif
UISwitch Binding
The UISwitch binding provides a connection between the UISwitch object and a named channel in
Csound. This binding is managed in the CsoundSwitchBinding class and you can create a UISwitch
binding by using:
//Objective-C
[self.csoundUI addSwitch:self.uiSwitch forChannelName:"channelName"];
// Swift
csoundUI.add(switch, forChannelName: "channelName")
As in the case of the UIButton binding, the UISwitch binding provides an on-off state value (1 or 0
respectively) to Csound. Below we use it to turn on or off a simple note generator:
; Triggering instrument
instr 1
kTrigFreq randomi gkTrigFreqMin, gkTrigFreqMax, 5
ktrigger metro kTrigFreq
kdur randomh .1, 2, 5
konoff chnget "instrToggle"
if konoff == 1 then
schedkwhen ktrigger, 0, 0, 2, 0, kdur
endif
endin
UILabel Binding
The UILabel binding allows you to display any value from Csound in a UILabel object. This can
often be a helpful way of providing feedback to the user. You can add a label binding with:
//Objective-C
[self.csoundUI addLabel:self.label forChannelName:@"channelName"];
// Swift
csoundUI.add(label, forChannelName: "channelName")
However, in this case the channel is an output channel. To demonstrate, let us add an output
channel in Csound to display the frequency of the sound generating instrument’s oscillator from
the previous example (for UISwitch):
; Triggering instrument
instr 1
kTrigFreq randomi gkTrigFreqMin , gkTrigFreqMax , 5
ktrigger metro kTrigFreq
kdur randomh .1 , 2 , 5
konoff chnget "instrToggle"
if konoff == 1 then
schedkwhen ktrigger , 0 , 0 , 2 , 0 , kdur
endif
endin
Note additionally that the desired precision of the value display can be set beforehand using the
labelPrecision property of the CsoundUI object. For example:
self.csoundUI.labelPrecision = 4;
UISlider Binding
The UISlider binding is possibly the most commonly used UI binding: it allows the value of a
UISlider object to be passed to Csound whenever it changes. This is set up in the CsoundSliderBinding
class and we access it via CsoundUI using:
// Objective-C
[self.csoundUI addSlider:self.slider
forChannelName:"channelName"];
// Swift
csoundUI.add(slider, forChannelName: "channelName")
Note that this restricts you to using the slider's actual value, rather than a rounded version of it or
some other variation, which would normally be best suited to a manual value binding, which is
addressed later in this guide. An example is provided below of two simple such UISlider-bound
values in Csound:
sr = 44100
ksmps = 128
nchnls = 2
0dbfs = 1
instr 1
kfreq chnget "frequency" ; input 0 - 1
kfreq expcurve kfreq , 500 ; exponential distribution
kfreq *= 19980 ; scale to range
kfreq += 20 ;add offset
kamp chnget "amplitude"
Above, we get around being restricted to the slider's value by creating an exponential distribution
in Csound. Of course we could simply make the minimum and maximum values of the UISlider
20 and 20000 respectively, but that would by default be a linear distribution. In both cases
here, the UISlider's range of floating-point values is set to be from 0 to 1.
Momentary Button Binding
The momentary button binding is similar to the normal UIButton binding in that it uses a UIButton,
however it differs in how it uses this object. The UIButton binding passes a channel value of 1 for
as long as the UIButton is held, whereas the momentary button binding sets the channel value to 1
for one Csound k-period (i.e. one k-rate sample). It does this by setting an intermediate value to 1
when the button is touched, passing this to Csound on the next k-cycle, and immediately resetting
it to 0 after passing it. This is all occurring predominantly in the CsoundMomentaryButtonBinding
class, which we access using:
// Objective-C
[self.csoundUI
addMomentaryButton:self.triggerButton
forChannelName:"channelName"
];
// Swift
csoundUI.addMomentaryButton(triggerButton, forChannelName: "channelName")
This replaces the automatic instrument triggering with a manual trigger. Every time the UIButton
is touched, a note (by way of an instance of instr 2) will be triggered. This may seem like a more
esoteric binding, but there are a variety of potential uses.
// Swift
var csoundMotion = CsoundMotion(csoundObj: csound)
As with CsoundUI, it may often be advantageous to declare the CsoundMotion object as a property
rather than locally.
Accelerometer Binding
// Swift
csoundMotion.enableAccelerometer()
Gyroscope Binding
The gyroscope binding, implemented in the CsoundGyroscopeBinding class and enabled through
the CsoundMotion class, allows access to an iOS device’s gyroscope data along its three axes (X,
Y, Z). The gyroscope allows rotational velocity to be determined, and together
with the accelerometer forms a system with six degrees of freedom. To enable it, use:
// Objective-C
[csoundMotion enableGyroscope];
// Swift
csoundMotion.enableGyroscope()
Attitude Binding
Finally, the attitude binding, implemented in CsoundAttitudeBinding and enabled through
CsoundMotion, allows access to an iOS device's attitude data. As the Apple reference notes, attitude refers
to the orientation of a body relative to a given frame of reference. CsoundMotion enables this as
three Euler angle values: roll, pitch, and yaw (rotation around X, Y, and Z respectively). To enable
the attitude binding, use:
// Objective-C
[csoundMotion enableAttitude];
// Swift
csoundMotion.enableAttitude()
Together, these bindings enable very simple and straightforward control of Csound parameters
through device motion. In the following subsection, an example demonstrates each of the
pre-set channel names as well as how some of this information might be used.
Here is an example of a Csound instrument that accesses all of the data, and demonstrates uses
for some of it. This example is taken from the Csound for iOS Examples project.
instr 1
kaccelX chnget "accelerometerX"
kaccelY chnget "accelerometerY"
kaccelZ chnget "accelerometerZ"
Each of the channel names is shown here, and each corresponds to what is automatically set in
the relevant binding. A little experimenting can be very helpful in determining what to use these
values for in your particular application, and of course one is never under any obligation to use
all of them. Regardless, they can be helpful and very straightforward ways to add now-familiar
interactions.
// Swift
csound.addBinding(self)
Note that you will need to conform to the CsoundBinding protocol and implement, at minimum,
the required setup method. The setup method will be called on every object added
as a binding, and the remaining methods, marked with the @optional directive, will be called on any
bindings that implement them.
Named channels allow us to pass data to and from Csound while it is running. These channels
refer to memory locations that we can write to and Csound can read from, and vice versa. The two
most common channel types are CSOUND_CONTROL_CHANNEL, a floating-point control channel
normally associated with a k-rate variable in Csound, and CSOUND_AUDIO_CHANNEL, an array
of floating-point audio samples of length ksmps.
Each of these can be an input or output channel depending on whether values are being passed
to or from Csound.
Given below is an example of using named channels in a simplified Csound instrument. The
polymorphic chnget and chnset opcodes are used, and the context here implies that kverb receives
its value from an input control channel named verbMix, and that asig outputs to an audio channel
named samples.
giSqr ftgen 2, 0, 8192, 10, 1,0,.33,0,.2,0,.14,0,.11,0,.09
instr 1
kfreq = p4
kverb chnget "verbMix"
aosc poscil .5 , kfreq , 2
arvb reverb aosc , 1.5
asig = (aosc * (1 - kverb) ) + (arvb * kverb)
chnset asig , "samples"
outs asig , asig
endin
The section that follows will describe how to set up and pass values to and from this instrument’s
channels in an iOS application.
The setup method is called before Csound’s first performance pass, and this is typically where
channel references are created. For example:
// Objective-C
// verbPtr and samplesPtr are instance variables of type float*
// Swift
var verbPtr: UnsafeMutablePointer<Float>?
var samplesPtr: UnsafeMutablePointer<Float>?
The cleanup method from CsoundBinding, also optional, is intended for use in removing bindings
once they are no longer active. This can be done using CsoundObj’s removeBinding method:
// Objective-C
// verbPtr and samplesPtr are instance variables of type float*
-(void)cleanup {
[self.csound removeBinding:self];
}
// Swift
func cleanup() {
csound.removeBinding(self)
}
// Swift
func updateValuesToCsound() {
verbPtr?.pointee = verbSlider.value
}
This updates the value at a memory location that Csound has already associated with a named
channel (in the setup method). This process has essentially replicated the functionality of the
CsoundUI API’s slider binding. The advantage here is that we could perform any transformation
on the slider value, or associate another value (that might not be associated with a UI object) with
the channel altogether. To pass values back from Csound, we use the updateValuesFromCsound
method.
// Objective-C
-(void)updateValuesFromCsound {
float *samps = samplesPtr;
}
Note that in Swift, we have to do a little extra work in order to get an array of samples that we can
easily index into:
// Swift
func updateValuesFromCsound() {
    let sampsArray = [Float](UnsafeBufferPointer(start: samplesPtr,
                                                 count: Int(csound.getKsmps())))
}
Note that there are no methods that an object is required to adopt in order to conform to this
protocol. These methods simply allow an object to elect to be notified when Csound either begins,
completes running, or both. Note that these methods are not called on the main thread, so any UI
work must be explicitly run on the main thread. For example:
// Objective-C
- (void)viewDidLoad {
[super viewDidLoad];
[self.csound addListener:self];
}
- (void)csoundObjStarted:(CsoundObj *)csoundObj {
[self.runningLabel performSelectorOnMainThread:@selector(setText:)
withObject:@"Csound Running"
waitUntilDone:NO];
}
// Swift
override func viewDidLoad() {
super.viewDidLoad()
csound.add(self)
}
func csoundObjCompleted(_ csoundObj: CsoundObj) {
DispatchQueue.main.async { [unowned self] in
self.runningLabel.text = "Csound Stopped"
}
}
Console Output
Console output from Csound is handled via a callback. You can set the method that handles
console info using CsoundObj's setMessageCallbackSelector method, passing in an appropriate
selector, for instance:
// Objective-C
[self.csound setMessageCallbackSelector:@selector(printMessage:)];
// Swift
csound.setMessageCallbackSelector(#selector(printMessage(_:)))
An object of type NSValue will be passed in. This object is acting as a wrapper for a C struct of
type Message. The definition for Message in CsoundObj.h is:
typedef struct {
CSOUND *cs;
int attr;
const char *format;
va_list valist;
} Message;
The two fields of interest to us for the purposes of console output are format and valist. The former
is a format string, and the latter represents a list of arguments to match its format specifiers.
The process demonstrated in the code examples below can be described as:
In both cases above, we are printing the resulting string objects to Xcode’s console. This can be
very useful for finding and addressing issues that have to do with Csound or with a .csd file you
might be using.
We could also pass the resulting string object around in our program; for example, we could insert
the contents of this string object into a UITextView for a simulated Csound console output.
Note that you must also set the appropriate command-line flag in your csd, under CsOptions. For
example, -M0. Additionally, the MIDI device must be connected before the application is started.
MidiWidgetsManager
The second way provided to communicate MIDI information to Csound is indirect, via UI widgets
and CsoundUI. In this case, the MidiWidgetsManager uses a MidiWidgetsWrapper to connect a
MIDI CC to a UI object, and then CsoundUI can be used to connect this UI object's
value to a named channel in Csound. For instance:
// Objective-C
MidiWidgetsManager *widgetsManager = [[MidiWidgetsManager alloc] init];
[widgetsManager addSlider:self.cutoffSlider forControllerNumber:5];
[csoundUI addSlider:self.cutoffSlider forChannelName:@"cutoff"];
[widgetsManager openMidiIn];
// Swift
let widgetsManager = MidiWidgetsManager()
widgetsManager.add(cutoffSlider, forControllerNumber: 5)
csoundUI?.add(cutoffSlider, forChannelName: "cutoff")
widgetsManager.openMidiIn()
An advantage of this variant is that MIDI connections to the UI widgets are active even when
Csound is not running, so visual feedback, for example, can still be provided. At the time of writing,
built-in support exists only for UISliders.
Other Functionality
This section describes a few methods of CsoundObj that are potentially helpful for more complex
applications.
getCsound
(CSOUND *)getCsound;
The getCsound method returns a pointer to a struct of type CSOUND, the underlying Csound
instance in the C API that the iOS API wraps. Because the iOS API only wraps the most commonly
needed functionality from the Csound C API, this method can be helpful for accessing it directly
without needing to modify the Csound iOS API to do so.
Note that this returns an opaque pointer because the declaration of this struct type is not directly
accessible. This should, however, still allow you to pass it into Csound C API functions in either
Objective-C or Swift if you would like to access them.
getAudioUnit
(AudioUnit *)getAudioUnit;
The getAudioUnit method returns a pointer to a CsoundObj instance’s I/O AudioUnit, which
provides audio input and output to Csound from iOS.
This can have several potential purposes. As a simple example, you can use the AudioOutputUnit-
Stop() function with the returned value’s pointee to pause rendering, and AudioOutputUnitStart()
to resume.
updateOrchestra
(void)updateOrchestra:(NSString *)orchestraString;
The updateOrchestra method allows you to supply a new Csound orchestra as a string.
Other
Additionally, getKsmps returns the current ksmps value, and getNumChannels returns the num-
ber of audio channels in use by the current Csound instance. These both act directly as wrappers
to Csound C API functions.
II. How to Fully Integrate Csound into Apple's iOS CoreAudio
Getting Started
The development of professional audio applications requires considering some important aspects
of iOS in order to maximize the compatibility and versatility of the app.
The code for the User Interface (UI) is written in Objective-C, whilst the Csound API (i.e. Application
Programming Interface) is written in C. This duality allows us to understand in detail the interaction
between the two. As we will see in the next section, the control unit is based on the callback
mechanism rather than the pull mechanism.
Deliberately, no Objective-C code was written inside the C audio callback, since calling Objective-C
there is not recommended, and neither is allocating or de-allocating memory.
Since we will often refer to the tutorials (Xcode projects), it is useful to have the Xcode
environment at hand. These files can be downloaded here.
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer =
kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Create a new unit based on this that we will use for output
AudioComponent HALOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
err = AudioComponentInstanceNew(HALOutput, &csAUHAL);
err = AudioUnitInitialize(csAUHAL);
This code is common to many audio applications and is easily available online or in the Apple
documentation. Basically, we set up the app with the PlayAndRecord category, then we create the
AudioUnit. The PlayAndRecord category allows the app to receive audio from the system and
simultaneously produce audio.
IMPORTANT:
For proper operation with Audiobus (AB) and Inter-App Audio (IAA), we must instantiate and
initialize one Audio Unit (AU) only once for the entire life cycle of the app. Destroying and recreating
the AU would require more memory (for each instance), and if the app is connected to IAA
or AB it will stop responding and exhibit unpredictable behavior, which may lead to an
unexpected crash.
There is currently no way to tell AB and/or IAA at runtime that the AU address has changed. The
InitializeAudio function should be called only once, unlike the run/stop functions of Csound.
As we will see in the next section, all links are established graphically with the Interface Builder.
The main CSOUND structure is allocated in the AudioDSP constructor, which also initializes the
audio system. This approach means that the _cs (CSOUND*) class variable persists for the entire
life cycle of the app. As mentioned, the initializeAudio function should be called only once.
- (instancetype)init {
self = [super init];
if (self) {
// Setup CoreAudio
[self initializeAudio];
}
return self;
}
Now that the CSOUND structure is allocated and CoreAudio is properly configured, we can
manage Csound asynchronously.
The main purpose of this simple example is to study how the user interface (UI) interacts with
Csound. All connections have been established and managed graphically through the Interface
Builder.
The UISwitch object is connected to toggleOnOff, which toggles Csound on and off
in this way:
- (IBAction)toggleOnOff:(id)component {
if (uiswitch.on) {
NSString *tempFile =
[[NSBundle mainBundle] pathForResource:@"test" ofType:@"csd"];
[self stopCsound];
[self startCsound:tempFile];
} else {
[self stopCsound];
}
}
In the example, test.csd is performed, which implements a simple sinusoidal oscillator. The
frequency of the oscillator is controlled by the UISlider object, which is linked to the sliderAction
callback.
As anticipated, the mechanism adopted is event-driven (callbacks). This means that the function
associated with the event is called only when the user performs an action on the UI slider.
In this case the action is of type Value Changed. Consult the Apple documentation concerning
the UIControl framework for further clarification.
- (IBAction)sliderAction:(id)sender {
    if (!_cs || !running)
        return;
    UISlider *sld = (UISlider *)sender;
    float *value;
    csoundGetChannelPtr(_cs, (MYFLT **)&value, "freq",
                        CSOUND_CONTROL_CHANNEL | CSOUND_INPUT_CHANNEL);
    *value = (float)sld.value;
}
As we can see, we get the pointer through csoundGetChannelPtr; this is relative to incoming control
signals. From the point of view of Csound, input signals (CSOUND_INPUT_CHANNEL)
are sampled from the software bus via chnget, while for output (CSOUND_OUTPUT_CHANNEL)
chnset is used.
or
value[0] = (float) sld.value;
The channelName string freq is the reference text used by the chnget opcode in instr 1 of the
Csound Orchestra.
kfr chnget "freq"
Since the control architecture is based on the callback mechanism and therefore depends on
user actions, we must send all values when Csound starts. We can use Csound's delegate:
-(void)csoundObjDidStart {
[_freq sendActionsForControlEvents:UIControlEventAllEvents];
}
In practice, this operation must be repeated for all UI widgets. Immediately after Csound starts
we send a UIControlEventAllEvents message to each widget, ensuring that Csound receives
the current state of the UI widgets' values.
In this case _freq is the reference (IBOutlet) of the UISlider in the Main.storyboard.
The Xcode tutorials do not include the Audiobus SDK, since it is covered by a license; see the website
for more information and to consult the official documentation here.
However, the existing Audiobus code should ensure proper functioning once the library is
included.
In the file AudioDSP.h there are two macros: AB and IAA. These are used to include or exclude the
needed code. The first step is to configure the two AudioComponentDescriptions for the types:
AudioComponentDescription desc_fx = {
kAudioUnitType_RemoteEffect,
'xcso',
'xyou', 0, 0
};
This point is crucial because you have to enter the same information in the file Info.plist.
In the Info.plist (i.e. Information Property List), the Bundle display name key and Require background
modes must be defined to enable audio in the background: the app must continue to play audio
even when it is not in the foreground. Here we configure the Audio Components (i.e. AU).
typedef struct AudioComponentDescription {
OSType componentType;
OSType componentSubType;
OSType componentManufacturer;
UInt32 componentFlags;
UInt32 componentFlagsMask;
} AudioComponentDescription;
As said, the AudioComponentDescription structure used for the configuration of the AU must
coincide with the entries in the Info.plist.
The structure fields (OSType) are of type FourCharCode, so they must consist of exactly four characters.
IMPORTANT: it is recommended to use different names for the componentSubType and
componentManufacturer of each AudioComponent. In the example, the characters 'i' and 'x' refer to
Instrument and Fx.
Only for the first field (componentType) of the AudioComponentDescription structure we can use
the enumerator
enum {
kAudioUnitType_RemoteEffect = 'aurx',
kAudioUnitType_RemoteGenerator = 'aurg',
kAudioUnitType_RemoteInstrument = 'auri',
kAudioUnitType_RemoteMusicEffect = 'aurm'
};
where auri identifies the Instrument (Instr) and aurx the Effect (Fx). The app will then appear in
the lists of the various IAA hosts as Instr and Fx, and in Audiobus as Sender or Receiver.
In the following sections we will see how to manage the advanced settings for Csound’s ksmps,
according to the system BufferFrame.
iOS allows power-of-two BufferFrame values: 64, 128, 256, 512, 1024, etc.
It is not recommended to use values bigger than 1024 or smaller than 64. A good compromise is
256, as suggested by the default value of GarageBand and other similar applications.
In the Csound language, the vector size (i.e. BufferFrame in the example) is expressed as ksmps.
So it is necessary to manage the values of BufferFrame and ksmps appropriately.
All three cases have advantages and disadvantages. In the first case the BufferFrame must
always be >= ksmps, and in the second case we must implement a spartan workaround to synchronize
ksmps with BufferFrame. The third and more complex case requires a run-time check in the
audio callback, and we must manage an accumulation buffer. Thanks to this, the BufferFrame can
be bigger than ksmps or vice versa. However, there are some limitations; in fact, this approach
does not always lead to the hoped-for benefits in terms of performance.
Static ksmps
To keep ksmps static with a very low value, such as 32 or 64, we set ksmps in the Csound
Orchestra to this value. As mentioned, the BufferFrame of iOS is always greater than or equal to 64.
The operation is assured thanks to the for statement in the Csound_Render:
//…
}
This C routine is called by CoreAudio every inNumberFrames (i.e. BufferFrame) samples. The ioData
pointer contains inNumberFrames audio samples incoming from the input (mic/line). Csound
reads this data and returns ksmps processed samples.
When inNumberFrames and ksmps are identical, we can simply copy out the processed buffer
with a single call to the csoundPerformKsmps() procedure. Since ksmps is less than or equal to
inNumberFrames, we need to call csoundPerformKsmps() N times (the number of slices). This is
safe, as in this situation ksmps will never be greater than inNumberFrames.
Example:
ksmps = 64
inNumberFrames = 512
In other words, every Csound_Render call involves eight sub-calls to csoundPerformKsmps(); for
every sub-call we fill the ioData with ksmps samples.
This is a workaround but it works properly; we just have to set placeholders in the Orchestra header.
<CsInstruments>
;;;;SR;;;;
;;;;KSMPS;;;;
nchnls = 2
0dbfs = 1
The two univocal strings are the placeholders for sr and ksmps. They begin with the semicolon
character so that Csound treats them as comments. The following Objective-C function looks
for the placeholders in myOrchestra.csd and replaces them with new sr and ksmps values.
- (void)csoundApplySrAndKsmpsSettings:(Float64)sr withBuffer:(Float64)ksmps
{
    // ... load the contents of myOrchestra.csd (pathAndName) into myString ...
    if (myString) {
        myString = [myString
            stringByReplacingOccurrencesOfString:@";;;;SR;;;;"
                                      withString:[NSString
                                          stringWithFormat:@"sr = %f", sr]];
        myString = [myString
            stringByReplacingOccurrencesOfString:@";;;;KSMPS;;;;"
                                      withString:[NSString
                                          stringWithFormat:@"ksmps = %f",
                                                           ksmps]];
        NSString* pathAndNameRUN =
            [NSString stringWithFormat:@"%@dspRUN.csd", NSTemporaryDirectory()];
        // ... write myString to pathAndNameRUN via writeToFile ...
        // Run Csound
        [self startCsound:pathAndNameRUN];
    } else
        NSLog(@"file %@ Does Not Exists At Path!!!", pathAndName);
}
The NSString pathAndName contains the file path of myOrchestra.csd in the Resources folder.
This path is used to copy the entire file into myString (as an NSString). Subsequently, the
stringByReplacingOccurrencesOfString method replaces the placeholders with the valid strings.
Since iOS does not allow editing files in the application Resources folder (i.e. pathAndName), we
need to save the modified version as the new file dspRUN.csd in the temporary folder
(i.e. pathAndNameRUN). This is achieved through the writeToFile method.
As a final step, it is necessary to re-initialize Csound by calling the runCsound function, which runs
Csound and sends the appropriate values of sr and ksmps.
As seen, the second case is a good compromise; however, it is not suitable under some particular
conditions. So far we have only considered the case in which the app works on the main audio
thread, with a BufferFrame imposed by iOS. But there are special cases in which the app is called
to work on a different thread and with a different BufferFrame.
For instance, the freeze-track feature implemented by major IAA host apps (such as Cubasis, Auria,
etc.) bypasses the current setup of iOS and imposes an arbitrary BufferFrame (usually 64).
Since Csound is still configured with the iOS BufferFrame (the main audio thread), but during the
freeze-track process the Csound_Perform routine is called with a different BufferFrame, Csound
cannot work properly.
In order to overcome this limitation we need a run-time check in the audio callback to handle the
exception.
In Csound_Render we evaluate the condition in which slices is < 1:
OSStatus Csound_Perform(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp, UInt32 dump,
UInt32 inNumberFrames, AudioBufferList *ioData) {
//…
/* CSOUND PERFORM */
if (slices < 1.0) {
/* inNumberFrames < ksmps */
Csound_Perform_DOWNSAMP(inRefCon, ioActionFlags, inTimeStamp, dump,
inNumberFrames, ioData);
} else {
/* inNumberFrames >= ksmps */
for (int i = 0; i < (int)slices; ++i) {
ret = csoundPerformKsmps(cs);
}
}
//…
}
Every time the ksmps (for some reason) is greater than BufferFrame, we will perform the
Csound_Perform_DOWNSAMP procedure.
// Called when inNumberFrames < ksmps
OSStatus Csound_Perform_DOWNSAMP(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp, UInt32 dump,
    UInt32 inNumberFrames,
    AudioBufferList *ioData
) {
    AudioDSP *cdata = (__bridge AudioDSP *)inRefCon;
    /* DOWNSAMPLING FACTOR */
    int UNSAMPLING = csoundGetKsmps(cs) / inNumberFrames;
    // ... accumulate inNumberFrames input samples in spin; every UNSAMPLING
    // calls, run csoundPerformKsmps() and return inNumberFrames samples
    // from spout ...
    cdata->ret = ret;
    return noErr;
}
As mentioned, we need a buffer for the accumulation. It is, however, not necessary to create a new
one, since we can directly use Csound's spin and spout buffers.
With ksmps = 512 and inNumberFrames = 64, the downsampling factor UNSAMPLING
(csoundGetKsmps(cs)/inNumberFrames) is 8.
This value represents the number of steps required to accumulate the input signal in spin for each
call of csoundPerformKsmps().
if (cdata->counter < UNSAMPLING-1) {
cdata->counter++;
}
else {
cdata->counter = 0;
The Csound_Perform_DOWNSAMP routine is called by iOS every 64 samples, while we must call
csoundPerformKsmps() only once every 512 samples. This means we need to skip eight (i.e.
UNSAMPLING) calls until we have collected the input buffer.
From another point of view, before calling csoundPerformKsmps() we must accumulate eight
inNumberFrames blocks in spin, and for every call of Csound_Perform_DOWNSAMP we must return
inNumberFrames samples from spout.
In the next example, the iOS audio is in buffer, which is a pointer into the ioData structure.
/* INCREMENTS DOWNSAMPLING COUNTER */
int slice_downsamp = inNumberFrames * cdata->counter;
Ignoring the implementation details regarding the de-interleaving of the audio, we can focus on
slice_downsamp, which serves as an offset index into the spin and spout arrays.
Implementing both the second and third cases guarantees that the app works properly in
every situation.
Plot a Waveform
In this section we will see a more complex example that accesses Csound's memory and displays
the contents in a UIView.
The waveDrawView class interacts with the waveLoopPointsView; the loop points allow us to select
a portion of the file via zoom on the waveform (pinch in/out). These values (loop points) are
managed by Csound, which ensures the correct reading of the file and returns the normalized value
of the instantaneous read phase.
The two classes are instantiated in Main.storyboard. Please note the hierarchy, which must be
respected when setting up other projects; furthermore, the three UIViews must have the same size
(frame) and cannot be dynamically resized.
In the score of the file csound_waveform.csd, two GEN Routines are declared to load WAV files in
memory:
f2 0 0 1 "TimeAgo.wav" 0 0 1
f3 0 0 1 "Density_Sample08.wav" 0 0 1
In order to access the audio files in the app Resources folder, we need to set up some environment
variables for Csound. This is done in the runCsound function. Here we set the SFDIR (sound file
directory) and the SADIR (sound analysis directory):
// Set Environment Sound Files Dir
NSString *resourcesPath = [[NSBundle mainBundle] resourcePath];
NSString *envFlag = @"--env:SFDIR+=";
char *SFDIR = (char *)[[envFlag stringByAppendingString:resourcesPath]
                       cStringUsingEncoding:NSASCIIStringEncoding];
envFlag = @"--env:SADIR+=";
char *SADIR = (char *)[[envFlag stringByAppendingString:resourcesPath]
                       cStringUsingEncoding:NSASCIIStringEncoding];
char *argv[4] = {
    "csound", SFDIR, SADIR,
    (char *)[csdFilePath cStringUsingEncoding:NSASCIIStringEncoding]};
The interaction between Csound and the UI is two-way; the class method drawWaveForm draws
the contents of the GEN table genNum.
[waveView drawWaveFromCsoundGen:_cs genNumber:genNum];
After calling this method, we need to enable an NSTimer object in order to read continuously (pull)
the phase value returned by Csound. Please examine the loadSample_1 function code for insights.
The timer is disabled when the DSP is switched off. In the timer callback we get the pointer, this
time from a CSOUND_OUTPUT_CHANNEL; finally, we use this value to synchronize the graphic
cursor (scrub) on the waveform in the GUI.
- (void)updateScrubPositionFromTimer {
    if (!running) return;
    float *channelPtr_file_position = NULL;
    csoundGetChannelPtr(_cs, (MYFLT **)&channelPtr_file_position,
                        "file_position_from_csound",
                        CSOUND_CONTROL_CHANNEL | CSOUND_OUTPUT_CHANNEL);
    if (channelPtr_file_position) {
        [waveView updateScrubPosition:*channelPtr_file_position];
    }
}
In the Orchestra we find the corresponding code for writing in the software bus.
chnset kfilposphas, "file_position_from_csound"
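The orchestra code around this line is not shown here; a minimal sketch of an instrument that produces such a normalized phase could look like this (the table number, instrument name and opcode choices are assumptions):

```csound
instr PlaySample
  ;; loop through f-table 2 at the sample's original speed (sketch)
  idur        =        ftlen(2) / sr     ; sample duration in seconds
  aphs        phasor   1 / idur          ; normalized position 0..1
  asig        table3   aphs, 2, 1        ; cubic table read, normalized index
              outs     asig, asig
  kfilposphas downsamp aphs              ; k-rate copy of the phase
              chnset   kfilposphas, "file_position_from_csound"
endin
```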
The goal is to modify a table in real time while it is being read (played) by a look-up-table (LUT) oscillator. An XY pad, on the left of the UI, manages the interpolation between the four prototypes and, on the right of the interface, a 16-slider surface controls the harmonic content of a wave.
Concerning the first example (pad morph), the waveform interpolations are implemented in the Orchestra file and performed by Csound. The UI communicates with Csound by activating an instrument (instr 53) through a score message. In the second example (16-slider surface), by contrast, the code is implemented in the AudioDSP.m file, precisely in the didValueChanged delegate. The architecture of this second example is based on the addArm procedure, which writes into a temporary array. The resulting waveform is then copied into the GEN table via the csoundTableCopyIn API.
In the first example, instr 53 is activated via a score message for every action on the pad. This is performed in ui_wavesMorphPad:
NSString* score = [NSString stringWithFormat:
@"i53 0 %f %f %f",
UPDATE_RES,
pad.xValue,
pad.yValue];
The instr 53 is kept active for UPDATE_RES seconds (0.1), and the maxalloc opcode limits the number of simultaneous instances (notes). Thus, any score events that fall within the UPDATE_RES time are ignored.
maxalloc 53, 1 ;iPad UI Waveforms morphing only 1 instance
This results in a sub-sampling of Csound's instr 53 compared to the UI pad callback. The waveform display is handled by the Waveview class, a simplified version of the WaveDrawView class introduced in the tutorial (04_plotWaveForm), which does not require further discussion.
As mentioned, the waveform interpolations are performed by Csound, in the following instr 53 code:
tableimix giWaveTMP1, 0, giWaveSize, giSine, \
0, 1.-p4, giTri, 0, p4
tableimix giWaveTMP2, 0, giWaveSize, giSawSmooth, \
0, 1.-p4, giSquareSmooth, 0, p4
The p4 and p5 p-fields are the XY pad axes, used as weights for the three vector interpolations that are required. The tableimix opcode mixes two tables with different weights into the giWaveTMP1 destination table. In this case we interpolate a sine wave (giSine) with a triangular wave (giTri); then, in the second line, we interpolate between giSawSmooth and giSquareSmooth, mixing the result into giWaveTMP2. At the end of the process, giWaveMORPH contains the interpolated values of the two giWaveTMP1 and giWaveTMP2 tables.
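The third mix, which fills giWaveMORPH from the two temporary tables, is not shown in the excerpt above; it would look roughly like this sketch, weighted by p5:

```csound
tableimix giWaveTMP1 and giWaveTMP2 into the destination table:
tableimix giWaveMORPH, 0, giWaveSize, giWaveTMP1, \
          0, 1.-p5, giWaveTMP2, 0, p5
```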
The global ftgen tables have deliberately been declared with the first argument set to zero. This means that the table number is assigned dynamically by Csound at compile time. Since we do not know the number assigned, we must return the number of the table through chnset at runtime.
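In the orchestra this pattern looks roughly as follows (a sketch; the ftgen arguments and the variable name are assumptions, while the channel name is the one used by the app):

```csound
giHarmFunc ftgen  0, 0, 4096, 10, 1          ; table number chosen by Csound
           chnset giHarmFunc, "harm_func_table"
```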
The APE_MULTISLIDER class returns, through its own delegate method didValueChanged, an array with the indexed values of the sliders. These are used as amplitude weights for the generation of the additive harmonic waveform. Leaving out the code for the wave's amplitude normalization, we focus on this code:
MYFLT *tableNumFloat;
csoundGetChannelPtr(_cs, &tableNumFloat,
[@"harm_func_table"
cStringUsingEncoding:NSASCIIStringEncoding],
CSOUND_CONTROL_CHANNEL | CSOUND_INPUT_CHANNEL);
/* Is invalid? Return */
if (tableLength <= 0 || tableNum <= 0 || !tablePtr)
return;
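The fragment above omits how the table data is obtained from the table number; a sketch of the missing step using the standard API call csoundGetTable (variable names assumed):

```c
/* resolve the table number received on the bus into a data pointer */
int    tableNum    = (int)*tableNumFloat;
MYFLT *tablePtr    = NULL;
int    tableLength = csoundGetTable(_cs, &tablePtr, tableNum);
```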
This function can also be sub-sampled by un-commenting the DOWNSAMP_FUNC macro. This code is purely illustrative and can be significantly optimized; for vector operations, the Apple vDSP framework could be an excellent solution.
The first step must be done in the runCsound process, before calling csoundCompile.
csoundAppendOpcode(
cs, "MOOGLADDER", sizeof(MOOGLADDER_OPCODE),
0, 3, "a", "akk", iMOOGLADDER, kMOOGLADDER, aMOOGLADDER
);
This appends an opcode implemented in external software to Csound's internal opcode list. The opcode list is extended by one slot, and the parameters are copied into the new slot. Basically, what we have done is declare three function pointers (iMOOGLADDER, kMOOGLADDER and aMOOGLADDER), implemented in the AudioDSP class.
The second step is to declare the data structure used by the opcode in AudioDSP.h. The header file csdl.h must therefore be included; according to the documentation:
Plugin opcodes can extend the functionality of Csound, providing new functionality that is exposed
as opcodes in the Csound language. Plugins need to include this header file only, as it will bring
all necessary data structures to interact with Csound. It is not necessary for plugins to link to the
libcsound library, as plugin opcodes will always receive a CSOUND* pointer (to the CSOUND_struct)
which contains all the API functions inside. This is the basic template for a plugin opcode. See the
manual for further details on accepted types and function call rates. The use of the LINKAGE macro
is highly recommended, rather than calling the functions directly.
typedef struct {
OPDS h;
MYFLT *ar, *asig, *kcutoff, *kresonance;
//…
} MOOGLADDER_OPCODE;
/* audio-rate processing routine */
int aMOOGLADDER(CSOUND *cs, MOOGLADDER_OPCODE *p)
{
    //…
}
In the Orchestra code, we can call MOOGLADDER in the same way as the natively compiled opcodes:
aOutput MOOGLADDER aInput, kcutoff, kres
Conclusion
1. The descriptions here cover the essential audio integrations in iOS. Some of the topics will soon be out of date, like Inter-App Audio (IAA), which has been deprecated by Apple since iOS 13, or Audiobus, which is likewise superseded by the modern AUv3 technology.
2. This approach covers the indispensable features for audio integration using Csound in professional audio software applications, and presents some workarounds for intrinsic idiosyncratic issues of the Csound world.
3. The integration of Csound with the AUv3 architecture deserves a separate study; meanwhile, in the tutorial repository you can download an Xcode project template that uses Csound as the audio engine for an AUv3 plugin extension. The template is self-explanatory.
4. All the tutorials use the latest Csound 6.14 compiled for the Apple Catalyst SDK, which means that the apps can run as universal apps on both iOS and macOS (since Catalina >= 10.15).
Links
Csound for iOS (look for the iOS-zip file)
Online Tutorial
apeSoft
Audiobus
A Tasty
12 E. CSOUND ON ANDROID
There is no essential difference between running Csound on a computer and running it on a smart-
phone. Csound has been available on the Android platform since 2012 (Csound 5.19), thanks to
the work of Victor Lazzarini and Steven Yi. Csound 6 was ported to Android, and enhanced, by
Michael Gogins and Steven Yi in the summer of 2013. Two packages are available:
1. The CsoundAndroid library, which is intended to be used by developers for creating apps
based on Csound. This is available for download at Csound’s download page.
2. The Csound for Android app, which is a self-contained environment for creating, editing, de-
bugging, and performing Csound pieces on Android. The app includes a number of built-
in example pieces. This is available from the Google Play store, or for download from the
csound-extended repository releases page.
For more information about these packages, download them and consult the documentation con-
tained therein.
The app has a built-in, pre-configured user interface with nine sliders, five push buttons, one track-
pad, and a 3-dimensional accelerometer that are pre-assigned to control channels which can be
read using Csound’s chnget opcode.
The app also contains an embedded Web browser, based on WebKit, that implements most fea-
tures of the HTML5 standard. This embedded browser can run Csound pieces written as .html
files. In addition, the app can render HTML and JavaScript code that is contained in an optional
<html> element of a regular .csd file.
In both cases, the JavaScript context of the Web page will contain a global Csound object with
a JavaScript interface that implements useful functions of the Csound API. This can be used to
control Csound from JavaScript, handle events from HTML user interfaces, generate scores, and
do many other things. For a more complete introduction to the use of HTML with Csound, see 12
G.
The app has some limitations and missing features compared with the longer-established plat-
forms:
However, some of the more useful plugins are indeed available on Android:
1. The signal flow graph opcodes for routing audio from instruments to effects, etc.
2. The FluidSynth opcodes for playing SoundFonts.
3. The Open Sound Control (OSC) opcodes.
4. The libstdutil library, which enables Csound to be used for various time/frequency analysis
and resynthesis tasks, and for other purposes.
Using the Csound for Android app is similar to using an application on a regular computer. You
need to be able to browse the file system.
There are a number of free and paid apps that give users the ability to browse the Linux file system that exists on all Android devices. If you don't already have such a utility, you should install a file browser that provides access to as much of the file system on your device as possible, including system storage and external storage such as an SD card. The free AndroZip app can do this, for instance.
If you render soundfiles, they take up a lot of space. For example, CD-quality stereo soundfiles (44.1 kHz, 16 bit) take up about 10 megabytes per minute of sound (44,100 frames × 2 bytes × 2 channels × 60 seconds ≈ 10.6 MB). Higher quality or more channels take up even more room. But even without extra storage, a modern smartphone should have gigabytes of free storage. This is actually enough to make an entire album of pieces.
On most devices, installing extra storage is easy and not very expensive. Obtain the largest possi-
ble SD card, if your device supports them. This will vastly expand the amount of available space,
up to 32 or 64 gigabytes or even more.
Download to Device
To download the Csound for Android app to your device, go online using Google Search or a Web
browser. You can find the application package file, CsoundApplication-release.apk, on the csound-
extended releases page (you may first have to allow your Android device to install an app which is
not in Google Play).
Click on the filename to download the package. The download will happen in the background.
You can then go to the notifications bar of your device and click on the downloaded file. You
will be presented with one or more options for how to install it. The installer will ask for certain
permissions, which you need to grant.
It’s also easy to download the CsoundApplication-release.apk file to a personal computer. Once
you have downloaded the file from GitHub, connect your device to the computer with a USB ca-
ble. The file system of the device should then automatically be mounted on the file system of
the computer. Find the CsoundApplication-release.apk in the computer’s download directory, and
copy the CsoundApplication-release.apk file. Find your device’s download directory, and paste the
CsoundApplication-release.apk file there.
Then you will need to use a file browser that is actually on your device, such as AndroZip. Browse
to your Download directory, select the CsoundApplication-release.apk file, and you should be pre-
sented with a choice of actions. Select the Install action. The installer will ask for certain permis-
sions, which you should give.
User Interface
Tabs
The Csound for Android app has a tabbed user interface. The tabs include:
HTML – Displays the Web page specified by HTML code in the piece, which may include interactive widgets, 3-dimensional graphics, and more.
WIDGETS – Displays built-in widgets bound to control channels with predefined names.
HELP – Displays the online Csound Reference Manual in an embedded Web browser.
ABOUT – Displays the Csound home page in an embedded Web browser.
Main Menu
The app also has a top-level menu with the following commands:
NEW… – creates a blank template CSD file in the root directory of the user's storage for the user to edit. The CSD file will be remembered and performed by Csound.
OPEN… – opens an existing CSD file in the root directory of the user’s storage. The user’s storage
filesystem can be navigated to find other files.
RUN/STOP – if a CSD file has been loaded, pushing the button starts running Csound; if Csound
is running, pushing the button stops Csound. If the <CsOptions> element of the CSD file con-
tains -odac, Csound’s audio output will go to the device audio output. If the element contains
-osoundfilename, Csound’s audio output will go to the file soundfilename, which should be
a valid Linux pathname in the user’s storage filesystem.
Privacy policy – presents the Csound for Android app’s privacy policy.
The widgets are assigned control channel names slider1 through slider9, butt1 through butt5, track-
pad.x, and trackpad.y. In addition, the accelerometer on the Android device is available as ac-
celerometerX, accelerometerY, and accelerometerZ.
The values of these widgets are normalized between 0 and 1, and can be read into Csound during
performance using the chnget opcode, like this:
kslider1_value chnget "slider1"
The area below the trackpad prints messages output by Csound as it runs.
Settings Menu
The Settings menu on your device offers the following choices:
Audio driver – selects the audio driver: Automatic chooses the optimal driver for your device (this is the default); the older OpenSL ES driver supports both audio input and audio output; the newer AAudio driver provides lower audio output latency on Oreo or later.
Plugins – an (additional) directory for plugin opcodes.
Output – overrides the default soundfile output directory.
Samples – overrides the default directory from which to load sound samples.
Analysis – overrides the default directory from which to load analysis files.
Include – overrides the default directory from which to load Csound #include files.
These settings are not required, but they can make Csound easier and faster to use.
Example Pieces
From the app’s menu, select the Examples command, then select one of the listed examples, for
example Xanadu by Joseph Kung. You may then click on the RUN button to perform the example,
or the EDITOR tab to view the code for the piece. If you want to experiment with the piece, you can
use the Save as… command to save a copy on your device’s file system under a different name.
You can then edit the piece and save your changes.
Just to prove that everything is working, start the Csound for Android app. Go to the app menu,
select the Examples item, select the Xanadu example, and it will be loaded into Csound. Then click
on the RUN command. Its name should change to STOP, and Csound’s runtime messages should
begin to scroll down the MESSAGES tab. At the same time, you should hear the piece play. You can
stop the performance at any time by selecting the STOP command, or you can let the performance
complete on its own.
That’s all there is to it. You can scroll up and down in the messages pane if you need to find a
particular message, such as an error or warning.
If you want to look at the text of the piece, or edit it, select the Edit button. If you have installed Jota, that editor should open with the text of the piece, which you can save, or not. You can edit the piece with this editor, and any changes you make and save will be performed the next time you start the piece.
Run the Csound for Android app and select the NEW… command. You should be presented with a file dialog asking you for a filename for your piece. Type in toot.csd, and select the SAVE button.
The file will be stored in the root directory of your user storage on your device. You can save the
file to another place if you like.
The text editor should open with a template CSD file. Your job is to fill out this template to hear
something.
Create a blank line between <CsOptions> and </CsOptions>, and type -odac -d -m3. This
means send audio to the real-time output (-odac), do not display any function tables (-d), and log
some informative messages during Csound’s performance (-m3).
Create a blank line between <CsInstruments> and </CsInstruments> and type the following
text:
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
asignal poscil 0.2, 440
out asignal
endin
This is just about the simplest possible Csound orchestra. The orchestra header specifies an audio
signal sampling rate of 44,100 frames per second, with 32 audio frames per control signal sample,
and one channel of audio output. The instrument is just a simple sine oscillator. It plays a tone at
concert A.
Create a blank line between <CsScore> and </CsScore>, and type the following text:
i1 0 5
Select the Csound app’s RUN button. You should hear a loud sine tone for 5 seconds. If you don’t
hear anything, perhaps your device doesn’t support audio at 44100 Hertz, so try sr = 48000
instead.
If you want to save your audio output to a soundfile named test.wav, change -odac above to,
for example, -o/storage/emulated/0/Music/test.wav. Android is fussy about writing to
device storage, so you may need to use exactly the directory printed in the MESSAGES tab when
the app starts.
That’s it!
Using the Widgets
The Csound for Android app provides access to a set of predefined on-screen widgets, as well as to the accelerometer on the device. All of these controllers are permanently assigned to predefined control channels with predefined names, and mapped to a predefined range of values, from 0 to 1.
You should be able to cut and paste this code into your own pieces without many changes.
The first step is to declare one global variable for each of the control channels, with the same name
as the control channel, at the top of the orchestra header, initialized to a value of zero:
gkslider1 init 0
gkslider2 init 0
gkslider3 init 0
gkslider4 init 0
gkslider5 init 0
gkslider6 init 0
gkslider7 init 0
gkslider8 init 0
gkslider9 init 0
gkbutt1 init 0
gkbutt2 init 0
gkbutt3 init 0
gkbutt4 init 0
gkbutt5 init 0
gktrackpadx init 0
gktrackpady init 0
gkaccelerometerx init 0
gkaccelerometery init 0
gkaccelerometerz init 0
Then write an always-on instrument that reads each of these control channels into each of those
global variables. At the top of the orchestra header:
alwayson "Controls"
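The body of such a Controls instrument simply mirrors each channel into its global variable; a minimal sketch, using the channel names listed above:

```csound
instr Controls
  gkslider1        chnget "slider1"
  gkslider2        chnget "slider2"
  gkslider3        chnget "slider3"
  gkslider4        chnget "slider4"
  gkslider5        chnget "slider5"
  gkslider6        chnget "slider6"
  gkslider7        chnget "slider7"
  gkslider8        chnget "slider8"
  gkslider9        chnget "slider9"
  gkbutt1          chnget "butt1"
  gkbutt2          chnget "butt2"
  gkbutt3          chnget "butt3"
  gkbutt4          chnget "butt4"
  gkbutt5          chnget "butt5"
  gktrackpadx      chnget "trackpad.x"
  gktrackpady      chnget "trackpad.y"
  gkaccelerometerx chnget "accelerometerX"
  gkaccelerometery chnget "accelerometerY"
  gkaccelerometerz chnget "accelerometerZ"
endin
```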
So far, everything is common to all pieces. Now, for each specific piece and specific set of instruments, write another always-on instrument that maps the controller values to the names and ranges required for your actual instruments. This code can, in addition, make use of the peculiar button widgets, which only signal changes of state and do not report continuously whether they are on or off. These examples are from Gogins/Drone-IV.csd.
Now the controllers are re-mapped to sensible ranges, and have names that make sense for your instruments. They can be used as follows. Note particularly that, just above the instrument definition (in other words, actually in the orchestra header), these global variables are initialized with values that will work in performance, in case the user does not set the widgets to appropriate positions before starting Csound. This is necessary because the widgets in the Csound for Android app, unlike, say, the widgets in CsoundQt, do not “remember” their positions and values from performance to performance.
gkratio1 init 1
gkratio2 init 1/3
gkindex1 init 1
gkindex2 init 0.0125
instr Phaser
insno = p1
istart = p2
iduration = p3
ikey = p4
ivelocity = p5
iphase = p6
ipan = p7
iamp = ampdb(ivelocity) * 8
iattack = gioverlap
idecay = gioverlap
isustain = p3 - gioverlap
p3 = iattack + isustain + idecay
kenvelope transeg 0.0, iattack / 2.0, 1.5, iamp / 2.0, iattack / 2.0,
-1.5, iamp, isustain, 0.0, iamp, idecay / 2.0, 1.5, iamp / 2.0,
idecay / 2.0, -1.5, 0
ihertz = cpsmidinn(ikey)
print insno, istart, iduration, ikey, ihertz, ivelocity, iamp, iphase, ipan
isine ftgenonce 0,0,65536,10,1
khertz = ihertz
ifunction1 = isine
ifunction2 = isine
a1,a2 crosspm gkratio1, gkratio2, gkindex1, gkindex2,
khertz, ifunction1, ifunction2
aleft, aright pan2 a1+a2, ipan
adamping linseg 0, 0.03, 1, p3 - 0.1, 1, 0.07, 0
aleft = adamping * aleft * kenvelope
aright = adamping * aright * kenvelope
outleta "outleft", aleft
outleta "outright", aright
endin
12 F. CSOUND AND HASKELL
Csound-expression
Csound-expression is a framework for the creation of computer music. It is a Haskell library that eases the use of Csound by generating Csound files from Haskell code.
With the help of the library, Csound instruments can be created on the fly. A few lines in the interpreter are enough to get a cool sound. Some of the features of the library are heavily inspired by reactive programming. Instruments can be invoked with event streams, and event streams can be combined in the manner of reactive programming. The GUI widgets produce event streams as control messages. Moreover, with Haskell all standard types and functions like lists, maps and trees can be used. By this, code and data can be organized easily.
One of the great features that comes with the library is a big collection of solid patches: predefined synthesizers with high-quality sound. They are provided with the library csound-catalog.
Csound-expression is an open source library. It's available on Hackage, the central repository of open source Haskell projects.
Key principles
Here is an overview of the features and principles:
- Support for interactive music coding. We can create our sounds in the REPL, so we can chat with our audio engine and quickly test ideas. This greatly speeds up development compared to the traditional compile-listen style.
- With the library we can create our own libraries. We can create a palette of instruments and use it as a library. That means we can just import the instruments; there is no need for copy-and-paste, or for worrying about name collisions while pasting. In fact, there is a library on Hackage called csound-catalog. It defines great high-quality instruments from the Csound Catalog and other sources.
- Hide Csound's low-level wiring as much as possible (no IDs for ftables, instruments, global variables). Haskell is a modern language with a rich set of abstractions. The author tried to keep the Csound primitives as close to Haskell as possible. For example, invoking an instrument is just applying a function.
- No distinction between audio and control rates at the type level. All rates are derived from the context. If the user plugs a signal into an opcode that expects an audio-rate signal, the argument is converted to the right rate. The user can still force a signal to be of the desired type.
- Less typing, more music. Short names are used for all types, and the library is designed so that all expressions can be built without type annotations and the compiler can easily derive all types. No complex type classes or brainy language concepts are used.
- Ensure that the output signal is limited in amplitude. Csound can produce signals with huge amplitudes, and a little typo can damage your ears and your speakers. In generated code all signals are clipped by the 0dbfs value, and 0dbfs is set to 1, just as in Pure Data. So 1 is the absolute maximum value for amplitude.
- Remove the score/instrument barrier. Let an instrument play a score within a note and trigger other instruments. Triggering an instrument is just applying a function; it produces a signal as output, which can be used in another instrument, and so on.
- Set Csound flags with meaningful (well-typed) values, and derive as much as possible from the context. This principle lets us start with very simple expressions. We can create an audio signal, apply the function dac to it, and we are ready to hear the result in the speakers. No XML copy-and-paste form is needed. It's as easy as typing the line
> dac (osc 440)
in the interpreter.
- Standard functions for musical needs. We often need standard waveforms, filters and ADSRs. Some functions are not so easy to use in Csound, so there are a lot of predefined functions that capture common musical ideas. The library strives to define audio DSP primitives in their most basic, easiest form.
– There are audio waves: osc, saw, tri, sqr, pw, ramp, and their unipolar friends (useful for LFOs).
– There are filters: lp, hp, bp, br, mlp (moog low-pass), filt (for chaining several filters), and formant filters with predefined vowels.
– There are handy envelopes: fades, fadeOut, fadeIn, linseg (with held last value).
– There are noisy functions: white, pink.
– There are step sequencers: sqrSeq, sawSeq, adsrSeq, and many more. A step sequencer produces a sequence of unipolar shapes of a given waveform; the scale factors are defined as a list of values.
- Composable GUIs. Interactive instruments should be easy to make. A GUI widget is a container for a signal: it carries an output alongside its visual representation. There are standard ways of composing the visuals (such as horizontal or vertical grouping), which gives us an easy way to combine GUIs. That's how we can create a filtered sawtooth that is controlled with sliders:
> dac $ vlift2 (\cps q -> mlp (100 + 5000 * cps) q (saw 110))
(uslider 0.5) (uslider 0.5)
The function uslider produces a slider which outputs a unipolar signal (ranging from 0 to 1).
The single argument is an initial value. The function vlift2 groups the visuals vertically and applies a function of two arguments to the outputs of the sliders. This way we get a new widget that produces the filtered sawtooth wave and contains two sliders. It can become part of another expression; no separate declarations are needed.
- Event streams inspired by FRP (functional reactive programming). An event stream can produce values over time. It can be a metronome click, the push of a button, the switch of a toggle button and so on. There is a rich set of functions to combine events: we can map over events, filter the stream of events, merge two streams, and accumulate results. That's how we can count the number of clicks:
let clicks = lift1 (\evt -> appendE (0 :: D) (+) $ fmap (const 1)
evt) $ button "Click me!"
- There is a library that greatly simplifies the creation of music based on samples. It's called csound-sampler. With it we can easily create patterns out of wav files, reverse files, or play random segments of files.
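The step sequencers mentioned above can be sketched as follows (a sketch, assuming csound-expression is installed; exact signatures may vary between library versions):

```haskell
-- An amplitude step sequence shaped like saw teeth, three steps per
-- cycle, running at 2 cycles per second, applied to a 220 Hz sine.
import Csound.Base

main :: IO ()
main = dac $ mul (sawSeq [1, 0.5, 0.25] 2) (osc 220)
```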
How to try out the library
Once you have installed all those tools, you can type in the terminal:
cabal install csound-catalog --lib
This will install csound-expression and batteries. If you want just the main library, use csound-expression instead of csound-catalog.
If your cabal version is lower than 3.0, you can skip the flag --lib. The version of cabal can be checked with:
cabal --version
After that, the library is installed and ready to be used. In the Haskell interpreter you can import the library and hear the greeting test sound:
> ghci
> import Csound.Base
> dac (testDrone3 220)
It works, and you can hear the sound, if you have installed everything and the system audio is properly configured to work with the default Csound settings.
The next step would be to read through the tutorial. The library covers almost all features of Csound, so it is as huge as Csound itself, but most concepts are easy to grasp, and it is driven by composition of small parts.
Links
The library tutorial: https://github.com/spell-music/csound-expression/blob/master/tutorial/Index.md
The library homepage on Hackage, the Haskell repository of open source projects: http://hackage.haskell.org/package/csound-expression
12 G. CSOUND IN HTML AND JAVASCRIPT
Introduction
Currently it is possible to use Csound together with HTML and JavaScript in at least the following
environments:
1. CsoundQt, described in 10 A.
4. Csound built for WebAssembly, which comes in two slightly different forms.
For instructions on installing any of these environments, please consult the documentation pro-
vided in the links mentioned above.
All of these environments provide a JavaScript interface to Csound, which appears as a global Csound object in the JavaScript context of a Web page. Please note that there may be minor differences in the JavaScript interface to Csound between these environments.
With HTML and JavaScript it is possible to define user interfaces, to control Csound, and to gen-
erate Csound scores and even orchestras.
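As a small illustration of score generation, the following self-contained sketch builds a series of i-statements; the resulting string could then be handed to the global Csound object (the instrument number and p-fields here are purely illustrative):

```javascript
// Sketch: generating a Csound score with plain JavaScript.
// Builds "i" statements for instr 1, one note every half second,
// with rising chromatic pitch in p4.
function generateScore(count) {
  const lines = [];
  for (let k = 0; k < count; k++) {
    const start = k * 0.5;                   // p2: start time in beats
    const pitch = 220 * Math.pow(2, k / 12); // p4: frequency in Hz
    lines.push(`i 1 ${start} 0.5 ${pitch.toFixed(3)}`);
  }
  return lines.join("\n");
}

console.log(generateScore(10));
```

The string returned by generateScore could then be passed to a function such as csound.readScore, mentioned below.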
In all of these environments, a piece may be written in the form of a Web page (an .html file), with
access to a global instance of Csound that exists in the JavaScript context of that Web page. In
such pieces, it is common to embed the entire .orc or .csd file for Csound into the .html code as a
JavaScript multiline string literal or an invisible TextArea widget.
In CsoundQt and Csound for Android, the HTML code may be embedded in an optional <html>
element of the Csound Structured Data (.csd) file. This element essentially defines a Web page
that contains Csound, but the host application is responsible for editing the Csound orchestra and
running it.
HTML must be understood here to represent not only Hypertext Markup Language, but also all of the other Web standards that are currently supported by Web browsers, Web servers, and the Internet: cascading style sheets (CSS), HTML5 features such as drawing on a graphics canvas visible in the page, producing animated 3-dimensional graphics with WebGL (including shaders and GPU acceleration), Web Audio, various forms of local data storage, WebSockets, and so on. This whole conglomeration of standards is currently defined and maintained under the non-governmental leadership of the World Wide Web Consortium (W3C), which in turn is primarily driven by commercial interests belonging to the Web Hypertext Application Technology Working Group (WHATWG). Most modern Web browsers implement almost all of the W3C standards up to and including HTML5 at an impressive level of performance and consistency. To see what features are available in your own Web browser, go to this test page. All of this is now usable in Csound pieces.
An Example of Use
For an example of a few of the things that are possible with HTML in Csound, take a look at the following piece, Scrims, which runs in contemporary Web browsers using a WebAssembly build of Csound and JavaScript code. In fact, it's running right here on this page!
Scrims is a demanding piece, and may not run without dropouts unless you have a rather fast
computer. However, it demonstrates a number of ways to use HTML and JavaScript with Csound:
1. Use of the Three.js library to generate a 3-dimensional animated image of the popcorn frac-
tal.
2. Use of an external JavaScript library, silencio, to sample the moving image and to gen-
erate Csound notes from it, that are sent to Csound in real time with the Csound API
csound.readScore function.
3. Use of a complex Csound orchestra that is embedded in a hidden TextArea on the page.
4. Use of the dat.gui library to easily create sliders and buttons for controlling the piece in real
time.
5. Use of the jQuery library to simplify handling events from sliders, buttons, and other HTML
elements.
6. Use of a TextArea widget as a scrolling display for Csound’s runtime messages.
To see this code in action, you can right-click on the piece and select the Inspect command. Then
you can browse the source code, set breakpoints, print values of variables, and so on.
It is true that LaTeX can do a better job of typesetting than HTML and CSS. It is true that game
engines can do a better job for interactive, 3-dimensional computer animation with scene graphs
than WebGL. It is true that compiled C or C++ code runs faster than JavaScript. It is true that
Haskell is a more fully-featured functional programming language than JavaScript. It is true that
MySQL is a more powerful database than HTML5 storage.
But the fact is, there is no single program except for a Web browser that manages to be quite as
functional in all of these categories in a way that beginning to intermediate programmers can use,
and for which the only required runtime is the Web browser itself.
12 G. CSOUND IN HTML AND JAVASCRIPT Introduction
For this reason alone, HTML makes a very good front end for Csound. Furthermore, the Web stan-
dards are maintained in a stable form by a large community of competent developers representing
diverse interests. So I believe HTML as a front end for Csound should be quite stable and remain
backwardly compatible, just as Csound itself remains backwardly compatible with old pieces.
How it Works
The Web browser embedded into CsoundQt is the Qt WebEngine. The Web browser embedded
into Csound for Android is the WebView available in the Android SDK.
For a .html piece, the front end renders the HTML as a Web page and displays it in an embedded
Web browser. The front end injects an instance of Csound into the JavaScript context of the Web page.
For a .csd piece, the front end parses the <html> element out of the .csd file. The front end then
loads this Web page into its embedded browser, and injects the same instance of Csound that is
running the .csd into the JavaScript context of the Web page.
It is important to understand that any valid HTML code can be used in Csound’s <html> element.
It is just a Web page like any other Web page.
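Because the injection happens only when a Csound-aware front end loads the page, code in the `<html>` element often guards its calls. A minimal sketch (the global `csound` object is assumed to be whatever the front end injects; outside a front end it simply does not exist):

```javascript
// Sketch: check whether a front end has injected a csound object
// before using it, so the same page also loads in a plain browser.
function csoundAvailable() {
  return typeof csound !== "undefined" && csound !== null;
}

if (csoundAvailable()) {
  csound.message("Hello from the web page!\n");
} else {
  console.log("No Csound front end detected; running as a plain page.");
}
```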
In general, the different Web standards are either defined as JavaScript classes and libraries, or
glued together using JavaScript. In other words, HTML without JavaScript is dead, but HTML with
JavaScript event handlers attached to its document elements comes alive. Indeed, JavaScript can
itself define HTML documents by programmatically creating
Document Object Model objects.
JavaScript is the engine and the major programming language of the World Wide Web in general,
and of code that runs in Web browsers in particular. JavaScript is a standardized language, and
it is a functional programming language similar to Scheme. JavaScript also allows classes to be
defined by prototypes.
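As a small illustration (not tied to Csound, the names are made up for the example), here is the same "class" written once with a constructor plus prototype and once with ES6 class syntax, which is sugar over the same mechanism:

```javascript
// A prototype-based "class": methods live on the constructor's prototype.
function Oscillator(freq) {
  this.freq = freq;
}
Oscillator.prototype.transpose = function (semitones) {
  return this.freq * Math.pow(2, semitones / 12);
};

// The same idea with ES6 class syntax.
class Oscillator2 {
  constructor(freq) {
    this.freq = freq;
  }
  transpose(semitones) {
    return this.freq * Math.pow(2, semitones / 12);
  }
}

const a = new Oscillator(440);
console.log(a.transpose(12)); // → 880 (one octave up)
console.log(Object.getPrototypeOf(a) === Oscillator.prototype); // → true
```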
The JavaScript execution context of a Csound Web page contains Csound itself as a csound
JavaScript object that has at least the following methods:
;; [returns a number]
getVersion()
;; [returns the numeric result of the evaluation]
compileOrc(orchestra_code)
evalCode(orchestra_code)
readScore(score_lines)
setControlChannel(channel_name,number)
;; [returns a number representing the channel value]
getControlChannel(channel_name)
message(text)
;; [returns a number]
getSr()
;; [returns a number]
getKsmps()
;; [returns a number]
getNchnls()
;; [returns 1 if Csound is playing, 0 if not]
isPlaying()
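A typical use of this interface is to build a score line in JavaScript and hand it to `csound.readScore`. The helper below only formats the line, so its output can be checked without a running Csound instance; the guarded call at the end assumes the front end has injected the global `csound` object described above:

```javascript
// Build a Csound "i" statement from numeric parameters:
// i <instr> <start> <dur> <p4> <p5> ...
function scoreLine(instr, start, dur, ...pfields) {
  return "i " + [instr, start, dur, ...pfields].join(" ") + "\n";
}

const note = scoreLine(1, 0, 2, 60, 0.5); // "i 1 0 2 60 0.5\n"

// Only talk to Csound when a front end has injected it.
if (typeof csound !== "undefined") {
  csound.message("sending: " + note);
  csound.readScore(note);
}
```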
The front end contains a mechanism for forwarding JavaScript calls in the Web page’s JavaScript
context to native functions that are defined in the front end, which passes them on to Csound. This
involves a small amount of C++ glue code that the user does not need to know about. In CsoundQt,
the glue code uses a JavaScript proxy generator that is injected into the JavaScript context of
the Web page, but again, the user does not need to know anything about this.
In the future, more functions from the Csound API will be added to this JavaScript interface, in-
cluding, at least in some front ends, the ability for Csound to appear as a Node in a Web Audio
graph (this is already possible in the Emscripten build of Csound).
Let’s get started and do a few things in the simplest possible way, in a series of toots. All of these
pieces are completely contained in unfolding boxes here, from which they can be copied and then
pasted into the CsoundQt editor, and some pieces are included as HTML examples in CsoundQt.
HelloWorld.csd
This is about the shortest CSD that shows some HTML output.
EXAMPLE 12G01_Hello_HTML_World.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
</CsInstruments>
<html>
Hello, World, this is Csound!
</html>
<CsScore>
e 1
</CsScore>
</CsoundSynthesizer>
;example by Michael Gogins
Minimal_HTML_Example.csd
This is a simple example that shows how to control Csound using an HTML slider.
EXAMPLE 12G02_Minimal_HTML.csd
<CsoundSynthesizer>
<CsOptions>
-odac -d
</CsOptions>
<html>
<head> </head>
<body bgcolor="lightblue">
<script>
function onGetControlChannel(value) {
document.getElementById(
'testChannel'
).innerHTML = value;
} // to test csound.getControlChannel with QtWebEngine
</script>
<h2>Minimal Csound-Html5 example</h2>
<br />
<br />
Frequency:
<input
type="range"
id="slider"
oninput='csound.setControlChannel("testChannel",this.value/100.0); '
/>
<br />
<button
id="button"
onclick='csound.readScore("i 1 0 3")'
>
Event
</button>
<br /><br />
Get channel from csound with callback (QtWebchannel):
<label id="getchannel"></label>
<button
onclick='csound.getControlChannel("testChannel", onGetControlChannel)'
>
Get</button
><br />
Value from channel "testChannel":
<label id="testChannel"></label><br />
<br />
Get as return value (QtWebkit)
<button
onclick='alert("TestChannel: "+csound.getControlChannel("testChannel"))'
>
Get as return value
</button>
<br />
</body>
</html>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
instr 1
kfreq = 200 + chnget:k("testChannel") * 500
printk2 kfreq
aenv linen 1,0.1,p3,0.25
out poscil(0.5,kfreq)*aenv
endin
; schedule 1,0,0.1, 1
</CsInstruments>
<CsScore>
i 1 0 0.5 ; to hear if Csound is loaded
f 0 3600
</CsScore>
</CsoundSynthesizer>
;example by Tarmo Johannes
;reformatted for flossmanual by Hlödver Sigurdsson
Styled_Sliders.csd
And now a more complete example where the user controls both the compositional algorithm, the
logistic equation, and the sounds of the instruments. In addition, HTML styles are used to create
a more pleasing user interface.
First the entire piece is presented, then the parts are discussed separately.
EXAMPLE 12G03_Extended_HTML.csd
<CsoundSynthesizer>
; Example about using CSS in html section of CSD
; By Michael Gogins 2016
; Reformatted for flossmanual by Hlödver Sigurdsson
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
alwayson "Reverberation"
alwayson "MasterOutput"
alwayson "Controls"
//////////////////////////////////////////////
// By Michael Gogins.
//////////////////////////////////////////////
instr ModerateFM
i_instrument = p1
i_time = p2
i_duration = p3
i_midikey = p4
i_midivelocity = p5
i_phase = p6
i_pan = p7
i_depth = p8
i_height = p9
i_pitchclassset = p10
i_homogeneity = p11
iattack = 0.002
isustain = p3
idecay = 8
irelease = 0.05
iHz = cpsmidinn(i_midikey)
idB = i_midivelocity
iamplitude = ampdb(idB) * 4.0
kcarrier = gk_FmCarrier
imodulator = 0.5
ifmamplitude = 0.25
kindex = gk_FmIndex * 20
ifrequencyb = iHz * 1.003
kcarrierb = kcarrier * 1.004
aindenv transeg 0.0, iattack, -11.0, 1.0, idecay, -7.0, 0.025, isustain, 0.0, 0.025, irelease, -7.0, 0.0
aindex = aindenv * kindex * ifmamplitude
isinetable ftgenonce 0, 0, 65536, 10, 1, 0, .02
//rest of the FM synthesis and output code ...
endin
gkReverberationWet init .5
gk_ReverberationDelay init .6
instr Reverberation
ainleft inleta "inleft"
ainright inleta "inright"
aoutleft = ainleft
aoutright = ainright
kdry = 1.0 - gkReverberationWet
awetleft, awetright reverbsc ainleft, ainright, gk_ReverberationDelay, 18000
aoutleft = ainleft * kdry + awetleft * gkReverberationWet
aoutright = ainright * kdry + awetright * gkReverberationWet
outleta "outleft", aoutleft
outleta "outright", aoutright
prints "instr %4d t %9.4f d %9.4f k %9.4f v %9.4f p %9.4f\n", \
p1, p2, p3, p4, p5, p7
endin
gk_MasterLevel init 1
instr MasterOutput
ainleft inleta "inleft"
ainright inleta "inright"
aoutleft = gk_MasterLevel * ainleft
aoutright = gk_MasterLevel * ainright
outs aoutleft, aoutright
prints "instr %4d t %9.4f d %9.4f k %9.4f v %9.4f p %9.4f\n", \
p1, p2, p3, p4, p5, p7
endin
instr Controls
gk_FmIndex_ chnget "gk_FmIndex"
if gk_FmIndex_ != 0 then
gk_FmIndex = gk_FmIndex_
endif
gk_FmCarrier_ chnget "gk_FmCarrier"
if gk_FmCarrier_ != 0 then
gk_FmCarrier = gk_FmCarrier_
endif
gk_ReverberationDelay_ chnget "gk_ReverberationDelay"
if gk_ReverberationDelay_ != 0 then
gk_ReverberationDelay = gk_ReverberationDelay_
endif
gk_MasterLevel_ chnget "gk_MasterLevel"
if gk_MasterLevel_ != 0 then
gk_MasterLevel = gk_MasterLevel_
endif
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>
<html>
<head> </head>
<body>
<style type="text/css">
input[type="range"] {
-webkit-appearance: none;
border-radius: 5px;
box-shadow: inset 0 0 5px #333;
background-color: #999;
height: 10px;
width: 100%;
vertical-align: middle;
}
input[type="range"]::-webkit-slider-thumb {
-webkit-appearance: none;
border: none;
height: 16px;
width: 16px;
border-radius: 50%;
background: yellow;
margin-top: -4px;
border-radius: 10px;
}
table td {
border-width: 2px;
padding: 8px;
border-style: solid;
border-color: transparent;
color: yellow;
background-color: teal;
font-family: sans-serif;
}
</style>
<h1>Score Generator</h1>
<script>
var c = 0.99;
var y = 0.5;
function generate() {
csound.message("generate()...\n");
for (i = 0; i < 50; i++) {
var t = i * (1.0 / 3.0);
var y1 = 4.0 * c * y * (1.0 - y);
y = y1;
var key = Math.round(36.0 + y * 60.0);
var note = "i 1 " + t + " 2.0 " + key + " 60 0.0 0.5\n";
csound.readScore(note);
}
}
function on_sliderC(value) {
c = parseFloat(value);
document.querySelector("#sliderCOutput").value = c;
}
function on_sliderFmIndex(value) {
var numberValue = parseFloat(value);
document.querySelector("#sliderFmIndexOutput").value = numberValue;
csound.setControlChannel("gk_FmIndex", numberValue);
}
function on_sliderFmRatio(value) {
var numberValue = parseFloat(value);
document.querySelector("#sliderFmRatioOutput").value = numberValue;
csound.setControlChannel("gk_FmCarrier", numberValue);
}
function on_sliderReverberationDelay(value) {
var numberValue = parseFloat(value);
document.querySelector("#sliderReverberationDelayOutput").value = numberValue;
csound.setControlChannel("gk_ReverberationDelay", numberValue);
}
function on_sliderMasterLevel(value) {
var numberValue = parseFloat(value);
document.querySelector("#sliderMasterLevelOutput").value = numberValue;
csound.setControlChannel("gk_MasterLevel", numberValue);
}
</script>
<table>
<col width="2*" />
<col width="5*" />
<col width="100px" />
<tr>
<td>
<label for="sliderC">c</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderC"
step="0.001"
oninput="on_sliderC(value)"
/>
</td>
<td>
<output for="sliderC" id="sliderCOutput">.5</output>
</td>
</tr>
<tr>
<td>
<label for="sliderFmIndex">Frequency modulation index</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderFmIndex"
step="0.001"
oninput="on_sliderFmIndex(value)"
/>
</td>
<td>
<output for="sliderFmIndex" id="sliderFmIndexOutput">.5</output>
</td>
</tr>
<tr>
<td>
<label for="sliderFmRatio">Frequency modulation ratio</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderFmRatio"
step="0.001"
oninput="on_sliderFmRatio(value)"
/>
</td>
<td>
<output for="sliderFmRatio" id="sliderFmRatioOutput">.5</output>
</td>
</tr>
<tr>
<td>
<label for="sliderReverberationDelay">Reverberation delay</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderReverberationDelay"
step="0.001"
oninput="on_sliderReverberationDelay(value)"
/>
</td>
<td>
<output
for="sliderReverberationDelay"
id="sliderReverberationDelayOutput"
>.5</output
>
</td>
</tr>
<tr>
<td>
<label for="sliderMasterLevel">Master output level</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderMasterLevel"
step="0.001"
oninput="on_sliderMasterLevel(value)"
/>
</td>
<td>
<output for="sliderMasterLevel" id="sliderMasterLevelOutput"
>.5
</output>
</td>
</tr>
<tr>
<td>
<button onclick="generate()">Generate score</button>
</td>
</tr>
</table>
</body>
</html>
Here I have introduced a simple Csound orchestra consisting of a single frequency modulation
instrument feeding first into a reverberation effect, and then into a master output unit. These are
connected using the signal flow graph opcodes. The actual orchestra is of little interest here.
This piece has no score, because the score will be generated at run time. In the <html> element,
I also have added this button:
<button onclick="generate()"> Generate score </button>
When this button is clicked, it calls a JavaScript function that uses the logistic equation, which is
a simple quadratic dynamical system, to generate a Csound score from a chaotic attractor of the
system. This function also is quite simple. Its main job, aside from iterating the logistic equation
a few hundred times, is to translate each iteration of the system into a musical note and send
that note to Csound to be played using the Csound API function readScore(). So the following
<script> element is added to the body of the <html> element:
<script>
var c = 0.99;
var y = 0.5;
function generate() {
csound.message("generate()...\n");
for (i = 0; i < 200; i++) {
var t = i * (1.0 / 3.0);
var y1 = 4.0 * c * y * (1.0 - y);
y = y1;
var key = Math.round(36.0 + y * 60.0);
var note = "i 1 " + t + " 2.0 " + key + " 60 0.0 0.5\n";
csound.readScore(note);
}
}
</script>
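To get a feel for why the generated melodies sound chaotic, the iteration can be run on its own, without sending anything to Csound. The mapping from y to a MIDI key is the same one used in generate() above:

```javascript
// Iterate the logistic map y' = 4*c*y*(1 - y) and collect MIDI keys,
// exactly as generate() does.
function logisticKeys(c, y0, count) {
  const keys = [];
  let y = y0;
  for (let i = 0; i < count; i++) {
    y = 4.0 * c * y * (1.0 - y);
    keys.push(Math.round(36.0 + y * 60.0));
  }
  return keys;
}

// For c close to 1 the orbit is chaotic and the keys wander over most
// of the 36..96 range; for smaller c the orbit settles onto a fixed point.
console.log(logisticKeys(0.99, 0.5, 10));
console.log(logisticKeys(0.6, 0.5, 10));
```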
Adding Sliders
The next step is to add more user control to this piece. We will enable the user to control the
attractor of the piece by varying the constant c, and we will enable the user to control the sound
of the Csound orchestra by varying the frequency modulation index, frequency modulation carrier
ratio, reverberation time, and master output level.
This code is demonstrated on a low level, so that you can see all of the details and understand
exactly what is going on. A real piece would most likely be written at a higher level of abstraction,
for example by using a third party widget toolkit, such as jQuery UI.
The slider for controlling the value of c is a standard HTML input element of type range:
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderC"
step="0.001"
oninput="on_sliderC(value)"
/>
This element has a minimum value of 0 and a maximum value of 1, which normalizes the user's
possible values between 0 and 1. This range could be anything, but in many musical contexts, for
example VST plugins, user control values are always normalized between 0 and 1. The tiny step
attribute simply approximates a continuous range of values.
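Because the slider only delivers a normalized 0..1 value, rescaling to a musically useful range can happen either in the Csound orchestra, as done in this piece, or already in JavaScript. Two hypothetical helpers (not part of the piece) sketch the usual options; an exponential mapping is often more natural for frequencies:

```javascript
// Map a normalized 0..1 slider value linearly onto [lo, hi],
// e.g. for an FM index.
function linMap(v, lo, hi) {
  return lo + v * (hi - lo);
}

// Map it exponentially onto [lo, hi], e.g. for a frequency, so that
// equal slider movements correspond to equal musical intervals.
function expMap(v, lo, hi) {
  return lo * Math.pow(hi / lo, v);
}

console.log(linMap(0.5, 0, 20));    // → 10
console.log(expMap(0.5, 200, 800)); // → 400 (the geometric midpoint)
```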
The most important thing is the oninput attribute, which sets the value of a JavaScript event
handler for the oninput event. This function is called whenever the user changes the value of the
slider.
For ease of understanding, a naming convention is used here, with sliderC being the basic name
and other names of objects associated with this slider taking names built up by adding prefixes or
suffixes to this basic name.
Normally a slider has a label, and it is convenient to show the actual numerical value of the slider.
This can be done like so:
<table>
<col width="2*" />
<col width="5*" />
<col width="100px" />
<tr>
<td>
<label for="sliderC">c</label>
</td>
<td>
<input
type="range"
min="0"
max="1"
value=".5"
id="sliderC"
step="0.001"
oninput="on_sliderC(value)"
/>
</td>
<td>
<output for="sliderC" id="sliderCOutput">.5</output>
</td>
</tr>
</table>
If the slider, its label, and its numeric display are put into an HTML table, that table will act like a
layout manager in a standard widget toolkit, and will resize the contained elements as required to
get them to line up.
The variable c was declared at global scope just above the generate() function, so that variable is
accessible within the on_sliderC function.
Keep in mind, if you are playing with this code, that a new value of c will only be heard when a new
score is generated.
Very similar logic can be used to control variables in the Csound orchestra. The value of the slider
has to be sent to Csound using the channel API, like this:
function on_sliderFmIndex(value) {
var numberValue = parseFloat(value);
document.querySelector("#sliderFmIndexOutput").value = numberValue;
csound.setControlChannel("gk_FmIndex", numberValue);
}
Then, in the Csound orchestra, that value has to be retrieved using the chnget opcode and applied
to the instrument to which it pertains. It is most efficient if the variables controlled by channels
are global variables declared just above their respective instrument definitions. The normalized
values can be rescaled as required in the Csound instrument code.
gk_FmIndex init 0.5
instr ModerateFM
...
kindex = gk_FmIndex * 20
...
endin
Also for the sake of efficiency, a global, always-on instrument can be used to read the control
channels and assign their values to these global variables:
instr Controls
gk_FmIndex_ chnget "gk_FmIndex"
if gk_FmIndex_ != 0 then
gk_FmIndex = gk_FmIndex_
endif
gk_FmCarrier_ chnget "gk_FmCarrier"
if gk_FmCarrier_ != 0 then
gk_FmCarrier = gk_FmCarrier_
endif
gk_ReverberationDelay_ chnget "gk_ReverberationDelay"
if gk_ReverberationDelay_ != 0 then
gk_ReverberationDelay = gk_ReverberationDelay_
endif
gk_MasterLevel_ chnget "gk_MasterLevel"
if gk_MasterLevel_ != 0 then
gk_MasterLevel = gk_MasterLevel_
endif
endin
Note that each actual global variable has a default value, which is only overridden if the user actu-
ally operates its slider.
The default appearance of HTML elements is brutally simple. But each element has attributes that
can be used to change its appearance, and these offer a great deal of control.
Of course, setting for example the font attribute for each label on a complex HTML layout is te-
dious. Therefore, this example shows how to use a style sheet. We don’t need much style to get
a much improved appearance:
<style type="text/css">
input[type="range"] {
-webkit-appearance: none;
border-radius: 5px;
box-shadow: inset 0 0 5px #333;
background-color: #999;
height: 10px;
width: 100%;
vertical-align: middle;
}
input[type="range"]::-webkit-slider-thumb {
-webkit-appearance: none;
border: none;
height: 16px;
width: 16px;
border-radius: 50%;
background: yellow;
margin-top: -4px;
border-radius: 10px;
}
table td {
border-width: 2px;
padding: 8px;
border-style: solid;
border-color: transparent;
color: yellow;
background-color: teal;
font-family: sans-serif;
}
</style>
This little style sheet is generic, that is, it applies to every element on the HTML page. It says, for
example, that table td (table cells) are to have a yellow sans-serif font on a teal background, and
this will apply to every table cell on the page. Style sheets can be made more specialized by giving
them names. But for this kind of application, that is not usually necessary.
Conclusion
Most, if not all, of the functions performed by other Csound front ends could be encompassed
by HTML and JavaScript. However, there are a few gotchas. For CsoundQt and other front ends
based on Chrome, there may be extra latency and processing overhead required by inter-process
communications. For Emscripten and other applications that use Web Audio, there may also be
additional latency.
Obviously, much more can be done with HTML, JavaScript, and other Web standards found in
contemporary Web browsers. Full-fledged, three-dimensional, interactive, multi-player computer
games are now being written with HTML and JavaScript. Other sorts of Web applications also are
being written this way.
Sometimes, JavaScript is embedded into an application for use as a scripting language. The
Csound front ends discussed here are examples, but there are others. For example, Max for Live
can be programmed in JavaScript, and so can the open source score editor MuseScore. In fact, in
MuseScore, JavaScript can be used to algorithmically generate notated scores.
13 A. DEVELOPING PLUGIN OPCODES
Csound is possibly one of the most easily extensible of all modern music programming languages.
The addition of unit generators (opcodes) and function tables is generally the most common type
of extension to the language. This is possible through two basic mechanisms: user-defined op-
codes (UDOs), written in the Csound language itself and pre-compiled/binary opcodes, written in
C or C++.1
To facilitate the latter case, Csound offers a simple opcode development API, from which
dynamically-loadable, or plugin unit generators can be built. A similar mechanism for function
tables is also available. For this we can use either the C++ or the C languages. C++ opcodes
are written as classes derived from a template (“pseudo-virtual”) base class OpcodeBase. In the
case of C opcodes, we normally supply a module according to a basic description. The sections
on plugin opcodes will use the C language. For those interested in object-oriented programming,
alternative C++ class implementations for the examples discussed in this text can be extrapolated
from the original C code.
You may find additional information and examples at Csound’s Opcode SDK repository.
Variables of i-type hold values that are set at initialisation time and remain constant for the duration
of the note. The other types are used to hold scalar (k-type), vectorial (a-type) and spectral-frame (f) signal
variables. These will change in performance, so parameters assigned to these variables are set
and modified in the opcode processing function. Scalars will hold a single value, whereas vectors
hold an array of values (a vector). These values are floating-point numbers, either 32- or 64-bit,
depending on the executable version used, defined in C/C++ as a custom MYFLT type.
Plugin opcodes will use pointers to input and output parameters to read and write their input/out-
put. The Csound engine will take care of allocating the memory used for its variables, so the
opcodes only need to manipulate the pointers to the addresses of these variables.
A Csound instrument code can use any of these variables, but opcodes will have to accept specific
types as input and will generate data in one of those types. Certain opcodes, known as polymor-
phic opcodes, will be able to cope with more than one type for a specific parameter (input or out-
put). This generally implies that more than one version of the opcode will have to be implemented,
which will be called depending on the parameter types used.
Plugin opcodes
Originally, Csound opcodes could only be added to the system as statically-linked code. This re-
quired the user to recompile the whole of Csound with the added C module. The introduction
of a dynamic-loading mechanism has provided a simpler way to add opcodes, which only
requires the C code to be compiled and built as a shared, dynamic library. These are known in
Csound parlance as plugin opcodes and the following sections are dedicated to their development
process.
Anatomy of an opcode
The C code for a Csound opcode has three main programming components: a data structure to
hold the internal data, an initialising function and a processing function. From an object-oriented
perspective, an opcode is a simple class, with its attributes, constructor and perform methods. The
data structure will hold the attributes of the class: input/output parameters and internal variables
(such as delays, coefficients, counters, indices etc.), which make up its dataspace.
The constructor method is the initialising function, which sets some attributes to certain values,
allocates memory (if necessary) and anything that is needed for an opcode to be ready for use.
This method is called by the Csound engine when an instrument with its opcodes is allocated in
memory, just before performance, or when a reinitialisation is required.
Performance is implemented by the processing function, or perform method, which is called when
new output is to be generated. This happens at every control period, or ksmps samples. This
implies that signals are generated at two different rates: the control rate, kr, and the audio rate, sr,
which is kr * ksmps samples/sec. What is actually generated by the opcode, and how its perform
method is implemented, will depend on its input and output Csound language data types.
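The relation between the two rates is worth internalizing before writing a perform method. It can be checked with a few lines (JavaScript here, using the header values from the examples earlier in this book):

```javascript
// Control rate kr and audio rate sr are linked by the vector size ksmps.
const sr = 44100;      // audio sampling rate, samples/sec
const ksmps = 32;      // audio samples per control period
const kr = sr / ksmps; // control rate, control periods/sec

console.log(kr); // → 1378.125
// Equivalently, sr = kr * ksmps:
console.log(kr * ksmps === sr); // → true
```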
Opcoding basics
C-language opcodes normally obey a few basic rules, and their development requires very little
knowledge of the actual processes involved in Csound. Plugin opcodes have to provide the three
main programming components outlined above: a data structure plus the initialisation and processing
functions. Once these elements are supplied, all we need to do is add a line telling Csound what
type of opcode it is (whether it is an i-, k- or a-rate based unit generator) and what arguments
it takes.
The first member of an opcode data structure is always the OPDS structure, which holds the
common components of all opcodes.
The Csound opcode API is defined by csdl.h, which should be included at the top of the source
file. The example below shows a simple data structure for an opcode with one output and three
inputs, plus a couple of private internal variables:
#include "csdl.h"

typedef struct _newopc {
  OPDS h;
  MYFLT *out; /* output pointer */
  MYFLT *in1, *in2, *in3; /* input pointers */
  MYFLT var1; /* internal variables */
  MYFLT var2;
} newopc;
Initialisation
The initialisation function is only there to initialise any data, such as the internal variables, or al-
locate memory, if needed. The plugin opcode model in Csound 6 expects both the initialisation
function and the perform function to return an int value, either OK or NOTOK. Both methods take
two arguments: pointers to the CSOUND data structure and the opcode dataspace. The following
example shows an initialisation function. It initialises one of the variables to 0 and the
other to the third opcode input parameter.
int newopc_init(CSOUND *csound, newopc *p){
p->var1 = (MYFLT) 0;
p->var2 = *p->in3;
return OK;
}
Control-rate performance
The processing function implementation will depend on the type of opcode that is being created.
For control rate opcodes, with k- or i-type input parameters, we will be generating one output value
at a time. The example below shows an example of this type of processing function. This simple
example just keeps ramping up or down depending on the value of the second input. The output
is offset by the first input and the ramping is reset if it reaches the value of var2 (which is set to
the third input argument in the constructor above).
int newopc_process_control(CSOUND *csound, newopc *p){
MYFLT cnt = p->var1 + *(p->in2);
if(cnt > p->var2) cnt = (MYFLT) 0; /* check bounds */
*(p->out) = *(p->in1) + cnt; /* generate output */
p->var1 = cnt; /* keep the value of cnt */
return OK;
}
Audio-rate performance
For audio rate opcodes, because they generate audio signal vectors, an internal loop is required
to process the vector samples. This is not necessary with k-rate opcodes because, as we
are dealing with scalar inputs and outputs, the function has to process only one sample at a time.
If we were to make an audio version of the control opcode above (disregarding its usefulness), we
would have to change the code slightly. The basic difference is that we have an audio rate output
instead of control rate. In this case, our output is a whole vector (a MYFLT array) with ksmps
samples, so we have to write a loop to fill it. It is important to point out that the control rate and
audio rate processing functions will produce exactly the same result. The difference here is that
in the audio case, we will produce ksmps samples, instead of just one sample. However, all the
vector samples will have the same value (which actually makes the audio rate function redundant,
but we will use it just to illustrate our point).
int newopc_process_audio(CSOUND *csound, newopc *p){
    uint32_t i, offset = p->h.insdshead->ksmps_offset;
    uint32_t n = CS_KSMPS;
    MYFLT *aout = p->out; /* audio output vector */
    MYFLT cnt = p->var1 + *(p->in2);
    if(cnt > p->var2) cnt = (MYFLT) 0; /* check bounds */
    /* processing loop */
    for(i=offset; i < n; i++) aout[i] = *(p->in1) + cnt;
    p->var1 = cnt; /* keep the value of cnt */
    return OK;
}
In order for Csound to be aware of the new opcode, we have to register it. This is done by filling
an array of OENTRY opcode registration structures, called localops (declared static, so that it is
local to this module):
static OENTRY localops[] = {
{ "newopc", sizeof(newopc), 0, 7, "s", "kki",(SUBR) newopc_init,
(SUBR) newopc_process_control, (SUBR) newopc_process_audio }
};
Linkage
The OENTRY structure defines the details of the new opcode: its name, the size of its dataspace,
flags, the rates at which it runs (7 means that init-, control- and audio-rate functions are all
supplied), its output and input types, and pointers to its functions.
Since we have defined our output as "s" (either control or audio rate), the actual processing
function called by Csound will depend on the output type. For instance
k1 newopc kin1, kin2, i1
will use newopc_process_control(), whereas
a1 newopc kin1, kin2, i1
will use newopc_process_audio(). This type of code is found for instance in the oscillator opcodes,
which can generate control or audio rate (but in that case, they actually produce a different output
for each type of signal, unlike our example).
Finally, it is necessary to add, at the end of the opcode C code the LINKAGE macro, which defines
some functions needed for the dynamic loading of the opcode.
Building opcodes
The plugin opcode is built as a dynamic module. All we need to do is build the opcode as a dynamic
library, as demonstrated by the examples below.
On OSX:
gcc -O2 -dynamiclib -o myopc.dylib opsrc.c -DUSE_DOUBLE \
    -I/Library/Frameworks/CsoundLib64.framework/Headers
Linux:
gcc -O2 -shared -o myopc.so -fPIC opsrc.c -DUSE_DOUBLE \
    -I<path to Csound headers>
Windows (MinGW+MSYS):
gcc -O2 -shared -o myopc.dll opsrc.c -DUSE_DOUBLE \
    -I<path to Csound headers>
CSD Example
To run Csound with the new opcodes, we can use the --opcode-lib=libname option.
EXAMPLE 13A01_newop.csd
<CsoundSynthesizer>
<CsOptions>
--opcode-lib=newopc.so ; OSX: newopc.dylib; Windows: newopc.dll
</CsOptions>
<CsInstruments>
schedule 1, 0, 100, 440
instr 1
//minimal test of the new opcode: output is p4 plus a ramp that wraps at 3
k1 newopc p4, 0.01, 3
printk2 k1
endin
</CsInstruments>
</CsoundSynthesizer>
;example by Victor Lazzarini
14 A. OPCODE GUIDE
If Csound is called from the command line with the option -z, a list of all opcodes is printed. The
total number of all opcodes is more than 1500. There are already overviews of all of Csound’s op-
codes in the Opcodes Overview and the Opcode Quick Reference of the Canonical Csound Manual.
This guide is another attempt to provide some orientation within Csound’s wealth of opcodes — a
wealth which is often frightening for beginners and still overwhelming for experienced users.
Three selections are given here, each larger than the last:
1. The 33 Most Essential Opcodes. This selection might be useful for beginners. At ten
opcodes a day, it can be learned in about three days, and many full-featured Csound
programs can be written with these 33 opcodes.
2. The Top 100 Opcodes. Adding 67 more opcodes to the first collection pushes the Csound
programmer to the next level. This should be sufficient for doing most of the jobs in Csound.
3. The third overview is rather extended already, and follows mostly the classification in the
Csound Manual. It comprises nearly 500 opcodes.
Although these selections come from some experience in using and teaching Csound, they must
remain subjective, as working in Csound can go in quite different directions.
33 ESSENTIAL OPCODES
Oscillators
poscil(3) — high precision oscillator with linear (cubic) interpolation
vco(2) — analog modelled oscillator
Envelopes
linen(r) — linear fade in/out
Line Generators
linseg(r) — one or more linear segments
transeg(r) — one or more user-definable segments
Line Smooth
sc_lag(ud) — exponential lag (with different smoothing times) (traditional alternatives are port(k)
and tonek)
Audio I/O
inch — read audio from one or more input channels
out — write audio to one or more output channels (starting from first hardware output)
Control
if — if clause
changed(2) — k-rate signal change detector
Instrument Control
schedule(k) — perform instrument event
turnoff(2) — turn off this or another instrument
Time
metro(2) — trigger metronome
Software Channels
chnset/chnget — set/get value in channel
MIDI
massign — assign MIDI channel to Csound instrument
notnum — note number received
veloc — velocity received
Key
sensekey — sense computer keyboard
Panning
pan2 — stereo panning with different options
Reverb
reverbsc — stereo reverb after Sean Costello
Delay
vdelayx — variable delay with highest quality interpolation
Distortion
distort(1) — distortion via waveshaping
Filter
butbp(hp/lp) — second order Butterworth filter
Level
rms — RMS measurement
balance(2) — adjust audio signal level according to comparator
Math / Conversion
ampdb/dbamp — dB to/from amplitude
mtof/ftom — MIDI note number to/from frequency
Print
print(k) — print i/k-values
TOP 100 OPCODES
Oscillators / Phasors
poscil(3) — high precision oscillator with linear (cubic) interpolation
vco(2) — analog modelled oscillator
(g)buzz — buzzer
mpulse — single sample impulses
phasor — standard phasor
Envelopes
linen(r) — linear fade in/out
(m)adsr — traditional ADSR envelope
Line Generators
linseg(r) — one or more linear segments
expseg(r) — one or more exponential segments
cosseg — one or more cosine segments
transeg(r) — one or more user-definable segments
Line Smooth
sc_lag(ud) — exponential lag (with different smoothing times)
Audio I/O
inch — read audio from one or more input channels
out — write audio to one or more output channels (starting from first hardware output)
outch — write audio to arbitrary output channel(s)
monitor — monitor audio output channels
Tables (Buffers)
ftgen — create any table with a GEN subroutine
table(i/3) — read from table (with linear/cubic interpolation)
Arrays
fillarray — fill array with values
lenarray — length of array
getrow/getcol — get a row/column from a two-dimensional array
setrow/setcol — set a row/column of a two-dimensional array
Program Control
if — if clause
while — while loop
changed(2) — k-rate signal change detector
trigger — threshold trigger
Instrument Control
active — number of active instrument instances
maxalloc — set maximum number of instrument instances
schedule(k) — perform instrument event
turnoff(2) — turn off this or another instrument
nstrnum — number of a named instrument
Time
metro(2) — trigger metronome
timeinsts — time of instrument instance in seconds
Software Channels
chnget/chnset — get/set value from/to channel
chnmix/chnclear — mix value to channel / clear channel
MIDI
massign — assign MIDI channel to Csound instrument
notnum — note number received
veloc — velocity received
ctrl7(14/21) — receive controller
OSC
OSClisten — receive messages
OSCraw — listen to all messages
OSCsend — send messages
Key
sensekey — sense computer keyboard
Panning / Spatialization
pan2 — stereo panning with different options
vbap — vector base amplitude panning for multichannel (also 3d)
bformenc1/bformdec1 — B-format encoding/decoding
Reverb
freeverb — stereo reverb after Jezar
reverbsc — stereo reverb after Sean Costello
Spectral Processing
pvsanal — spectral analysis with audio signal input
pvstanal — spectral analysis from sampled sound
pvsynth — spectral resynthesis
pvscale — scale frequency components (pitch shift)
pvsmorph — morphing between two f-signals
pvsftw/pvsftr — write/read amplitude and/or frequency data to/from tables
pvs2array/pvsfromarray — write/read spectral data to/from arrays
Convolution
pconvolve — partitioned convolution
Granular Synthesis
partikkel — complete granular synthesis
Physical Models
pluck — plucked string (Karplus-Strong) algorithm
Delay
vdelayx — variable delay with highest quality interpolation
(v)comb — comb filter
Distortion
distort(1) — distortion via waveshaping
powershape — waveshaping by raising to a variable exponent
Filter
(a)tone — first order IIR low (high) pass filter
reson — second order resonant filter
butbp(hp/lp) — second order Butterworth filter
mode — resonator modelled as a mass-spring system
zdf_ladder — zero delay feedback implementation of 4 pole ladder filter
Level
rms — RMS measurement
balance(2) — adjust audio signal level according to comparator
Math / Conversion
ampdb/dbamp — dB to/from amplitude
mtof/ftom — MIDI note number to/from frequency
cent — cent to scaling factor
log2 — return 2 base log
abs — absolute value
int/frac — integer/fractional part
linlin — signal scaling
Print
print(k) — print i/k-values
printarray — print array
ftprint — print table
File IO
fout — write out real-time audio output (for rendered audio file output see chapter 02E and 06A)
ftsave(k) — save table(s) to text file or binary
fprint(k)s — formatted printing to file
readf(i) — reads an external file line by line
directory — files in a directory as string array
EXTENDED OPCODE OVERVIEW IN CATEGORIES
General Settings and Queries
Note that modern Csound frontends handle most of the Audio I/O settings. For command line
usage, see this section in the Csound Options.
Signal Input and Output
inch — read audio from one or more input channels
out — write audio to one or more output channels (starting from first hardware output)
outch — write audio to arbitrary output channel(s)
monitor — monitor audio output channels
Sound File Playback
diskin — sound file read/playback with different options
mp3in — mp3 read/playback
Time Stretch and Pitch Shift
filescal — phase-locked vocoder processing with time and pitch scale
mincer — phase-locked vocoder processing on table loaded sound
mp3scal — tempo scaling of mp3 files
paulstretch — extreme time stretch
sndwarp(st) — granular-based time and pitch modification
NOTE that any granular synthesis opcode and some of the pvs opcodes (pvstanal, pvsbufred) can
also be used for this approach
Sound File Output
fout — write out real-time audio output (for rendered audio file output see chapter 02E and 06A)
Standard Oscillators
poscil(3) — high precision oscillator with linear (cubic) interpolation
oscili(3) — standard oscillator with linear (cubic) interpolation
lfo — low frequency oscillator of various shapes
oscilikt — interpolating oscillator with k-rate changeable tables
more … — more standard oscillators …
Note: oscil is not recommended, as it uses integer indexing, which can result in low quality.
Random Generators with Interpolating or Hold Numbers
randi(c) — bipolar random generator with linear (cubic) interpolation
randh — bipolar random generator with hold numbers
randomi — random numbers between min/max with interpolation
randomh — random numbers between min/max with hold numbers
more … — more random generators …
Signal Smooth
port(k) — portamento-like smoothing for control signals (with variable half-time)
sc_lag(ud) — exponential lag (with different smoothing times)
(t)lineto — generate glissando from control signal
FILTERS
Compare the extensive Standard Filters and Specialized Filters overviews in the Csound Manual.
Band Pass and Resonant Filters
reson — second order resonant filter
resonx/resony — serial/parallel connection of several reson filters
resonr/resonz — variants of the reson filter
butbp — second order Butterworth filter
REVERB
SPATIALIZATION
Real-time Analysis and Resynthesis
pvsanal — spectral analysis with audio signal input
pvstanal — spectral analysis from sampled sound
pvstrace — retain only N loudest bins
pvsynth — spectral resynthesis
pvsadsyn — spectral resynthesis using fast oscillator bank
Writing Spectral Data to a File and Reading from it
pvsfwrite — write f-sig to file
pvsfread — read f-sig data from a file loaded into memory
pvsdiskin — read f-sig data directly from disk
Writing Spectral Data to a Buffer or Array and Reading from it
pvsbuffer — create and write f-sig to circular buffer
pvsbufread(2) — read f-sig from pvsbuffer
pvsftw — write amplitude and/or frequency data to tables
pvsftr — read amplitude and/or frequency data from table
pvs2array(pvs2tab) — write spectral data to arrays
pvsfromarray(tab2pvs) — read spectral data from arrays
CONVOLUTION
V. DATA
BUFFERS / FUNCTION TABLES
Creating/Deleting Function Tables (Buffers)
ftgen — create any table with a GEN subroutine
GEN Routines — overview of subroutines
ftfree — delete function table
ftgenonce — create table inside an instrument
ftgentmp — create table bound to instrument instance
tableicopy — copy table from other table
copya2ftab — copy array to a function table
Reading From Tables
table(i/3) — read from table (with linear/cubic interpolation)
tablexkt — read function tables with linear/cubic/sinc interpolation
Loading Tables From Files
ftload(k) — load table(s) from file written with ftsave
GEN23 — read numeric values from a text file
GEN01 — load audio file into table
GEN49 — load mp3 sound file into table
ARRAYS
Functions
See chapter 03E for a list of mathematical functions which can be applied directly to arrays.
STRINGS
FILES
PRINTING
SOFTWARE CHANNELS
MATHEMATICAL CALCULATIONS
CONVERTERS
OTHER
SYSTEM
PLUGINS
This overview was compiled by Joachim Heintz in May 2020, based on Csound 6.14.
Thanks to Tarmo Johannes, Victor Lazzarini, Gleb Rogozinsky, Steven Yi, Oeyvind Brandtsegg,
Richard Boulanger, John ffitch, Luis Jure, Rory Walsh, Eduardo Moguillansky and others for their
feedback which made the selections at least a tiny bit less subjective.
14 B. METHODS OF WRITING CSOUND SCORES
Although the use of Csound in real time has become more prevalent and arguably more important,
whilst the role of the score has diminished, composing with score events within the Csound score
remains an important bedrock of working with Csound. There are many methods for writing
Csound scores, several of which are covered here, starting with the classical method of writing
scores by hand, then the definition of a user-defined score language, and concluding with several
external Csound score generating programs.
Writing Score by Hand
The most basic score event as described above might be something like this:
i 1 0 5
which would demand that instrument number 1 play a note at time zero (beats) for 5 beats. After
some time of constructing a score in this manner it quickly becomes apparent that certain patterns
and repetitions recur. Frequently a single instrument will be called repeatedly to play the notes
that form a longer phrase, which diminishes the worth of repeatedly typing the same instrument
number for p1. An instrument may play a long sequence of notes of the same duration, as in a
phrase of running semiquavers, rendering the task of inputting the same value for p3 over and
over again slightly tedious. And often a note will follow on immediately after the previous one, as
in a legato phrase, suggesting that the p2 start time of that note might better be derived by the
computer from the duration and start time of the previous note than figured out by the composer.
Inevitably, shortcuts were added to the score syntax to address these kinds of repetition:
i 1 0 1 60
i . + . >
i . + . >
i . + . >
i . + . 64
where . would indicate that that p-field would reuse the same p-field value from the previous score
event, where +, unique for p2, would indicate that the start time would follow on immediately after
the previous note had ended and > would create a linear ramp from the first explicitly defined value
(60) to the next explicitly defined value (64) in that p-field column (p4).
A more recent refinement of the p2 shortcut allows for staccato notes where the rhythm and
timing remain unaffected. In the following example each note lasts for 1/10 of a beat and each
starts one beat after the previous one.
i 1 0 .1 60
i . ^+1 . >
i . ^+1 . >
i . ^+1 . >
i . ^+1 . 64
The benefits offered by these shortcuts quickly become apparent when working on longer scores.
In particular, the ability to edit critical values once, rather than many times, is soon appreciated.
Taking a step further back, a myriad of score tools, most also identified by a single letter, exist
to manipulate entire sections of score. As previously mentioned, Csound defaults to giving each
beat a duration of 1 second, which corresponds to this t statement at the beginning of a score:
t 0 60
“At time (beat) zero set tempo to 60 beats per minute”; but this could easily be anything else, or
even a string of tempo change events following the format of a linseg statement.
t 0 120 5 120 5 90 10 60
This time the tempo begins at 120 bpm and remains steady until the 5th beat, whereupon there is
an immediate change to 90 bpm; thereafter the tempo declines in linear fashion until the 10th beat,
by which point it has reached 60 bpm.
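The beat-to-time arithmetic implied by such a t statement can be sketched in Python. This is an illustrative model, not Csound's actual implementation; it assumes tempo is interpolated linearly in beats between the given points, with a repeated beat value producing an immediate jump:

```python
import math

def seconds_at(beat, points):
    """points: list of (beat, bpm) pairs, e.g. "t 0 120 5 120 5 90 10 60"
    becomes [(0, 120), (5, 120), (5, 90), (10, 60)]."""
    t = 0.0
    for (b1, T1), (b2, T2) in zip(points, points[1:]):
        if beat <= b1:
            break
        b_end = min(beat, b2)
        if b_end <= b1:           # zero-width segment = immediate tempo jump
            continue
        if T1 == T2:
            t += 60.0 * (b_end - b1) / T1
        else:
            # tempo linear in beats: integrate 60/T(b) db over the segment
            Tb = T1 + (T2 - T1) * (b_end - b1) / (b2 - b1)
            t += 60.0 * (b_end - b1) / (Tb - T1) * math.log(Tb / T1)
    last_b, last_T = points[-1]
    if beat > last_b:             # beyond the last point the final tempo holds
        t += 60.0 * (beat - last_b) / last_T
    return t
```

For the statement above, the first five beats at a constant 120 bpm take 2.5 seconds of real time.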
m statements allow us to define sections of the score that might be repeated (an s statement
marking the end of that section). n statements, referencing the name given to the original m
statement via their first parameter field, will call for a repetition of that section.
m verse
i 1 0 1 60
i . ^+1 . >
i . ^+1 . >
i . ^+1 . >
i . ^+1 . 64
s
n verse
n verse
n verse
Here a verse section is first defined using an m statement (the section is also played at this stage).
s marks the end of the section definition and n recalls this section three more times.
Just a selection of the techniques and shortcuts available for hand-writing scores has been intro-
duced here (refer to the Csound Reference Manual for a more encyclopedic overview). It has
hopefully become clear, however, that with a full knowledge of these techniques the user can
adeptly and efficiently write and manipulate scores by hand.
Extension of the Score Language: bin="…"
The CsScore tag accepts an optional bin attribute, which hands the score over to an external
program before Csound reads it. Two cases can be distinguished:
1. If just a binary is specified, this binary is called and two files are passed to it:
1. A copy of the user written score. This file has the suffix .ext
2. An empty file which will be read after the interpretation by Csound. This file has the
usual score suffix .sco
2. If a binary and a script is specified, the binary calls the script and passes the two files to the
script.
If you have Python installed on your computer, you should be able to run the following examples.
They actually do nothing but print the arguments (= file names).
<CsoundSynthesizer>
<CsInstruments>
instr 1
endin
</CsInstruments>
<CsScore bin="python3">
from sys import argv
print("File to read = '%s'" % argv[0])
print("File to write = '%s'" % argv[1])
</CsScore>
</CsoundSynthesizer>
When you execute this .csd file in the terminal, your output should include something like this:
File to read = '/tmp/csound-idWDwO.ext'
File to write = '/tmp/csound-EdvgYC.sco'
And there should be a complaint because the empty .sco file has not been written:
cannot open scorefile /tmp/csound-EdvgYC.sco
EXAMPLE 14A02_Score_bin_script.csd
<CsoundSynthesizer>
<CsInstruments>
instr 1
endin
</CsInstruments>
<CsScore bin="python3 print.py">
</CsScore>
</CsoundSynthesizer>
CsBeats
As an alternative to the classical Csound score, CsBeats is included with Csound. This is a domain
specific language tailored to the concepts of beats, rhythm and standard western notation. To use
CsBeats, specify "csbeats" as the CsScore bin option in a Csound unified score file.
<CsScore bin="csbeats">
At each rendering there will be a different value for the first three random statements, while the
last two statements will always generate the same values.
EXAMPLE 14A03_Score_perlscript.csd
<CsoundSynthesizer>
<CsInstruments>
;example by tito latini
instr 1
prints "amp = %f, freq = %f\n", p4, p5;
endin
</CsInstruments>
<CsScore bin="perl cs_sco_rand.pl">
</CsScore>
</CsoundSynthesizer>
# cs_sco_rand.pl
my ($in, $out) = @ARGV;
open(EXT, "<", $in);    # score as written by the user
open(SCO, ">", $out);   # score as read by Csound
while (<EXT>) {
    s/SEED\s+(\d+)/srand($1);$&/e;  # "SEED n" seeds the generator
    s/rand\(\d*\)/eval $&/ge;       # replace each rand(...) with its value
    print SCO;
}
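For readers more at home in Python, the same preprocessing idea can be sketched as follows (an illustrative translation, not part of the manual's example): each SEED n statement seeds the random generator, and each rand(n) is replaced by a freshly drawn value between 0 and n.

```python
import random
import re

def preprocess_score(text):
    """Replace rand(n) with random values; "SEED n" makes them reproducible."""
    out = []
    for line in text.splitlines():
        seed = re.search(r"SEED\s+(\d+)", line)
        if seed:
            random.seed(int(seed.group(1)))
        line = re.sub(r"rand\((\d*)\)",
                      lambda m: "%.4f" % (random.random() * float(m.group(1) or 1)),
                      line)
        out.append(line)
    return "\n".join(out)
```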
Pysco
Pysco is a modular Csound score environment for event generation, event processing, and the
fashioning of musical structures in time. Pysco is non-imposing and does not force composers
into any one particular compositional model; composers design their own score frameworks by
importing from existing Python libraries, or fabricate their own functions as needed. It fully sup-
ports the existing classical Csound score, and runs inside a unified CSD file. The sources are on
Github; although the code still uses Python 2, it can certainly serve as an example of the possibili-
ties of using Python as a score scripting language.
Pysco is designed to be a giant leap forward from the classical Csound score by leveraging Python,
a highly extensible general-purpose scripting language. While the classical Csound score does
feature a small handful of score tricks, it lacks common computer programming paradigms, of-
fering little in terms of alleviating the tedious process of writing scores by hand. Python plus the
Pysco interface transforms the limited classical score into a highly flexible and modular text-based
compositional environment.
score('''
f 1 0 8192 10 1
t 0 144
i 1 0.0 1.0 0.7 8.02
i 1 1.0 1.5 0.4 8.05
i 1 2.5 0.5 0.3 8.09
i 1 3.0 1.0 0.4 9.00
''')
</CsScore>
Boilerplate code that is often associated with scripting and scoring, such as file management and
string concatenation, has been conveniently factored out.
The last step in transitioning is to learn a few Python or Pysco features. While Pysco and Python
offer an incredibly vast set of tools and features, one can supercharge one's scores with only a
small handful.
In the classical Csound score model, there is only the concept of beats. This forces composers
to place events into the global timeline, which adds the inconvenience of calculating start times
for individual events. Consider the following code, in which measure 1 starts at time 0.0 and
measure 2 starts at time 4.0.
; Measure 1
i 1 0.0 1.0 0.7 8.02
i 1 1.0 1.5 0.4 8.05
i 1 2.5 0.5 0.3 8.09
i 1 3.0 1.0 0.4 9.00
; Measure 2
i 1 4.0 1.0 0.7 8.07
i 1 5.0 1.5 0.4 8.10
i 1 6.5 0.5 0.3 9.02
i 1 7.0 1.0 0.4 9.07
In an ideal situation, the start times for each measure would be normalized to zero, allowing com-
posers to think locally, relative to the current measure, rather than in terms of the global timeline.
This is the role of
Pysco’s cue() context manager. The same two measures in Pysco are rewritten as follows:
# Measure 1
with cue(0):
score('''
i 1 0.0 1.0 0.7 8.02
i 1 1.0 1.5 0.4 8.05
i 1 2.5 0.5 0.3 8.09
i 1 3.0 1.0 0.4 9.00
''')
# Measure 2
with cue(4):
score('''
i 1 0.0 1.0 0.7 8.07
i 1 1.0 1.5 0.4 8.10
i 1 2.5 0.5 0.3 9.02
i 1 3.0 1.0 0.4 9.07
''')
The start of measure 2 is now 0.0, as opposed to 4.0 in the classical score environment. The
physical layout of these time-based block structures also adds visual cues for the composer, as
indentation and with cue() statements add clarity when scanning a score for a particular event.
Moving events in time, regardless of how many there are, is nearly effortless. In the classical score,
this often involves manually recalculating entire columns of start times. Since cue() supports
nesting, it is possible, and rather easy, to move these two measures anywhere in the score with a
new with cue() statement.
# Movement 2
with cue(330):
    # Measure 1
    with cue(0):
        score('''
        i 1 0.0 1.0 0.7 8.02
        i 1 1.0 1.5 0.4 8.05
        i 1 2.5 0.5 0.3 8.09
        i 1 3.0 1.0 0.4 9.00
        ''')
    # Measure 2
    with cue(4):
        score('''
        i 1 0.0 1.0 0.7 8.07
        i 1 1.0 1.5 0.4 8.10
        i 1 2.5 0.5 0.3 9.02
        i 1 3.0 1.0 0.4 9.07
        ''')
These two measures now start at beat 330 in the piece. With the exception of adding an extra
level of indentation, the score code for these two measures is unchanged.
Generating Events
Pysco includes two functions for generating a Csound score event. The score() function simply
accepts any and all classical Csound score events as a string. The second is event_i(), which
generates a properly formatted Csound score event. Take the following Pysco event for example:
event_i(1, 0, 1.5, 0.707, 8.02)
The event_i() function transforms the input, outputting the following Csound score code:
i 1 0 1.5 0.707 8.02
These event score functions combined with Python’s extensive set of features aid in generating
multiple events. The following example uses three of these features: the for statement, range(),
and random().
from random import random

score('t 0 160')

for time in range(8):
    event_i(1, time, 1, 0.707, 100 + random() * 900)
Python's for statement combined with range() loops through its code block eight times
by iterating through the list of values created with the range() function. The list generated by
range(8) is:
[0, 1, 2, 3, 4, 5, 6, 7]
As the script iterates through the list, the variable time assumes the next value in the list; the time
variable is also the start time of each event. A hint of algorithmic flair is added by importing the
random() function from Python's random library and using it to create a random frequency between
100 and 1000 Hz. The script produces this classical Csound score:
t 0 160
i 1 0 1 0.707 211.936363038
i 1 1 1 0.707 206.021046104
i 1 2 1 0.707 587.07781543
i 1 3 1 0.707 265.13585797
i 1 4 1 0.707 124.548796225
i 1 5 1 0.707 288.184408335
i 1 6 1 0.707 396.36805871
i 1 7 1 0.707 859.030151952
Processing Events
Pysco includes two functions for processing score event data, p_callback() and pmap().
p_callback() is a pre-processor that changes event data before it is inserted into the score object,
while pmap() is a post-processor that transforms event data that already exists in the score.
p_callback(event_type, instr_number, pfield, function, *args)
pmap(event_type, instr_number, pfield, function, *args)
The following example demonstrates a use case for both functions. The p_callback() function
pre-processes all the values in the pfield 5 column for instrument 1, converting conventional note
names (D5, G4, A4, etc.) to hertz. The pmap() call post-processes all pfield 4 values for instrument
1, converting from decibels to standard amplitudes.
p_callback('i', 1, 5, conv_to_hz)
score('''
t 0 120
i 1 0 0.5 -3 D5
i 1 + . . G4
i 1 + . . A4
i 1 + . . B4
i 1 + . . C5
i 1 + . . A4
i 1 + . . B4
i 1 + . . G5
''')
pmap('i', 1, 4, dB)
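The helper functions conv_to_hz and dB are assumed to be defined elsewhere in the score script; sketched in plain Python, the two conversions they stand for might look like this (illustrative stand-ins, not Pysco code):

```python
NOTE_INDEX = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def conv_to_hz(name):
    # plain note names only, e.g. "D5" or "G4" (no sharps/flats here)
    letter, octave = name[0], int(name[1:])
    midi = 12 * (octave + 1) + NOTE_INDEX[letter]
    return 440.0 * 2 ** ((midi - 69) / 12.0)

def dB(value):
    # -6 dB -> roughly half of full amplitude
    return 10.0 ** (value / 20.0)
```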
CMask
CMask is an application that produces score files for Csound, i.e. lists of notes or, more generally,
events. Its main application is the generation of events to create textures or granular sounds. The
program takes a parameter file as input and produces a score file that can be used immediately
with Csound.
The basic concept in CMask is the tendency mask. This is an area limited by two time-variant
boundaries, describing a space of possible values for a score parameter, for example amplitude,
pitch, pan or duration. For every parameter of an event (a pfield of a note statement in Csound) a
random value will be selected from the range that is valid at that time.
There are also other means in CMask for the parameter generation, for example cyclic lists, oscil-
lators, polygons and random walks. Each parameter of an event can be generated by a different
method. A set of notes / events generated by a set of methods lasting for a certain time span is
called a field.
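The tendency-mask principle is easy to model. In a Python sketch (an illustration of the concept, not CMask's implementation), the two boundaries move linearly over the duration of the field, and each value is drawn uniformly between them at its time point:

```python
import random

def tendency(t, total, lo_start, lo_end, hi_start, hi_end):
    """Draw a value between two linearly moving boundaries at time t."""
    frac = t / total
    lo = lo_start + (lo_end - lo_start) * frac
    hi = hi_start + (hi_end - hi_start) * frac
    return lo + random.random() * (hi - lo)
```

For example, with the lower boundary moving from .03 to .5 and the upper from .08 to 1 over a 10-second field (the ranges used in the parameter file below), a value drawn at t = 5 falls somewhere between the interpolated boundaries.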
{
f1 0 8193 10 1 ;sine wave
}
p1 const 1
p2 ;decreasing density
rnd uni ;from .03 - .08 sec to .5 - 1 sec
mask [.03 .5 ipl 3] [.08 1 ipl 3] map 1
prec 2
p3 ;increasing duration
CMask can be downloaded for MacOS 9, Windows and Linux (by André Bartetzki) and has been
ported to OS X (by Anthony Kozar).
nGen
nGen is a free multi-platform generation tool for creating Csound event-lists (score files) and stan-
dard MIDI files. It is written in C and runs on a variety of platforms (version 2.1.2 is currently avail-
able for OSX 10.5 and above, Linux Intel and Windows 7+). All versions run in the UNIX command-
line style (at a command-line shell prompt). nGen was designed and written by composer Mikel
Kuehn and was inspired in part by the basic syntax of Aleck Brinkman’s Score11 note list prepro-
cessor (Score11 is available for Linux Intel from the Eastman Computer Music Center) and Leland
Smith’s Score program.
nGen will allow you to do several things with ease that are either difficult or not possible using
Csound and/or MIDI sequencing programs; nGen is a powerful front-end for creating Csound
score-files and basic standard MIDI files. Some of the basic strengths of nGen are:
- Event-based granular textures can be generated quickly. Huge streams of values can be
generated with specific random-number distributions (e.g., Gaussian, flat, beta, exponential,
etc.).
- Note-names and rhythms can be entered in intuitive formats (e.g., pitches: C4, Df3; rhythms:
4, 8, 16, 32).
- "Chords" can be specified as a single unit (e.g., C4:Df:E:Fs). Textual and numeric macros are
available.
Additionally, nGen supplies a host of conversion routines that allow p-field data to be converted
to different formats in the resulting Csound score file (e.g., octave.pitch-class can be formatted to
Hz values, etc.). A variety of formatting routines are also supplied (such as the ability to output
floating-point numbers with a certain precision width).
nGen is a portable text-based application. It runs on most platforms (Windows, Mac, Linux, Irix,
UNIX, etc.) and allows for macro- and micro-level generation of event-list data by providing many
dynamic functions for dealing with statistical generation (such as interpolation between values
over the course of many events, varieties of pseudo-random data generation, p-field extraction and
filtering, 1/f data, the use of “sets” of values, etc.) as well as special modes of input (such as note-
name/octave-number, reciprocal duration code, etc.). Its memory allocation is dynamic, making it
useful for macro-level control over huge score-files. In addition, nGen contains a flexible text-based
macro pre-processor (identical to that found in recent versions of Csound), numeric macros and
expressions, and also allows for many varieties of data conversion and special output formatting.
nGen is command-line based and accepts an ASCII formatted text-file which is expanded into a
Csound score-file or a standard MIDI file. It is easy to use and is extremely flexible making it
suitable for use by those not experienced with high-level computer programming languages.
i1 = 7 0 10 {
p2 .01 ;intervalic start time
/* The duration of each event slowly changes over time, starting at 20x the
initial start-time interval and ending at 1x the ending start-time interval.
The "T" variable is used to control the duration of both move statements
(50% of the entire i-block duration). */
p3 mo(T*.5 1. 20 1) mo(T*.5 1. 1 10)
AthenaCL
The athenaCL system is a software tool for creating musical structures. Music is rendered as
a polyphonic event list, or an EventSequence object. This EventSequence can be converted into
diverse forms, or OutputFormats, including scores for the Csound synthesis language, Musical
Instrument Digital Interface (MIDI) files, and other specialized formats. Within athenaCL, Orchestra
and Instrument models provide control of and integration with diverse OutputFormats. Orchestra
models may include complete specification, at the code level, of external sound sources that are
created in the process of OutputFormat generation.
The athenaCL system features specialized objects for creating and manipulating pitch structures,
including the Pitch, the Multiset (a collection of Pitches), and the Path (a collection of Multisets).
Paths define reusable pitch groups. When used as a compositional resource, a Path is interpreted
by a Texture object (described below).
The athenaCL system features three levels of algorithmic design. The first two levels are pro-
vided by the ParameterObject and the Texture. The ParameterObject is a model of a low-level one-
dimensional parameter generator and transformer. The Texture is a model of a multi-dimensional
generative musical part. A Texture is controlled and configured by numerous embedded Parame-
terObjects. Each ParameterObject is assigned to either event parameters, such as amplitude and
rhythm, or Texture configuration parameters. The Texture interprets ParameterObject values to
create EventSequences. The number of ParameterObjects in a Texture, as well as their function
and interaction, is determined by the Texture’s parent type (TextureModule) and Instrument model.
Each Texture is an instance of a TextureModule. TextureModules encode diverse approaches to
multi-dimensional algorithmic generation. The TextureModule manages the deployment and in-
teraction of lower level ParameterObjects, as well as linear or non-linear event generation. Spe-
cialized TextureModules may be designed to create a wide variety of musical structures.
The third layer of algorithmic design is provided by the Clone, a model of the multi-dimensional
transformative part. The Clone transforms EventSequences generated by a Texture. Similar to
Textures, Clones are controlled and configured by numerous embedded ParameterObjects.
Each Texture and Clone creates a collection of Events. Each Event is a rich data representation
that includes detailed timing, pitch, rhythm, and parameter data. Events are stored in EventSe-
quence objects. The collection of all Texture and Clone EventSequences is the complete output of
athenaCL. These EventSequences are transformed into various OutputFormats for compositional
deployment.
Common Music
Common Music is a music composition system that transforms high-level algorithmic represen-
tations of musical processes and structure into a variety of control protocols for sound synthesis
and display. It generates musical output via MIDI, OSC, CLM, FOMUS and CSOUND. Its main user
14 C. AMPLITUDE AND PITCH TRACKING
Tracking the amplitude of an audio signal is a relatively simple procedure, but simply following
the amplitude values of the waveform is unlikely to be useful. An audio waveform will be bipolar,
expressing both positive and negative values, so to start with, some sort of rectification of the nega-
tive part of the signal will be required. The most common method of achieving this is to square the
signal (raise it to the power of 2) and then take the square root. Squaring any negative values will
produce positive results (-2 squared equals 4); taking the square root will restore the absolute values.
An audio signal is an oscillating signal, periodically passing through amplitude zero, but these zero
crossings do not necessarily imply that the signal has decayed to silence as our brain perceives
it. Some sort of averaging will be required so that a tracked amplitude of close to zero will only
be output when the signal has settled close to zero for some time. Sampling a set of values and
outputting their mean will produce a more acceptable sequence of values over time for a signal’s
change in amplitude. Sample group size will be important: too small a sample group may result
in some residual ripple in the output signal, particularly in signals with only low frequency content,
whereas too large a group may result in a sluggish response to sudden changes in amplitude.
Some judgement and compromise is required.
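Outside Csound, the same procedure can be sketched in a few lines of Python (an illustration only, not part of the Csound example): square each sample, take the square root, and average a ring buffer of recent values.

```python
import math

def amplitude_follower(signal, buffer_len=31):
    """Square, square-root, then average a ring buffer of recent values."""
    buf = [0.0] * buffer_len
    ndx = 0
    out = []
    for x in signal:
        rectified = math.sqrt(x * x)         # |-0.5| becomes 0.5
        buf[ndx] = rectified                 # write into the ring buffer
        ndx = (ndx + 1) % buffer_len         # wrap the index, like the wrap opcode
        out.append(sum(buf) / buffer_len)    # mean of the buffer
    return out
```

A constant-amplitude alternating signal such as [0.5, -0.5, 0.5, …] settles on a tracked value of 0.5 once the buffer has filled.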
The procedure described above is implemented in the following example. A simple audio note is
created that ramps up and down according to a linseg envelope. In order to track its amplitude,
audio values are converted to k-rate values and are then squared, then square rooted and then
written into sequential locations of an array 31 values long. The mean is calculated by summing all values in the array and dividing by the length of the array. This procedure is repeated every k-cycle. The length of the array will be critical in fine tuning the response for the reasons described in the preceding paragraph. Control rate (kr) is also a factor and is therefore taken into consideration when calculating the size of the array. Changing the control rate (kr) or the number of audio samples in a
control period (ksmps) will then no longer alter response behaviour.
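A quick arithmetic sketch (illustrative Python, not Csound) shows why deriving the array length from ksmps keeps the averaging window constant: if the window is fixed at 500 audio samples, the number of k-rate values it spans must shrink as ksmps grows.

```python
sr = 44100
window_samples = 500                 # desired averaging window in audio samples

def array_len(ksmps, window=window_samples):
    """Number of k-rate values needed to span the window (as in kArr[] init 500/ksmps)."""
    return window // ksmps

# whatever the ksmps, the array spans (close to) the same stretch of time
for ksmps in (16, 32, 64):
    covered = array_len(ksmps) * ksmps / sr   # seconds covered by the array
```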
EXAMPLE 14C01_Amplitude_Tracking_First_Principles.csd
<CsoundSynthesizer>
<CsOptions>
-dm128 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1
; a rich waveform
giwave ftgen 1,0, 512, 10, 1,1/2,1/3,1/4,1/5
instr 1
; create an audio signal
aenv linseg 0,p3/2,1,p3/2,0 ; triangle shaped envelope
aSig poscil aenv,300,giwave ; audio oscillator
out aSig, aSig ; send audio to output
; track amplitude
kArr[] init 500 / ksmps ; initialise an array
kNdx init 0 ; initialise index for writing to array
kSig downsamp aSig ; create k-rate version of audio signal
kSq = kSig ^ 2 ; square it (negatives become positive)
kRoot = kSq ^ 0.5 ; square root it (restore absolute values)
kArr[kNdx] = kRoot ; write result to array
kMean = sumarray(kArr) / lenarray(kArr) ; calculate mean of array
printk 0.1,kMean ; print mean to console
; increment index and wrap-around if end of the array is met
kNdx wrap kNdx+1, 0, lenarray(kArr)
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
In practice it is not necessary for us to build our own amplitude tracker as Csound already offers
several opcodes for the task. rms outputs a k-rate amplitude tracking signal by employing math-
ematics similar to those described above. follow outputs at a-rate and uses a sample and hold
method as it outputs data, probably necessitating some sort of low-pass filtering of the output sig-
nal. follow2 also outputs at a-rate but smooths the output signal by different amounts depending
on whether the amplitude is rising or falling.
A quick comparison of these three opcodes and the original method from first principles is given
below:
The sound file used in all three comparisons is fox.wav which can be found as part of the Csound
HTML Manual download. This sound is someone saying: “the quick brown fox jumps over the lazy
dog”.
First of all by employing the technique exemplified in example 14C01, the amplitude following sig-
nal is overlaid upon the source signal:
It can be observed that the amplitude tracking signal follows the amplitudes of the input signal
reasonably well. A slight delay in response at sound onsets can be observed as the array of val-
ues used by the averaging mechanism fills with appropriately high values. As discussed earlier,
reducing the size of the array will improve response at the risk of introducing ripple. Another ap-
proach to dealing with the issue of ripple is to low-pass filter the signal output by the amplitude
follower. This is an approach employed by the follow2 opcode. The second thing that is apparent
is that the amplitude following signal does not attain the peak value of the input signal. At its
peaks, the amplitude following signal is roughly 1/3 of the absolute peak value of the input signal.
How close it gets to the absolute peak amplitude depends somewhat on the dynamic nature of
the input signal. If an input signal sustains a peak amplitude for some time then the amplitude
following signal will tend to this peak value.
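This behaviour is inherent to averaging, as a short calculation shows (an illustrative Python computation, not taken from the example): for a sustained sine the mean rectified value settles at 2/π ≈ 0.64 of peak and the RMS at 1/√2 ≈ 0.71, and a dynamic signal such as speech settles lower still.

```python
import math

N = 100000                                      # samples over one full sine cycle
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]

mean_rect = sum(abs(x) for x in sine) / N       # mean of the rectified signal
rms = math.sqrt(sum(x * x for x in sine) / N)   # root mean square

# both sit well below the absolute peak of 1.0
```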
The rms opcode employs a method similar to that used in the previous example but with the conve-
nience of an encapsulated opcode. Its output superimposed upon the waveform is shown below:
Its method of averaging uses filtering rather than simply taking a mean of a buffer of amplitude
values. rms allows us to set the cutoff frequency (kCf) of its internal filter:
kRms rms aSig, kCf
This is an optional argument which defaults to 10. Lowering this value will dampen changes in
rms and smooth out ripple, raising it will improve the response but increase the audibility of ripple.
A choice can be made based on some foreknowledge of the input audio signal: dynamic percus-
sive input audio might demand faster response whereas audio that dynamically evolves gradually
might demand greater smoothing.
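The filter-based averaging used by rms can be imitated with a one-pole lowpass applied to the squared signal (a sketch for illustration; the opcode's exact internals may differ). Lowering the cutoff smooths more but responds more slowly:

```python
import math

def rms_follow(signal, sr=44100, cutoff=10.0):
    """Square the input, smooth with a one-pole lowpass, then take the root."""
    coef = math.exp(-2.0 * math.pi * cutoff / sr)    # smoothing coefficient from cutoff
    state = 0.0
    out = []
    for x in signal:
        state = (1.0 - coef) * x * x + coef * state  # lowpass the squared signal
        out.append(math.sqrt(state))
    return out
```

Fed a steady 0.5, the tracker converges on 0.5; a 2 Hz cutoff gets there far more slowly than a 50 Hz one.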
The follow opcode uses a sample-and-hold mechanism when outputting the tracked amplitude.
This can result in a stepped output that might require additional lowpass filtering before use. We define the period, the duration for which values are held, using its second input argument.
The update rate will be one over the period. In the following example the audio is amplitude tracked
using the following line:
aRms follow aSig, 0.01
Dynamic Gating and Amplitude Triggering
The hump over the word spoken during the third and fourth time divisions initially seems erroneous, but it is a result of greater amplitude excursion into the negative domain. follow provides a better
reflection of absolute peak amplitude.
follow2 uses a different algorithm with smoothing on both upward and downward slopes of the
tracked amplitude. We can define different values for attack and decay time. In the following
example the decay time is much longer than the attack time. The relevant line of code is:
iAtt = 0.04
iRel = 0.5
aTrk follow2 aSig, 0.04, 0.5
This technique can be used to extend the duration of short input sound events or triggers. Note
that the attack and release times for follow2 can also be modulated at k-rate.
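The asymmetric smoothing of follow2 can be sketched as an envelope follower with separate attack and release coefficients (a hypothetical reimplementation for illustration; follow2's internal algorithm may differ):

```python
import math

def follow2_sketch(signal, sr=44100, att=0.04, rel=0.5):
    """Envelope follower with separate attack and release smoothing."""
    ca = math.exp(-1.0 / (att * sr))     # fast coefficient for rising input
    cr = math.exp(-1.0 / (rel * sr))     # slow coefficient for falling input
    env = 0.0
    out = []
    for x in signal:
        target = abs(x)
        c = ca if target > env else cr   # choose coefficient by slope direction
        env = (1.0 - c) * target + c * env
        out.append(env)
    return out
```

With a short burst followed by silence, the envelope reaches nearly full level quickly but takes on the order of the release time to die away — which is how short events get extended.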
EXAMPLE 14C02_Simple_Dynamic_Gate.csd
<CsoundSynthesizer>
<CsOptions>
-dm128 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; this is a necessary definition,
; otherwise amplitude will be -32768 to 32767
instr 1
aSig diskin "fox.wav", 1 ; read sound file
kRms rms aSig ; scan rms
iThreshold = 0.1 ; rms threshold
kGate = kRms > iThreshold ? 1 : 0 ; gate either 1 or zero
aGate interp kGate ; interpolate to create smoother on->off->on switching
aSig = aSig * aGate ; multiply signal by gate
out aSig, aSig ; send to output
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Once a dynamic threshold has been defined, in this case 0.1, the RMS value is interrogated every
k-cycle as to whether it is above or below this value. If it is above, then the variable kGate adopts
a value of 1 (open); if below, kGate is zero (closed). This on/off switch could simply be multiplied with the audio signal to turn it on or off according to the status of the gate, but clicks would manifest each time the gate opens or closes, so some sort of smoothing or ramping of the gate signal is required. In this example I have simply interpolated it using the interp opcode to create an a-rate signal which is then multiplied with the original audio signal. This means that a linear ramp will be applied across the duration of a k-cycle in audio samples – in this case 32 samples. A more
elaborate approach might involve portamento and low-pass filtering.
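The gating logic just described can be sketched like this (illustrative Python; a linear ramp stands in for what interp does across one k-cycle):

```python
def gate_with_ramp(rms_values, threshold=0.1, ramp_len=32):
    """Open/close a gate on an RMS stream, ramping linearly over ramp_len samples."""
    gain = 0.0
    out = []
    for rms in rms_values:
        target = 1.0 if rms > threshold else 0.0   # raw on/off decision
        step = (target - gain) / ramp_len          # spread the change over the ramp
        for _ in range(ramp_len):
            gain += step
            out.append(gain)                       # per-sample gain to multiply the audio
    return out
```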
The threshold is depicted as a red line. It can be seen that each time the RMS value (the black line)
drops below the threshold the audio signal (blue waveform) is muted.
The simple solution described above can prove adequate in applications where the user wishes
to sense sound event onsets and convert them to triggers but in more complex situations, in par-
ticular when a new sound event occurs whilst the previous event is still sounding and pushing the
RMS above the threshold, this mechanism will fail. In these cases triggering needs to depend upon
dynamic change rather than absolute RMS values. If we consider a two-event sound file where two
notes sound on a piano, the second note sounding while the first is still decaying, triggers gener-
ated using the RMS threshold mechanism from the previous example will only sense the first note
onset. (In the diagram below this sole trigger is illustrated by the vertical black line.) Raising the
threshold might seem to be a remedial action, but it is not ideal as this will prevent quietly played notes
from generating triggers.
It will often be more successful to use magnitudes of amplitude increase to decide whether to
generate a trigger or not. The two critical values in implementing such a mechanism are the time
across which a change will be judged (iSampTim in the example) and the amount of amplitude
increase that will be required to generate a trigger (iThresh). An additional mechanism to prevent
double triggerings if an amplitude continues to increase beyond the time span of a single sample
period will also be necessary. What this mechanism will do is to bypass the amplitude change
interrogation code for a user-definable time period immediately after a trigger has been generated
(iWait). A timer which counts elapsed audio samples (kTimer) is used to time how long to wait
before retesting amplitude changes.
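The whole mechanism — compare the current RMS with a slightly delayed copy, trigger on a sufficient rise, then suspend testing for a wait period — can be sketched as follows (illustrative Python operating on a stream of RMS values; the names echo the example's iThresh, iSampTim and iWait):

```python
def detect_onsets(rms, thresh=0.1, samp_tim=3, wait=4):
    """Trigger on an rms rise greater than `thresh` over `samp_tim` steps,
    then ignore further rises for `wait` steps (the refractory period)."""
    onsets = []
    timer = wait + 1                     # start ready to trigger immediately
    for i in range(len(rms)):
        prev = rms[i - samp_tim] if i >= samp_tim else 0.0
        if timer > wait and rms[i] - prev > thresh:
            onsets.append(i)             # onset detected here
            timer = 0                    # enter refractory period
        else:
            timer += 1
    return onsets

# two swells; the second begins while the first is still decaying
curve = [0.0, 0.2, 0.5, 0.4, 0.3, 0.25, 0.5, 0.7, 0.5, 0.3, 0.2, 0.1]
```

Unlike a fixed absolute threshold, this catches the second onset even though the level never fell back below the first event's peak.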
If we pass our piano sound file through this instrument, the results look like this:
This time we correctly receive two triggers, one at the onset of each note.
The example below tracks audio from the sound-card input channel 1 using this mechanism.
EXAMPLE 14C03_Dynamic_Trigger.csd
<CsoundSynthesizer>
<CsOptions>
-dm0 -iadc -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
iThresh = 0.1 ; change threshold
iWait = 0.1*sr ; prevent retriggering for this number of samples
iSampTim = 0.02 ; time across which RMS change is measured
kTimer init iWait+1 ; timer (in samples) since last trigger
aSig inch 1 ; read audio from input channel 1
kRms rms aSig, 20 ; track rms
kRmsPrev delayk kRms, iSampTim ; rms value from iSampTim seconds earlier
kChange = kRms - kRmsPrev ; rms change (+ve or -ve)
if kTimer > iWait then ; only test if wait period has elapsed
 kTrigger = kChange > iThresh ? 1 : 0 ; trigger if rise exceeds threshold
 kTimer = kTrigger == 1 ? 0 : kTimer ; reset timer upon trigger
else
 kTimer += ksmps ; count elapsed samples
 kTrigger = 0
endif
schedkwhen kTrigger, 0, 0, 2, 0, 0.1 ; trigger the 'ping' instrument
endin

instr 2
aEnv transeg 0.2, p3, -4, 0 ; decay envelope
aSig poscil aEnv, 400 ; 'ping' sound indicator
out aSig ; send audio to output
endin
</CsInstruments>
<CsScore>
i 1 0 [3600*24*7]
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
Pitch Tracking
Csound currently provides five opcode options for pitch tracking. In ascending order of newness
they are: pitch, pitchamdf, pvspitch, ptrack and plltrack. Related to these opcodes are pvscent
and centroid but rather than track the harmonic fundamental, they track the spectral centroid of a
signal. An example and suggested application for centroid is given a little later on in this chapter.
Each offers a slightly different set of features – some offer simultaneous tracking of both ampli-
tude and pitch, some only pitch tracking. None of these opcodes provide more than one output
for tracked frequency therefore none offer polyphonic tracking although in a polyphonic tone the
fundamental of the strongest tone will most likely be tracked. Pitch tracking presents many more challenges than amplitude tracking, so a degree of error can be expected and will be an issue that demands addressing. To get the best from any pitch tracker it is important to consider
preparation of the input signal – either through gating or filtering – and also processing of the
output tracking data, for example smoothing changes through the use of a filtering opcode such as
port, median filtering to remove erratic and erroneous data and a filter to simply ignore obviously
incorrect data. Parameters for these procedures will rely upon some prior knowledge of the input
signal, the pitch range of an instrument for instance. A particularly noisy environment or a distant
microphone placement might demand more aggressive noise gating. In general some low-pass
filtering of the input signal will always help in providing a more stable frequency tracking signal.
Something worth considering is that the attack portion of a note played on an acoustic instrument
generally contains a lot of noisy, harmonically chaotic material. This will tend to result in slightly
chaotic movement in the pitch tracking signal; we may therefore wish to sense the onset of a note
and only begin tracking pitch once the sustain portion has begun. This may be around 0.05 sec-
onds after the note has begun but will vary from instrument to instrument and from note to note.
In general lower notes will have a longer attack. However we do not really want to overestimate
the duration of this attack stage as this will result in a sluggish pitch tracker. Another specialised
situation is the tracking of pitch in singing – we may want to gate sibilant elements (sss, t etc.).
pvscent can be useful in detecting the difference between vowels and sibilants.
pitch is the oldest of the pitch tracking opcodes on offer and provides the widest range of input
parameters.
koct, kamp pitch asig, iupdte, ilo, ihi, idbthresh [, ifrqs] [, iconf]
[, istrt] [, iocts] [, iq] [, inptls] [, irolloff] [, iskip]
This makes it somewhat more awkward to use initially (although many of its input parameters are
optional) but some of its options facilitate quite specialised effects. Firstly it outputs its tracking
signal in oct format. This might prove to be a useful format but conversion to other formats is easy
anyway. Apart from a number of parameters intended to fine tune the production of an accurate
signal it allows us to specify the number of octave divisions used in quantising the output. For
example if we give this a value of 12 we have created the basis of a simple chromatic autotune
device. We can also quantise the procedure in the time domain using its update period input. Ma-
terial with quickly changing pitch or vibrato will require a shorter update period (which will demand
more from the CPU). It has an input control for threshold of detection which can be used to filter
out and disregard pitch and amplitude tracking data beneath this limit. pitch is capable of very
good pitch and amplitude tracking results in real-time.
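The quantisation option just mentioned is easy to sketch outside Csound (an illustrative Python function; Csound's oct format references middle C, here taken as 261.626 Hz): convert the frequency to octaves above the reference, round to the nearest of n divisions per octave, and convert back.

```python
import math

REF_HZ = 261.626   # middle C, written 8.00 in Csound's oct notation (assumed reference)

def quantise_freq(freq_hz, divisions=12):
    """Snap a frequency to the nearest of `divisions` equal steps per octave."""
    octs = math.log2(freq_hz / REF_HZ)              # octaves above the reference
    snapped = round(octs * divisions) / divisions   # round onto the grid
    return REF_HZ * 2.0 ** snapped
```

With 12 divisions a slightly sharp 445 Hz snaps to equal-tempered A at 440 Hz — the basis of the chromatic autotune mentioned above.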
pitchamdf uses the so-called Average Magnitude Difference Function method. It is perhaps slightly
more accurate than pitch as a general purpose pitch tracker but its CPU demand is higher.
pvspitch uses streaming FFT technology to track pitch. It takes an f-signal as input which will have
to be created using the pvsanal opcode. At this step the choice of FFT size will have a bearing upon
the performance of the pvspitch pitch tracker. Smaller FFT sizes will allow for faster tracking but
with perhaps some inaccuracies, particularly with lower pitches whereas larger FFT sizes are likely
to provide for more accurate pitch tracking at the expense of some time resolution. pvspitch tries
to mimic certain functions of the human ear in how it tries to discern pitch. pvspitch works well in
real-time but it does have a tendency to jump its output to the wrong octave – an octave too high
– particularly when encountering vibrato.
ptrack also makes use of streaming FFT but takes a normal audio signal as input, performing the
FFT analysis internally. We still have to provide a value for FFT size with the same considerations
mentioned above. ptrack is based on an algorithm by Miller Puckette, the co-creator of MaxMSP
and creator of PD. ptrack also works well in real-time but it does have a tendency to jump to erro-
neous pitch tracking values when pitch is changing quickly or when encountering vibrato. Median
filtering (using the mediank opcode) and filtering of outlying values might improve the results.
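The suggested median filtering is simple to sketch (a generic sliding-window median in Python, standing in for mediank): because the median of a window ignores outliers, an isolated octave jump in an otherwise steady track disappears.

```python
def median_filter(track, width=5):
    """Sliding-window median; the window shrinks at the edges."""
    half = width // 2
    out = []
    for i in range(len(track)):
        window = sorted(track[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])   # middle of the sorted window
    return out

# a steady 220 Hz track with one spurious octave jump to 440 Hz
track = [220.0] * 4 + [440.0] + [220.0] * 4
```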
plltrack uses a phase-locked loop algorithm in detecting pitch. plltrack is another efficient real-
time option for pitch tracking. It has a tendency to gliss up and down from very low frequency
values at the start and end of notes, i.e. when encountering silence. This effect can be minimised
by increasing its feedback parameter but this can also make pitch tracking unstable over sustained
notes.
In conclusion, pitch is probably still the best choice as a general purpose pitch tracker, pitchamdf
is also a good choice. pvspitch, ptrack and plltrack all work well in real-time but might demand
additional processing to remove errors.
pvscent and centroid are a little different to the other pitch trackers in that, rather than try to discern
the fundamental of a harmonic tone, they assess what the centre of gravity of a spectrum is. An
application for this is in the identification of different instruments playing the same note. Softer,
darker instruments, such as the french horn, will be characterised by a lower centroid than that of
more shrill instruments, such as the violin.
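What these opcodes compute can be written out directly: the spectral centroid is the amplitude-weighted mean of the frequencies present in a spectrum. A sketch over a precomputed magnitude spectrum (illustrative Python, not the opcodes' actual FFT machinery):

```python
def spectral_centroid(freqs, mags):
    """Amplitude-weighted mean frequency — the spectrum's centre of gravity."""
    total = sum(mags)
    if total == 0.0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, mags)) / total

# same partials, different weighting:
dark = spectral_centroid([100, 200, 300], [1.0, 0.2, 0.05])   # energy at the bottom
bright = spectral_centroid([100, 200, 300], [0.3, 0.8, 1.0])  # energy at the top
```

A tone with its energy concentrated in low partials yields a low centroid; shifting the weight to upper partials raises it, which is exactly the property exploited below for instrument identification.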
Both opcodes use FFT. Centroid works directly with an audio signal input whereas pvscent requires
an f-sig input. Centroid also features a trigger input which allows us to manually trigger it to update
its output. In the following example we use centroid to detect individual drums sounds – bass
drum, snare drum, cymbal – within a drum loop. We will use the dynamic amplitude trigger from
earlier on in this chapter to detect when sound onsets are occurring and use this trigger to activate
centroid and also then to trigger another instrument with a replacement sound. Each percussion
instrument in the original drum loop will be replaced with a different sound: bass drums will be replaced with a cowbell sound, snare drums will be replaced by hand claps (à la TR-808), and cymbal sounds will be replaced with tambourine sounds. The drum loop used is
beats.wav which can be found with the download of the Csound HTML manual (and within the
Csound download itself). This loop is not ideal as some of the instruments coincide with one another – for example, the first beat consists of a bass drum and a snare drum played together. The
beat replacer will inevitably make a decision one way or the other but is not advanced enough to
detect both instruments playing simultaneously. The critical stage is the series of if … elseifs … at
the bottom of instrument 1 where decisions are made about instruments’ identities according to
what centroid band they fall into. The user can fine tune the boundary division values to modify
the decision making process. centroid values are also printed to the terminal when onsets are
detected which might assist in this fine tuning.
EXAMPLE 14C04_Drum_Replacement.csd
<CsoundSynthesizer>
<CsOptions>
-dm0 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
asig diskin "beats.wav",1
iThreshold = 0.05
iWait = 0.1*sr
kTimer init iWait+1
iSampTim = 0.02 ; time across which RMS change is measured
kRms rms asig, 20
kRmsPrev delayk kRms,iSampTim ; rms from earlier
kChange = kRms - kRmsPrev ; change (+ve or -ve)
if kTimer > iWait then ; only test if wait period has elapsed
 kTrigger = kChange > iThreshold ? 1 : 0 ; trigger if rise exceeds threshold
 kTimer = kTrigger == 1 ? 0 : kTimer ; reset timer upon trigger
else
 kTimer += ksmps ; count elapsed samples
 kTrigger = 0
endif
ifftsize = 1024
; centroid triggered 0.02 after sound onset to avoid noisy attack
kDelTrig delayk kTrigger,0.02
kcent centroid asig, kDelTrig, ifftsize ; scan centroid
printk2 kcent ; print centroid values
if kDelTrig==1 then
if kcent>0 && kcent<2500 then ; first freq. band
event "i","Cowbell",0,0.1
elseif kcent<8000 then ; second freq. band
event "i","Clap",0,0.1
else ; third freq. band
event "i","Tambourine",0,0.5
endif
endif
endin
instr Cowbell
kenv1 transeg 1,p3*0.3,-30,0.2, p3*0.7,-30,0.2
kenv2 expon 1,p3,0.0005
kenv = kenv1*kenv2
ipw = 0.5
a1 vco2 0.65,562,2,0.5
a2 vco2 0.65,845,2,0.5
amix = a1+a2
iLPF2 = 10000
kcf expseg 12000,0.07,iLPF2,1,iLPF2
alpf butlp amix,kcf
abpf reson amix, 845, 25
amix dcblock2 (abpf*0.06*kenv1)+(alpf*0.5)+(amix*0.9)
amix buthp amix,700
amix = amix*0.5*kenv
out amix
endin
instr Clap
if frac(p1)==0 then
event_i "i", p1+0.1, 0, 0.02
event_i "i", p1+0.1, 0.01, 0.02
event_i "i", p1+0.1, 0.02, 0.02
event_i "i", p1+0.1, 0.03, 2
else
kenv transeg 1,p3,-25,0
iamp random 0.7,1
anoise dust2 kenv*iamp, 8000
iBPF = 1100
ibw = 2000
iHPF = 1000
iLPF = 1
kcf expseg 8000,0.07,1700,1,800,2,500,1,500
asig butlp anoise,kcf*iLPF
asig buthp asig,iHPF
ares reson asig,iBPF,ibw,1
asig dcblock2 (asig*0.5)+ares
out asig
endif
endin
instr Tambourine
asig tambourine 0.3, 0.01, 32, 0.47, 0, 2300, 5600, 8000
out asig ;SEND AUDIO TO OUTPUTS
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
14 D. PYTHON IN CSOUNDQT
If CsoundQt is built with PythonQt support,1 it enables a lot of new possibilities, mostly in three
main fields: interaction with the CsoundQt interface, interaction with widgets and using classes
from Qt libraries to build custom interfaces in python.
If you start CsoundQt and can open the panels Python Console and Python Scratch Pad, you are
ready to go.
It enables the control of a large part of CsoundQt’s possibilities from the python interpreter, the
python scratchpad, from scripts or from inside of a running Csound file via Csound’s python op-
codes.2
By default, a PyQcsObject called q is already available in the Python interpreter of CsoundQt. To use any of its methods, we can use a form like
1 If not, have a look at the releases page. Python 2.7 must be installed, too. For building CsoundQt with Python support, have a look at the descriptions in CsoundQt’s Wiki.
2 See chapter 12 B for more information on the python opcodes and ctcsound.
File and Control Access
q.stopAll()
With this object we can:

- access CsoundQt’s interface (open or close files, start or stop performance etc.)
- edit Csound files which have already been opened as tabs in CsoundQt
- manage CsoundQt’s widgets
- interface with the running Csound engine
If you close this file and then execute the line q.loadDocument('cs_floss_1.csd'), you
should see the file again as tab in CsoundQt.
Let us have a look how these two methods newDocument and loadDocument are described in the
sources:
int newDocument(QString name)
int loadDocument(QString name, bool runNow = false)
The method newDocument needs a name as string (“QString”) as argument, and returns an
integer. The method loadDocument also takes a name as input string and returns an inte-
ger as index for this csd. The additional argument runNow is optional. It expects a boolean
3 To evaluate multiple lines of Python code in the Scratch Pad, choose either Edit->Evaluate Section (Alt+E), or select and choose Edit->Evaluate Selection (Alt+Shift+E).
value (True/False or 1/0). The default is false which means “do not run immediately af-
ter loading”. So if you type instead q.loadDocument('cs_floss_1.csd', True) or
q.loadDocument('cs_floss_1.csd', 1), the csd file should start immediately.
EXAMPLE 14B01_run_pause_stop.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine wave table used by poscil below
instr 1
kPitch expseg 500, p3, 1000
aSine poscil .2, kPitch, giSine
out aSine
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
This instrument performs a simple pitch glissando from 500 to 1000 Hz in ten seconds. Now make
sure that this csd is the currently active tab in CsoundQt, and execute this:
q.play()
This starts the performance. If you do nothing, the performance will stop after ten seconds. If you
type instead after some seconds
q.pause()
the performance will pause. Executing q.pause() again will resume the performance. Note that this is different from executing q.play() after q.pause(); that will start a new performance. With
q.stop()
First, create a new file cs_floss_2.csd, for instance with this code:
EXAMPLE 14B02_tabs.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine wave table used by poscil below
instr 1
kPitch expseg 500, p3, 1000
aSine poscil .2, kPitch, giSine
out aSine
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
Now get the index of these two tabs in executing q.getDocument('cs_floss_1.csd') and
q.getDocument('cs_floss_2.csd') . This will show something like this:
So in my case the indices are 3 and 4.4 Now you can start, pause and stop any of these files with
commands like these:
4 If you have less or more csd tabs already while creating the new files, the index will be lower or higher.
q.play(3)
q.play(4)
q.stop(3)
q.stop(4)
.. you should be able to run both csds simultaneously. To stop all running files, use:
q.stopAll()
To set a csd as active, use setDocument(index). This will have the same effect as clicking on
the tab.
Called without any arguments, the method refers to the currently active csd; an index as argument links to a specific tab. Here is a Python code snippet which prints indices, file names and file paths of all tabs in CsoundQt:
index = 0
while q.getFileName(index):
    print 'index = %d' % index
    print '  File Name = %s' % q.getFileName(index)
    print '  File Path = %s' % q.getFilePath(index)
    index += 1
5 Different to most usages, name means here the full path including the file name.
Get and Set csd Text
With methods such as q.getOrc() and q.getSco(), you will get the full visible csd, the orc or the sco part as a unicode string.
You can also get the text for the <CsOptions>, the text for CsoundQt’s widgets and presets, or the
full text of this csd:
q.getOptionsText()
q.getWidgetsText()
q.getPresetsText()
q.getFullText()
If you select some text or some widgets, you will get the selection with these commands:
q.getSelectedText()
q.getSelectedWidgetsText()
As usual, you can specify any of the loaded csds via its index. So calling q.getOrc(3) instead of
q.getOrc() will return the orc text of the csd with index 3, instead of the orc text of the currently
active csd.
You will see your nice insertion in the csd file. In case you do not like it, you can choose Edit->Undo.
It does not make a difference for the CsoundQt editor whether the text has been typed by hand, or
by the internal Python script facility.
Note that the whole section will be overwritten with the string text.
Opcode Exists
You can ask whether a string is an opcode name or not with the function opcodeExists, for instance:
py> q.opcodeExists('line')
True
py> q.opcodeExists('OSCsend')
True
py> q.opcodeExists('Line')
False
py> q.opcodeExists('Joe')
False
EXAMPLE 14B03_score_generated.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 1
giSine ftgen 0, 0, 2^10, 10, 1 ; sine wave table used by poscil below
instr 1
iOctStart = p4 ;pitch in octave notation at start
iOctEnd = p5 ;and end
iDbStart = p6 ;dB at start
iDbEnd = p7 ;and end
kPitch expseg cpsoct(iOctStart), p3, cpsoct(iOctEnd)
kEnv linseg iDbStart, p3, iDbEnd
aSine poscil ampdb(kEnv), kPitch, giSine
iFad random p3/20, p3/5
aOut linen aSine, iFad, p3, iFad
out aOut
endin
</CsInstruments>
<CsScore>
i 1 0 10 ;will be overwritten by the python score generator
</CsScore>
</CsoundSynthesizer>
The following code will now insert 30 score events in the score section:
from random import uniform
numScoEvents = 30
sco = ''
for ScoEvent in range(numScoEvents):
start = uniform(0, 40)
dur = 2**uniform(-5, 3)
db1, db2 = [uniform(-36, -12) for x in range(2)]
oct1, oct2 = [uniform(6, 10) for x in range(2)]
scoLine = 'i 1 %f %f %f %f %d %d\n' % (start,dur,oct1,oct2,db1,db2)
sco = sco + scoLine
q.setSco(sco)
This generates a texture with either falling or rising gliding pitches. The durations are set in a way
that shorter durations have a bigger probability than larger ones. The volume and pitch ranges
allow many variations in the simple shape.
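The claim about the durations follows from sampling the exponent uniformly: in 2**uniform(-5, 3), five-eighths of the exponent range lies below zero, so about 62.5% of the durations come out shorter than one second. A quick check (illustrative):

```python
from random import seed, uniform

seed(1)                                  # deterministic for this check
durs = [2 ** uniform(-5, 3) for _ in range(10000)]
short = sum(1 for d in durs if d < 1.0)  # the exponent fell below 0
ratio = short / 10000.0                  # expected to be near 5/8 = 0.625
```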
Widgets
Creating a Label
Click on the Widgets button to see the widgets panel. Then execute this command in the Python
Console:
q.createNewLabel()
The properties dialog of the label pops up. Type Hello Label! or something like this as text.
When you click Ok, you will see the label widget in the panel, and a strange unicode string as return
value in the Python Console:
Instead of having a live talk with the properties dialog, we can specify all properties as arguments
for the createNewLabel method:
q.createNewLabel(200, 100, "second_label")
A new label has been created—without opening the properties dialog—at position x=200, y=100
with the name second_label. If you want to create a widget not in the active document, but in
another tab, you can also specify the tab index. The following command will create a widget at
the same position and with the same name in the first tab:
q.createNewLabel(200, 100, "second_label", 0)
The setWidgetProperty method needs the ID of a widget first. This can be expressed either as
channel name (disp_chan_01) as in the command above, or as uuid. As I got the string u’{a71c0c67-
3d54-4d4a-88e6-8df40070a7f5}’ as uuid, I can also write:
q.setWidgetProperty(u'{a71c0c67-3d54-4d4a-88e6-8df40070a7f5}',
'QCS_label', 'Hey Joeboe!')
For humans, referring to the channel name as ID is certainly preferable.9 But as the createNew… method returns the uuid, you can use it implicitly, for instance in this command:
q.setWidgetProperty(q.createNewLabel(70, 70, "WOW"), "QCS_fontsize", 18)
9 Note that two widgets can share the same channel name (for instance a slider and a spinbox). In this case, referring to a widget via its channel name is not possible at all.
py> q.listWidgetProperties("disp_chan_01")
(u'QCS_x', u'QCS_y', u'QCS_uuid', u'QCS_visible', u'QCS_midichan',
u'QCS_midicc', u'QCS_label', u'QCS_alignment', u'QCS_precision',
u'QCS_font', u'QCS_fontsize', u'QCS_bgcolor', u'QCS_bgcolormode',
u'QCS_color', u'QCS_bordermode', u'QCS_borderradius', u'QCS_borderwidth',
u'QCS_width', u'QCS_height', u'QCS_objectName')
listWidgetProperties returns all properties in a tuple. We can query the value of a single property
with the function getWidgetProperty, which takes the uuid and the property as inputs, and returns
the property value. So this code snippet asks for all property values of our Display widget:
widgetID = "disp_chan_01"
properties = q.listWidgetProperties(widgetID)
for property in properties:
    propVal = q.getWidgetProperty(widgetID, property)
    print property + ' = ' + str(propVal)
Returns:
QCS_x = 50
QCS_y = 150
QCS_uuid = {a71c0c67-3d54-4d4a-88e6-8df40070a7f5}
QCS_visible = True
QCS_midichan = 0
QCS_midicc = -3
QCS_label = Hey Joeboe!
QCS_alignment = left
QCS_precision = 3
QCS_font = Arial
QCS_fontsize = 10
QCS_bgcolor = #ffffff
QCS_bgcolormode = False
QCS_color = #000000
QCS_bordermode = border
QCS_borderradius = 1
QCS_borderwidth = 1
QCS_width = 80
QCS_height = 25
QCS_objectName = disp_chan_01
As always, the uuid strings of other csd tabs can be accessed via the index.
Create ten knobs with the channel names partial_1, partial_2 etc, and the according labels
amp_part_1, amp_part_2 etc in the currently active document:
knobs = []
for no in range(10):
    knobs.append(q.createNewKnob(100*no, 5, "partial_"+str(no+1)))
    q.createNewLabel(100*no+5, 90, "amp_part_"+str(no+1))
Modify the maximum of each knob so that the higher partials have less amplitude range (set max-
imum to 1, 0.9, 0.8, … 0.1):
for knob in range(10):
    q.setWidgetProperty(knobs[knob], "QCS_maximum", 1-knob/10.0)
Deleting widgets
You can delete a widget using the method destroyWidget. You have to pass the widget’s ID, again
either as channel name or (better) as uuid string. This will remove the first knob in the example
above:
q.destroyWidget("partial_1")
Now we will ask for the values of these widgets10 with the methods getChannelValue and getChan-
nelString:
py> q.getChannelValue('level')
0.0
py> q.getChannelString("level")
u''
py> q.getChannelValue('message')
0.0
py> q.getChannelString('message')
u'Display'
As you can see, it depends on the type of the widget whether to query its value by getChannelValue or
getChannelString. Although CsoundQt will not return an error, it makes no sense to ask a slider for
its string (as its value is a number), or a display for its number (as its value is a string).
With the methods setChannelValue and setChannelString we can change the main content of a
widget very easily:
py> q.setChannelValue("level", 0.5)
py> q.setChannelString("message", "Hey Joe again!")
This is much handier than the general method using setWidgetProperty:
py> q.setWidgetProperty("level", "QCS_value", 1)
py> q.setWidgetProperty("message", "QCS_label", "Nono")
Presets
Now right-click in the widget panel and choose Store Preset -> New Preset:
10 Here again accessed by the channel name. Of course accessing by uuid would also be possible (and safer, as
explained above).
You can (but need not) enter a name for the preset. The important thing here is the number of the
preset (here 0). Now change the value of the slider and the text of the display widget, and save
again as a preset, now preset 1. Then execute this:
q.loadPreset(0)
You will see the content of the widgets restored to the first preset. With
q.loadPreset(1)
you can switch to the second preset.
Like all Python scripting functions in CsoundQt, these methods can be used not only from the
Python Console or the Python Scratch Pad, but also from inside any csd. The following example shows
how to switch all the widgets to other predefined states, in this case controlled by the score. You will
see the widgets for the first three seconds in Preset 0, then for the next three seconds in Preset 1,
and finally again in Preset 0:
EXAMPLE 14B04_presets.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
pyinit
instr loadPreset
index = p4
pycalli "q.loadPreset", index
endin
</CsInstruments>
<CsScore>
i "loadPreset" 0 3 0
i "loadPreset" + . 1
i "loadPreset" + . 0
</CsScore>
</CsoundSynthesizer>
;example by tarmo johannes and joachim heintz
Csound Functions
Several functions can interact with the Csound engine, for example to query information about it.
Note that the functions getSampleRate, getKsmps, getNumChannels and getCurrentCsound refer
to a running instance of Csound.
py> q.getVersion() # CsoundQt API version
u'1.0'
py> q.getSampleRate()
44100.0
py> q.getKsmps()
32
py> q.getNumChannels()
1
py> q.getCurrentCsound()
CSOUND (C++ object at: 0x2fb5670)
With getCsChannel, getCsStringChannel and setCsChannel you can access Csound channels di-
rectly, independently of widgets. They are useful when testing a csd for use with the Csound
API (in another application, a csLADSPA or Cabbage plugin, an Android application) or similar. These
are some examples, executed on a running csd instance:
py> q.getCsChannel('my_num_chn')
0.0
py> q.getCsStringChannel('my_str_chn')
u''
py> q.getCsChannel('my_num_chn')
1.1
py> q.getCsStringChannel('my_str_chn')
u'Hey Csound'
If you have a function table in your running Csound instance which has for instance been created
with the line giSine ftgen 1, 0, 1024, 10, 1, you can query it with getTableArray like this:
py> q.getTableArray(1)
MYFLT (C++ object at: 0x35d1c58)
Finally, you can register a Python function as a callback to be executed in between Csound's
processing blocks. The first argument should be the text that should be called on every pass. It
can include arguments or variables which will be evaluated every time. You can also set a number
of periods to skip, to avoid too much overhead.
registerProcessCallback(QString func, int skipPeriods = 0)
You can register the python text to be executed on every Csound control block callback, so you
can execute a block of code, or call any function which is already defined.
Creating Own GUIs with PythonQt
Dialog Box
Sometimes it is practical to ask the user just one question (a number or a name of something) and
then execute the rest of the code; this can also be done inside a csd with the Python opcodes. In Qt,
the class that creates a dialog for a single question is called QInputDialog.
To use this or any other Qt classes, it is necessary to import the PythonQt and its Qt submodules.
In most cases it is enough to add this line:
from PythonQt.Qt import *
or
from PythonQt.QtGui import *
First an object of QInputDialog must be created; then you can use its methods getInt, getDouble,
getItem or getText to read the input in the form you need. This is a basic example:
from PythonQt.Qt import *
inpdia = QInputDialog()
myInt = inpdia.getInt(inpdia,"Example 1","How many?")
print myInt
# example by tarmo johannes
Note that the variable myInt is now set to a value which remains in your Python interpreter. Your
Python Console may look like this when executing the code above, and then ask for the value of
myInt:
py>
12
Evaluated 5 lines.
py> myInt
12
Depending on the value of myInt, you can do funny or serious things. This code re-creates the
Dialog Box whenever the user enters the number 1:
from PythonQt.Qt import *
def again():
    inpdia = QInputDialog()
    myInt = inpdia.getInt(inpdia,"Example 1","How many?")
    if myInt == 1:
        print "If you continue to enter '1'"
        print "I will come back again and again."
        again()
    else:
        print "Thanks - Leaving now."
again()
# example by joachim heintz
A simple example follows, showing how your own GUI can be embedded in your Csound code. Here,
Csound waits for the user input, and then prints out the entered value as the Csound variable
giNumber:
EXAMPLE 14B05_dialog.csd
<CsoundSynthesizer>
<CsOptions>
-n
</CsOptions>
<CsInstruments>
ksmps = 32
pyinit
pyruni {{
from PythonQt.Qt import *
dia = QInputDialog()
dia.setDoubleDecimals(4)
}}
giNumber pyevali {{
dia.getDouble(dia,"CS question","Enter number: ")
}} ; get the number from Qt dialog
instr 1
print giNumber
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;example by tarmo johannes
List of PyQcsObject Methods in CsoundQt
Opcode Exists
bool opcodeExists(QString opcodeName)
Create Widgets
QString createNewLabel(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewDisplay(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewScrollNumber(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewLineEdit(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewSpinBox(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewSlider(
QString channel, int index = -1
)
QString createNewSlider(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewButton(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewKnob(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewCheckBox(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewMenu(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewMeter(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewConsole(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewGraph(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
QString createNewScope(
int x = 0, int y = 0, QString channel = QString(), int index = -1
)
Query Widgets
QVariant getWidgetProperty(QString widgetid, QString property, int index= -1)
double getChannelValue(QString channel, int index = -1)
QString getChannelString(QString channel, int index = -1)
QStringList listWidgetProperties(QString widgetid, int index = -1)
QStringList getWidgetUuids(int index = -1)
Modify Widgets
void setWidgetProperty(
QString widgetid, QString property, QVariant value, int index= -1
)
void setChannelValue(QString channel, double value, int index = -1)
void setChannelString(QString channel, QString value, int index = -1)
Delete Widgets
bool destroyWidget(QString widgetid)
Presets
void loadPreset(int presetIndex, int index = -1)
Csound / API
QString getVersion()
void refresh()
void setCsChannel(QString channel, double value, int index = -1)
void setCsChannel(QString channel, QString value, int index = -1)
double getCsChannel(QString channel, int index = -1)
QString getCsStringChannel(QString channel, int index = -1)
CSOUND* getCurrentCsound()
double getSampleRate(int index = -1)
int getKsmps(int index = -1)
int getNumChannels(int index = -1)
MYFLT *getTableArray(int ftable, int index = -1)
void registerProcessCallback(
QString func, int skipPeriods = 0, int index = -1
)
14 E. GLOSSARY
Math Symbols
Multiplication in formulas is usually denoted with the dot operator:
2⋅3=6
In text, the asterisk * is also used (as in Csound and other programming languages), as is the cross ×.
Csound Terms
block size is the number of samples which are processed as vector or “block”. In Csound, we
usually speak of ksmps: The number of samples in one control period.
control cycle, control period or k-loop is a pass during the performance of an instrument, in which
all k- and a-variables are renewed. The time for one control cycle is measured in samples and
determined by the ksmps constant in the orchestra header. For a sample rate of 44100 Hz and
a ksmps value of 32, the time for one control cycle is 32/44100 = 0.000726 seconds. See the
chapter about Initialization And Performance Pass for a detailed discussion.
control rate or k-rate (kr) is the number of control cycles per second. It can be calculated as the
relationship of the sample rate sr and the number of samples in one control period ksmps. For a
sample rate of 44100 Hz and a ksmps value of 32, the control rate is 1378.125, so 1378.125 control
cycles will be performed in one second. (Note that this value is not necessarily an integer, whilst
ksmps is always an integer.)
.csd file is a text file containing a Csound program to be compiled and run by Csound. This file
format contains several sections or tags (similar to XML or HTML), amongst them the CsOptions
(Csound options), the CsInstruments (a collection of the Csound instruments) and the CsScore
(the Csound score).
DSP means Digital Signal Processing and is used as a general term to describe any modification
we apply on sounds in the digital domain.
f-statement or function table statement is a score line which starts with "f" and generates a func-
tion table. See the chapter about function tables for more information. A dummy f-statement is
a statement like "f 0 3600" which looks like a function table statement, but instead of generating
any table, it serves just for running Csound for a certain time (here 3600 seconds = 1 hour). (This
is usually no longer required, since Csound now runs endlessly with an empty score.)
frequency domain means to look at a signal considering its frequency components. The math-
ematical procedure to transform a time-domain signal into the frequency domain is called Fourier
Transform. See the chapters about Additive Synthesis and about Spectral Processing.
functional style is a way of coding in which a function is written with its arguments following in
parentheses. Traditionally, Csound uses another convention to write code, but
since Csound 6 functional style can be used as well. See the functional syntax chapter for more
information.
GEN routine is a subroutine which generates a function table (mostly called buffer in other audio
programming languages). GEN routines serve very different purposes; they can load a sound file (GEN01),
create segmented lines (GEN05 and others), composite waveforms (GEN10 and others), window
functions (GEN20) or random distributions (GEN40). See the chapter about function tables and
the GEN Routines Overview in the Csound Manual.
GUI Graphical User Interface refers to a system of on-screen sliders, buttons etc. used to interact
with Csound, normally in real-time.
i-time or init-time or i-rate denotes the moment in which an instrument instance is initialized.
In this initialization all variables starting with an "i" get their values. These values are set only
once per instrument call. See the chapter about Initialization And Performance Pass for more
information.
k-time is the time during the performance of an instrument, after the initialization. Variables start-
ing with a "k" can alter their values in each control cycle. See the chapter about Initialization And
Performance Pass for more information.
opcode is a basic unit in Csound which performs a particular job, for instance generate noise, read an
audio file, create an envelope or oscillate through a table. In other audio programming languages it is
called UGen (Unit Generator) or object. An opcode can also be compared to a built-in function (e.g. in
Python), whereas a User Defined Opcode (UDO) can be compared to a function which is written by
the user. For an overview, see the Opcode Guide.
options, also known as Csound options or command line flags, contain important de-
cisions about how Csound has to run a .csd file. The -o option, for instance, tells Csound whether
to output audio in realtime to the audio card, or to a sound file instead. See the overview in the
Csound Manual for a detailed list of these options. Options are usually specified in the CsOptions
tag of a .csd file. Modern frontends mostly pass the options to Csound via their settings.
orchestra is a collection of Csound instruments in a program, or, referring to the .csd file, the CsIn-
struments tag. The term is somewhat dated, as it stems from the early years of Csound, when an
.orc file was separated from the .sco (score) file.
p-field refers to a parameter field in the score section of a .csd file. A p-field can be compared to
a column in a spreadsheet or table. An instrument, called by a line of score, receives each p-field
parameter as p1, p2 etc.: p1 will receive the parameter of the first column, p2 the parameter of the
second column, and so on.
score, as in the Csound score, is the section of Csound code where events are written in the score
language (which is completely different from the Csound orchestra language). The main events
are instrument events, where each line starts with the character i. Another event type is the f
event which creates a function table. In modern Csound usage the score can be omitted, as all
score jobs can also be done from inside the Csound instruments. See the score chapter for more
information.
time domain means to look at a signal considering the changes of amplitudes over time. It is the
common way to plot audio signals (time as x-axis, amplitudes as y-axis).
time stretching can be done in various ways in Csound. See filescal, sndwarp, waveset, pvstanal,
mincer, pvsfread, pvsdiskin and the Granular Synthesis opcodes.
UDO or User-Defined Opcode is the definition of an opcode written in the Csound language itself.
See the UDO chapter for more information.
widget normally refers to some sort of standard GUI element such as a slider or a button. GUI
widgets normally permit some user modifications such as size, positioning, colours etc. A variety
of options are available for the creation of widgets usable by Csound, from its own built-in FLTK
widgets to those provided by frontends such as CsoundQt, Cabbage and Blue.
14 F. LINKS
Downloads
Csound FLOSS Manual Files: https://csound-flossmanual.github.io/
Csound: http://csound.com/download.html
CsoundQt: http://github.com/CsoundQt/CsoundQt/releases
Cabbage: http://cabbageaudio.com/
Blue: http://blue.kunstmusik.com/
WinXound: http://mnt.conts.it/winxound/
Community
The Csound community home page is the main place for news, basic information, links and more.
The Csound Journal is a major source for different aspects of working with Csound.
The traditional place for questions and answers is the Csound Mailing List. To subscribe to
the Csound User Discussion List, go to https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND.
After subscribing, put questions to [email protected]. You can search the
list archive at nabble.com.
Blue: https://lists.sourceforge.net/mailman/listinfo/bluemusic-users
Cabbage: http://forum.cabbageaudio.com/
Bug Tracker
It should be distinguished whether an issue belongs to Csound or to one of the frontends.
Rule of thumb: if an issue has nothing to do with a graphical element or any frontend-specific
feature, it belongs to Csound. Otherwise it belongs to CsoundQt, Cabbage or Blue.
- Core Csound
- CsoundQt
- Cabbage
- Blue
Tutorials
A Beginning Tutorial is a short introduction from Barry Vercoe, the “father of Csound”.
An Instrument Design TOOTorial by Richard Boulanger (1991) is another classic introduction, still
very much worth reading.
Introduction to Sound Design in Csound also by Richard Boulanger, is the first chapter of the fa-
mous Csound Book (2000).
A Csound Tutorial by Michael Gogins (2009), one of the main Csound Developers.
Video Tutorials
An overview playlist by Alex Hofmann (from some years ago):
http://www.youtube.com/view_play_list?p=3EE3219702D17FD3
CsoundQt (QuteCsound)
QuteCsound: Where to start?
http://www.youtube.com/watch?v=0XcQ3ReqJTM
First instrument:
http://www.youtube.com/watch?v=P5OOyFyNaCA
Using MIDI:
http://www.youtube.com/watch?v=8zszIN_N3bQ
About configuration:
http://www.youtube.com/watch?v=KgYea5s8tFs
Presets tutorial:
http://www.youtube.com/watch?v=KKlCTxmzcS0
http://www.youtube.com/watch?v=aES-ZfanF3c
Csound Conferences
See the list at https://csound.com/conferences.html
Example Collections
Csound Realtime Examples by Iain McCurdy is certainly the most extensive, proven and up-to-
date collection.
The Amsterdam Catalog by John-Philipp Gather is particularly interesting because of the adaptation
of Jean-Claude Risset's famous "Introductory Catalogue of Computer Synthesized Sounds" from
1969.
Books
Victor Lazzarini’s Computer Music Instruments (2017) has a lot of Csound examples, in conjunc-
tion with Python and Faust.
Csound — A Sound and Music Computing System (2016) by Victor Lazzarini and others is the new
Csound Standard Book, covering all parts of the Csound audio programming language in depth.
Martin Neukom’s Signals, Systems and Sound Synthesis (2013) is a comprehensive computer mu-
sic tutorial with a lot of Csound in it.
Csound Power! by Jim Aikin (2012) is a perfect up-to-date introduction for beginners.
The Audio Programming Book edited by Richard Boulanger and Victor Lazzarini (2011) is a major
source with many references to Csound.
The Csound Book (2000) edited by Richard Boulanger is still the compendium for anyone who
really wants to go in depth with Csound.
14 G. CREDITS
This textbook is a collective effort. If someone's name is missing from the list below, please contact
us.
The goal of this work is to share knowledge about the Free (Libre) Open Source Software Csound
with anyone. So any usage which contributes to this goal is welcome. The license below is to serve
this goal. If you are not sure about a usage, please contact us, and we will find a solution.
LICENSE
copyright (c) 2011-2023 by Joachim Heintz and contributors
The content of this book is released under creative commons license CC-BY 4.0. In short, anyone
is allowed to copy, redistribute and transform the content, as long as appropriate credit is given
to the source. For more information see https://creativecommons.org/licenses/by/
4.0/.
AUTHORS
00 INTRODUCTION
PREFACE Joachim Heintz, Andres Cabrera, Alex Hofmann, Iain McCurdy, Alexandre Abrioux
01 GETTING STARTED
Joachim Heintz
02 HOW TO
Joachim Heintz
03 CSOUND LANGUAGE
A. INITIALIZATION AND PERFORMANCE PASS Joachim Heintz
B. LOCAL AND GLOBAL VARIABLES Joachim Heintz, Andres Cabrera, Iain McCurdy
04 SOUND SYNTHESIS
A. ADDITIVE SYNTHESIS Andres Cabrera, Joachim Heintz, Bjorn Houdorf
D. FREQUENCY MODULATION Alex Hofmann, Bjorn Houdorf, Marijana Janevska, Joachim Heintz
05 SOUND MODIFICATION
A. ENVELOPES Iain McCurdy
G. GRANULAR SYNTHESIS Iain McCurdy, Oeyvind Brandtsegg, Bjorn Houdorf, Joachim Heintz
06 SAMPLES
A. RECORD AND PLAY SOUNDFILES Iain McCurdy, Joachim Heintz
B. RECORD AND PLAY BUFFERS Iain McCurdy, Joachim Heintz, Andres Cabrera
07 MIDI
A. RECEIVING EVENTS BY MIDIIN Iain McCurdy
08 OTHER COMMUNICATION
A. OPEN SOUND CONTROL Alex Hofmann, Joachim Heintz
10 CSOUND FRONTENDS
A. CSOUNDQT Andrés Cabrera, Joachim Heintz, Peiman Khosravi
11 CSOUND UTILITIES
A. ANALYSIS Iain McCurdy
D. CSOUND IN IOS Nicholas Arner, Nikhil Singh, Richard Boulanger, Alessandro Petrolati
13 EXTENDING CSOUND
A. DEVELOPING PLUGIN OPCODES Victor Lazzarini
14 MISCELLANEA
A. OPCODE GUIDE Joachim Heintz, Iain McCurdy
B. METHODS OF WRITING CSOUND SCORES Iain McCurdy, Joachim Heintz, Jacob Joaquin, Menno
Knevel
15 A. DIGITAL AUDIO
At a purely physical level, sound is simply a mechanical disturbance of a medium. The medium
in question may be air, a solid, a liquid, a gas or a combination of several of these. This disturbance in
the medium causes molecules to move back and forth in a spring-like manner. As one molecule
hits the next, the disturbance moves through the medium, causing sound to travel. These so-called
compressions and rarefactions in the medium can be described as sound waves. The simplest
type of waveform, describing what is referred to as simple harmonic motion, is a sine wave.
Each time the waveform signal goes above zero, the molecules are in a state of compression, mean-
ing that each molecule within the waveform disturbance is pushing into its neighbour. Each time
the waveform signal drops below zero, the molecules are in a state of rarefaction, meaning the
molecules are pulling away from their neighbours. When a waveform shows a clearly repeating pat-
tern, as in the case above, it is said to be periodic. Periodic sounds give rise to the sensation of
pitch.
- Period: The time it takes for a waveform to complete one cycle, measured in seconds.
- Frequency: The number of cycles or periods per second, measured in Hertz (Hz). If a sound
has a frequency of 440 Hz it completes 440 cycles every second. Read more about frequency
in the next chapter.
- Phase: This is the starting point of a waveform. It can be expressed in degrees or in radians.
A complete cycle of a waveform will cover 360 degrees or 2π radians. A sine with a phase
of 90° or π/2 results in a cosine.
- Amplitude: Amplitude is represented by the y-axis of a plotted pressure wave. The strength
at which the molecules pull or push away from each other, which will also depend upon the
resistance offered by the medium, determines how far above and below zero (the point
of equilibrium) the wave fluctuates. The greater the y-value, the greater the amplitude of our
wave. The greater the compressions and rarefactions, the greater the amplitude.
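As a small numeric illustration of these terms (standalone Python, not Csound code; the 440 Hz value is just an example):

```python
import math

frequency = 440.0          # cycles per second (Hz)
period = 1.0 / frequency   # seconds per cycle
print(round(period * 1000, 3))  # about 2.273 milliseconds per cycle

# a sine with a phase offset of 90 degrees (pi/2 radians) is a cosine
t = 0.3
assert abs(math.sin(t + math.pi / 2) - math.cos(t)) < 1e-12
```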
Transduction
The analogue sound waves we hear in the world around us need to be converted into an electrical
signal in order to be amplified or sent to a soundcard for recording. The process of converting
acoustical energy in the form of pressure waves into an electrical signal is carried out by a device
known as a transducer.
A transducer, which is usually found in microphones, produces a changing electrical voltage that
mirrors the changing compression and rarefaction of the air molecules caused by the sound wave.
The continuous variation of pressure is therefore transduced into continuous variation of voltage.
The greater the variation of pressure the greater the variation of voltage that is sent to the com-
puter.
Ideally, the transduction process should be as transparent as possible: whatever goes in should
come out as a perfect analogy in a voltage representation. In reality, however, this will not be the
case. Low-quality devices add noise and distortion, while high-quality devices add certain character-
istics like warmth or transparency.
Sampling
The analogue voltage that corresponds to an acoustic signal changes continuously, so that at
each point in time it will have a different value. It is not possible for a computer to receive the
value of the voltage for every instant, because of the physical limitations of both the computer and
the data converters (remember also that there is an infinite number of instants between any
two instants!).
What the soundcard can do, however, is to measure the level of the analogue voltage at intervals
of equal duration. This is how all digital recording works, and this is known as sampling. The
result of this sampling process is a discrete, or digital, signal which is no more than a sequence
of numbers corresponding to the voltage at each successive moment of sampling.
Below is a diagram showing a sinusoidal waveform. The vertical lines that run through the dia-
gram represent the points in time when a snapshot is taken of the signal. After the sampling has
taken place, we are left with what is known as a discrete signal, consisting of a collection of audio
samples, as illustrated in the bottom half of the diagram.
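This snapshot-taking can be mimicked in a few lines of Python: a continuous sine function is evaluated only at the equally spaced sample times n/sr (a standalone sketch with arbitrary example values, not Csound code):

```python
import math

sr = 44100       # sample rate: snapshots per second
freq = 100       # frequency of the "analogue" sine in Hz

# one period of the sine lasts 1/freq seconds, i.e. sr/freq sample periods
num_samples = sr // freq    # 441 samples represent one cycle
samples = [math.sin(2 * math.pi * freq * n / sr) for n in range(num_samples)]

print(len(samples))   # 441 discrete values
```

Each entry of `samples` corresponds to one vertical line in the diagram: the value of the signal at one sampling instant.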
Sample Rate and the Sampling Theorem
It is important to remember that each sample represents the amount of voltage, positive or neg-
ative, that was present in the signal at the point in time at which the sample or snapshot was
taken.
The same principle applies to recording of live video: a video camera takes a sequence of pictures
of motion and most video cameras will take between 30 and 60 still pictures a second. Each
picture is called a frame and when these frames are played in sequence at a rate corresponding to
that at which they were taken we no longer perceive them as individual pictures, we perceive them
instead as a continuous moving image.
According to the sampling theorem, a soundcard or any other digital recording device will not be able to
represent any frequency above half the sampling rate. Half the sampling rate is also referred to as
the Nyquist frequency, after the Swedish-born physicist Harry Nyquist, who formalized the theory in the
1920s. What it all means is that any signal with frequencies above the Nyquist frequency will be
misrepresented and will actually produce a frequency lower than the one being sampled. When
this happens it results in what is known as aliasing or foldover.
Aliasing
Here is a graphical representation of aliasing.
The sinusoidal waveform in blue is being sampled at the vertical black lines. The line that joins
the red circles together is the captured waveform. As you can see, the captured waveform and the
original waveform express different frequencies.
Here is another example for a sample rate of 40 kHz, showing in the upper section a sine of 10
kHz, and in the lower section a sine of 30 kHz:
We can see that if the sample rate is 40 kHz there is no problem with sampling a signal that is 10
kHz. In the second example, on the other hand, it can be seen that a 30 kHz waveform is not going
to be correctly sampled. In fact we end up with a waveform that is 10 kHz, rather than 30 kHz. This
may seem like an academic proposition, in that we will never be able to hear a 30 kHz waveform
anyway, but some synthesis and DSP procedures will produce these frequencies as
unavoidable by-products, and we need to ensure that they do not result in unwanted artifacts.
In computer music we can produce any frequency internally, much higher than we can hear, and
much higher than the Nyquist frequency. This may occur intentionally, or by accident, for instance
when we multiply a frequency of 2000 Hz by the 22nd harmonic, resulting in 44000 Hz. In the
following example, instrument 1 plays a 1000 Hz tone, first directly, and then as the result of a 43100
Hz input which is 1000 Hz lower than the sample rate of 44100 Hz. Instrument 2 demonstrates
unwanted aliasing as a result of harmonics beyond Nyquist: the 22nd partial of 1990 Hz is 43780
Hz, which sounds as 44100-43780 = 320 Hz.
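The folding described above can be computed directly. The sketch below (standalone Python, not part of the Csound example) folds a frequency into the representable range by taking its distance to the nearest multiple of the sample rate:

```python
def alias_frequency(f, sr):
    """Frequency actually heard when a sine of f Hz is sampled at sr Hz:
    the distance of f to the nearest multiple of sr."""
    folded = f % sr
    return min(folded, sr - folded)

print(alias_frequency(43100, 44100))   # 1000: the first case above
print(alias_frequency(43780, 44100))   # 320: the aliased 22nd partial
print(alias_frequency(30000, 40000))   # 10000: the 30 kHz sine at sr 40 kHz
```

Frequencies below the Nyquist frequency pass through unchanged, since their nearest multiple of sr is zero.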
EXAMPLE 15A01_Aliasing.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giHarmonics ftgen 0, 0, 4096, 10, 1, 0,0,0,0,0,0,0,0,0, 1, 0,0,0,0,0,0,0,0,0,0, 1 ;partials 1, 11 and 22
instr 1
asig poscil .1, p4
out asig, asig
endin
instr 2
asig poscil .2, p4, giHarmonics
out asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 2 1000 ;1000 Hz sine
i 1 3 2 43100 ;43100 Hz sine sounds like 1000 Hz because of aliasing
i 2 6 4 1990 ;1990 Hz with harmonics 1, 11 and 22
;results in 1990*22=43780 Hz so aliased 320 Hz
;for the highest harmonic
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The same phenomenon happens in film and video, too. You may recall having seen wagon wheels
apparently turn in the wrong direction in old Westerns. Let us say for example that a camera is
taking 30 frames per second of a moving wheel. If the wheel is completing one
rotation in exactly 1/30th of a second, then every picture looks the same and as a result the wheel
appears to be motionless. If the wheel turns slightly slower, completing just under a full rotation
between snapshots, it will appear as if the wheel is slowly turning backwards, because each frame
catches the wheel just short of a complete rotation.
As an aside, it is worth observing that a lot of modern ’glitch’ music intentionally makes a feature
of the spectral distortion that aliasing induces in digital audio. Csound is perfectly capable of
imitating the effects of aliasing while being run at any sample rate - if that is what you desire.
Audio CD quality uses a sample rate of 44100 Hz (44.1 kHz). This means that CD quality can only
represent frequencies up to 22050 Hz. Humans typically have an absolute upper limit of hearing
of about 20 kHz, thus making 44.1 kHz a reasonable standard sampling rate. Higher sample rates
offer better time resolution, and the Nyquist frequency is not that close to the limit of hearing. On
the other hand, twice the sample rate creates twice as much data. The choice has to be made
depending on the situation; in this book we stick to the sample rate of 44100 Hz for the examples.
Bits, Bytes and Words
All digital computers represent data as a collection of bits (short for binary digit). A bit is the
smallest possible unit of information. One bit can only be in one of two states: off or on, 0 or 1. All
computer data — a text file on disk, a program in memory, a packet on a network — is ultimately a
collection of bits.
Bits in groups of eight are called bytes, and one byte historically represented a single character of
data in the computer memory. Mostly one byte is the smallest unit of data, and bigger units will be
created by using two, three or more bytes. A good example is the number of bytes which is used
to store the number for one audio sample. In early games it was 1 byte (8 bit), on a CD it is 2 bytes
(16 bit), in sound cards it is often 3 bytes (24 bit), in most audio software it is internally 4 bytes (32
bit), and in Csound 8 bytes (64 bit).
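As a side calculation, the resulting raw data rate follows directly from these numbers; a small Python sketch:

```python
# Sketch: uncompressed data rate = sample rate * bytes per sample * channels.
def data_rate(sr, bits, channels):
    return sr * (bits // 8) * channels   # bytes per second

# CD quality: 44100 Hz, 16 bit, stereo
print(data_rate(44100, 16, 2))   # 176400 bytes per second
# 24-bit stereo at the same sample rate
print(data_rate(44100, 24, 2))   # 264600 bytes per second
```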
The word length of a computer is the number of bits which is handled as a unit by the processor.
The transition from 32-bit to 64-bit word length around 2010 in the most commonly used proces-
sors required new compilations of Csound and other applications, in particular for the Windows
installers. To put it simply: a 32-bit machine needs an application compiled for 32-bit, a 64-bit
machine needs an application compiled for 64-bit.
Bit-depth Resolution
The sample rate determines the finer or coarser resolution in time. The number of bits for each
single sample determines the finer or coarser resolution in amplitude. The standard resolution for
CDs is 16 bit, which allows for 65536 different possible amplitude levels, 32767 on either side of
the zero axis. Using bit depths lower than 16 is not a good idea as it will result in noise being added
to the signal. This is referred to as quantization noise and is a result of amplitude values being
excessively rounded up or down when being digitized.
The figure below shows the quantization issue in a simplified version, assuming a depth of only 3 bit.
This is like a grid of 2^3 = 8 possible levels which can be used for each sample. At each sampling
period the soundcard snaps the measured amplitude to the nearest possible vertical position.
For a signal with lower amplitude the distortion would be even stronger.
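The snapping to a limited grid of levels can be sketched in Python (a hypothetical helper, not Csound code):

```python
# Sketch: quantize a sample (in the range -1..1) to a given bit depth,
# illustrating how few levels a 3-bit resolution offers.
def quantize(value, bits):
    levels = 2 ** (bits - 1)          # levels on either side of the zero axis
    return round(value * levels) / levels

print(quantize(0.337, 3))    # 0.25 -- snapped to the nearest of 8 levels
print(quantize(0.337, 16))   # very close to 0.337 with 65536 levels
```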
Figure 82.5: Inaccurate amplitude values due to insufficient bit depth resolution
Quantization noise becomes most apparent when trying to represent low amplitude (quiet)
sounds. Frequently a tiny amount of noise, known as a dither signal, will be added to digital audio
before conversion back into an analogue signal. Adding this dither signal will actually reduce the
more noticeable noise created by quantization. As higher bit depth resolutions are employed in
the digitizing process the need for dithering is reduced. A general rule is to use the highest bit
depth available.
Many electronic musicians make use of deliberately low bit depth quantization in order to add
noise to a signal. The effect is commonly known as bit-crunching and is easy to implement in
Csound. Example 05F02 in chapter 05F shows one possibility.
ADC / DAC
The entire process, as described above, of taking an analogue signal and converting it to a digital
signal is referred to as analogue to digital conversion, or ADC. Of course digital to analogue conver-
sion, DAC, is also possible. This is how we get to hear our music through our PC’s headphones or
speakers. If a sound is played back or streamed, the software will send a series of numbers to the
soundcard. The soundcard converts these numbers back to voltages. When the voltages reach
the loudspeaker they cause the loudspeaker's membrane to move inwards and outwards. This
induces a disturbance in the air around the speaker — compressions and rarefactions as described
at the beginning of this chapter — resulting in what we perceive as sound.
15 B. PITCH AND FREQUENCY
Pitch and frequency are related but different terms.1 Pitch is used by musicians to describe the
“height” of a tone, most obvious on a keyboard. Frequency is a technical term. We will start with
the latter and then return to pitch in some of its numerous aspects, including intervals, tuning
systems and different conversions between pitch and frequency in Csound.
Frequencies
As mentioned in the previous chapter, frequency is defined as the number of cycles or periods per
second. The SI unit is Hertz where 1 Hertz means 1 period per second. If a tone has a frequency
of 100 Hz it completes 100 cycles every second. If a tone has a frequency of 200 Hz it completes
200 cycles every second.
Given a tone's frequency, the time for one period can be calculated straightforwardly. For 100
periods per second (100 Hz), the time for one period is 1/100 or 0.01 seconds. For 200 periods
per second (200 Hz), the time for each period is only half as much: 1/200 or 0.005 seconds.
Mathematically, the period is the reciprocal of the frequency and vice versa. In equation form, this
is expressed as follows:
Frequency = 1 / Period

Period = 1 / Frequency
Wavelength
In physical reality, one cycle of a periodic sound can not only be measured in time, but also as
extension in space. This is called the wavelength. It is usually abbreviated with the Greek letter λ
(lambda). It can be calculated as the ratio between the velocity and the frequency of the wave.
λ = Velocity / Frequency
As the velocity of a sound in air (at 20° Celsius) is about 340 m/s, we can calculate the wavelength
of a sound as
1 Similar to volume and amplitude – see next chapter.
λ = (340 m/s) / (Number of Cycles per second) = (340 / Number of Cycles) m
For instance, a sine wave of 1000 Hz has a length of approximately 340/1000 m = 34 cm, whereas
a wave of 100 Hz has a length of 340/100 m = 3.4 m.
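Both relations can be sketched in Python (the helper names are ours):

```python
# Sketch: period (in seconds) and wavelength (in metres) of a frequency.
def period(freq):
    return 1 / freq

def wavelength(freq, velocity=340):   # speed of sound in air at ~20 deg C
    return velocity / freq

print(period(100))        # 0.01 s
print(wavelength(1000))   # 0.34 m, i.e. 34 cm
print(wavelength(100))    # 3.4 m
```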
EXAMPLE 15B01_PeriodicAperiodic.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
instr SineToNoise
kMinFreq = expseg:k(1000, p3*1/5, 1000, p3*3/5, 20, p3*1/5, 20)
kMaxFreq = expseg:k(1000, p3*1/5, 1000, p3*3/5, 20000, p3*1/5, 20000)
kRndFreq = expseg:k(1, p3*1/5, 1, p3*3/5, 10000, p3*1/5, 10000)
aFreq = randomi:a(kMinFreq, kMaxFreq, kRndFreq)
aSine = poscil:a(.1, aFreq)
aOut = linen:a(aSine, .5, p3, 1)
out(aOut, aOut)
endin
instr NoiseToSine
aNoise = rand:a(.1, 2, 1)
kBw = expseg:k(10000, p3*1/5, 10000, p3*3/5, .1, p3*1/5, .1)
aFilt = reson:a(aNoise, 1000, kBw, 2)
aOut = linen:a(aFilt, .5, p3, 1)
out(aOut, aOut)
endin
</CsInstruments>
<CsScore>
i "SineToNoise" 0 10
i "NoiseToSine" 11 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
This is what the signal looks like at the start and the end of the SineToNoise process:
And this is what the signal looks like at the start and the end of the NoiseToSine process:
Only when a sound is periodic do we perceive a pitch. But the human ear is very sensitive, and it is
quite fascinating to observe how little periodicity is needed to sense some pitch.
So, in the following example, you will not hear the first (10 Hz) tone, and probably not the last (20
kHz) one, but hopefully the others (100 Hz, 1000 Hz, 10000 Hz):
EXAMPLE 15B02_LimitsOfHearing.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
prints "Playing %d Hertz!\n", p4
asig poscil .2, p4
outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 2 10
i . + . 100
i . + . 1000
i . + . 10000
i . + . 20000
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Pitches
Musicians tune their instruments, and theorists concern themselves with the rationale, describing
intervals and scales. This has happened in different cultures, for ages, long before the term fre-
quency was invented and long before it was possible to measure a certain frequency by technical
devices. What is the relationship between musical terms like octave, major third, semitone and
the frequency we have to specify for an oscillator? And why are frequencies often described as
being on a “logarithmic scale”?
Intervals in music describe the distance between two notes. When dealing with standard musical
notation it is easy to determine an interval between two adjacent notes. For example a perfect
5th is always made up of seven semitones, so seven adjacent keys on a keyboard. When dealing
with Hz values things are different. A difference of say 100 Hz does not always equate to the
same musical interval. This is because musical intervals are represented as ratios between two
frequencies. An octave for example is always defined by the ratio 2:1. That is to say every time
you double a Hz value you will jump up by a musical interval of an octave.
Consider the following. A flute can play the note A4 at 440 Hz. If the player plays A5 an octave
above it at 880 Hz the difference in Hz is 440. Now consider the piccolo, the highest pitched
instrument of the orchestra. It can play A6 with a frequency of 1760 Hz but it can also play A7 an
octave above this at 3520 Hz (2 x 1760 Hz). While the difference in Hertz between A4 and A5 on
the flute is only 440 Hz, the difference between A6 and A7 on a piccolo is 1760 Hz yet they are both
only playing notes one octave apart.
The following example shows the difference between adding a certain frequency and applying a
ratio. First, the frequencies of 100, 400 and 800 Hz all get an addition of 100 Hz. This sounds very
different, though the added frequency is the same. Second, the ratio 3/2 (perfect fifth) is applied
to the same frequencies. This spacing sounds constant, although the frequency displacement is
different each time.
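The underlying arithmetic can be sketched in Python: the distance in semitones is 12·log2 of the frequency ratio (the helper name is ours):

```python
import math

# Sketch: musical distance is a ratio, not a difference in Hz.
def semitones_between(f1, f2):
    return 12 * math.log2(f2 / f1)

print(semitones_between(440, 880))     # 12.0 -- an octave (flute A4 -> A5)
print(semitones_between(1760, 3520))   # 12.0 -- also an octave (piccolo A6 -> A7)
print(semitones_between(400, 500))     # ~3.86 -- adding 100 Hz here is less than a major third
```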
EXAMPLE 15B03_Adding_vs_ratio.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
prints "Playing %d Hertz!\n", p4
asig poscil .2, p4
aout linen asig, 0, p3, p3
outs aout, aout
endin
instr 2
prints "Adding %d Hertz to %d Hertz!\n", p5, p4
asig poscil .2, p4+p5
aout linen asig, 0, p3, p3
outs aout, aout
endin
instr 3
prints "Applying the ratio of %f (adding %d Hertz) to %d Hertz!\n", p5, p4*p5-p4, p4
asig poscil .2, p4*p5
aout linen asig, 0, p3, p3
outs aout, aout
endin
</CsInstruments>
<CsScore>
;adding a certain frequency (instr 2)
i 1 0 1 100
i 2 1 1 100 100
i 1 3 1 400
i 2 4 1 400 100
i 1 6 1 800
i 2 7 1 800 100
;applying a certain ratio (instr 3)
i 1 10 1 100
i 3 11 1 100 [3/2]
i 1 13 1 400
i 3 14 1 400 [3/2]
i 1 16 1 800
i 3 17 1 800 [3/2]
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
So what about the reference to logarithms? As stated previously, logarithms are shorthand for
exponents. 2^(1/12) = 1.059463 can also be written as log2(1.059463) = 1/12. Therefore, frequencies
representing musical scales or intervals can be described on a logarithmic scale. The linear pro-
gression of the exponents (with base 2) as 1/12, 2/12, 3/12 … represents the linear progression of
semitones.
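The same numbers can be checked in a few lines of Python:

```python
import math

semitone = 2 ** (1/12)                # frequency ratio of one equal-tempered semitone
print(round(semitone, 6))             # 1.059463
print(round(math.log2(semitone), 6))  # 0.083333, i.e. 1/12
```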
MIDI Notes
The equal-tempered scale is present on each MIDI keyboard, so the most common way to work
with pitches is to use MIDI note numbers. In MIDI speak A4 (= 440 Hz) is MIDI note 69. The
semitone below, called A flat or G sharp, is MIDI note 68, and so on. The MIDI notes 1-127 cover
the frequency range from about 9 Hz to 12544 Hz, which is pretty well suited to human hearing
(and to a usual grand piano, which corresponds to MIDI keys 21-108).
Csound can easily deal with MIDI notes and comes with functions that will convert MIDI notes to
Hertz values (mtof) and back again (ftom). The next example shows a small chromatic melody
which is given as MIDI notes in the array iMidiKeys[], and then converted to the corresponding
frequencies, related to the definition of A4 (440 Hz as default). The opcode mton returns the note
names.
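The arithmetic behind these converters can be sketched in Python (assuming the default reference A4 = 440 Hz; these helpers are illustrations, not Csound's implementation):

```python
import math

# Sketch of the arithmetic behind Csound's mtof/ftom converters.
def mtof(midi, a4=440):
    return a4 * 2 ** ((midi - 69) / 12)

def ftom(freq, a4=440):
    return 69 + 12 * math.log2(freq / a4)

print(mtof(69))    # 440.0 -- A4
print(ftom(880))   # 81.0 -- A5
print(mtof(60))    # ~261.626 -- middle C
```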
2 2^(1/12) is the same as the twelfth root of 2, thus the number which yields 2 if multiplied by itself 12 times.
3 Caution: like many standards there is occasional disagreement about the mapping between frequency and octave number. You may occasionally encounter A 440 Hz being described as A3.

EXAMPLE 15B04_Midi_to_frequency.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
A4 = 457
instr LetPlay
 iMidiKeys[] fillarray 69, 69, 69, 68, 67, 66, 65, 64 ;the melody as MIDI notes
 indx = 0
 while indx < lenarray(iMidiKeys) do
  schedule("Play", indx/2, 1, iMidiKeys[indx])
  indx += 1
 od
endin
instr Play
iMidiKey = p4
iFreq mtof iMidiKey
S_name mton iMidiKey
printf_i "Midi Note = %d, Frequency = %f, Note name = %s\n", 1, iMidiKey, iFreq, S_name
aPluck pluck .2, iFreq, iFreq, 0, 1
aOut linen aPluck, 0, p3, p3/2
aL, aR pan2 aOut, (iMidiKey-61)/10
out aL, aR
endin
</CsInstruments>
<CsScore>
i "LetPlay" 0 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
As A4 is set in the header to 457 Hz (overwriting the default 440 Hz), this is the printout:
Midi Note = 69, Frequency = 457.000000, Note name = 4A
Midi Note = 69, Frequency = 457.000000, Note name = 4A
Midi Note = 69, Frequency = 457.000000, Note name = 4A
Midi Note = 68, Frequency = 431.350561, Note name = 4G#
Midi Note = 67, Frequency = 407.140714, Note name = 4G
Midi Note = 66, Frequency = 384.289662, Note name = 4F#
Midi Note = 65, Frequency = 362.721140, Note name = 4F
Midi Note = 64, Frequency = 342.363167, Note name = 4E
In Csound's pch notation, the integer part of the floating point number denotes the octave, and the
fractional part counts the semitones: C4, the "middle c" on a piano, has the number 8.00. Semitones
upwards are then 8.01, 8.02 and so on, reaching A4 as 8.09. B4 is 8.11 and C5 is 9.00.
The oct notation also uses floating point numbers. The integer part has the same meaning as in
the pch notation. The fractional part divides one octave in acoustically equal steps. For 8.00 as
C4 and 9.00 as C5, 8.5 denotes a pitch which is acoustically in the middle between C4 and C5,
which means that the proportion between this frequency and the C4 frequency is the same as the
proportion between the C5 frequency and this tone’s frequency. Csound calculates this as:
instr 1
iC4 = cpsoct(8)
iC5 = cpsoct(9)
iNew = cpsoct(8.5)
 prints "C4 = %.3f Hz, C5 = %.3f Hz, oct(8.5) = %.3f Hz.\n", iC4, iC5, iNew
 prints "Proportion New:C4 = %.3f, C5:New = %.3f\n", iNew/iC4, iC5/iNew
endin
schedule(1,0,0)
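The same proportions can be checked outside Csound; this Python sketch mirrors cpsoct's mapping (assuming the default A4 = 440 Hz tuning):

```python
# Sketch of cpsoct: linear oct values map exponentially to frequency.
# oct 8.00 is C4, and each whole number is one octave; 8.75 is A4 = 440 Hz.
def cpsoct(oct):
    return 440 * 2 ** (oct - 8.75)

c4, mid, c5 = cpsoct(8), cpsoct(8.5), cpsoct(9)
print(round(mid / c4, 3), round(c5 / mid, 3))   # both proportions are 1.414
```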
On a keyboard, this pitch which divides the octave in two acoustically equal halves, is F#4. It can
be notated in pch notation as 8.06, or in MIDI notation as key number 66. So why was oct notation
added? The reason is that this notation makes it very simple to introduce, for instance, a division
of the octave into 10 equal steps: 8.1, 8.2, …, or into 8 equal steps: 8.125, 8.25, 8.375, …
The following code shows that things like these can also be achieved with a bit of math, but for
simple cases it is quite convenient to use the oct notation. A scale consisting of ten equal steps
based on A3 (= 220 Hz) is constructed.
instr 1
puts "Calculation with cpsoct():", 1
iOctDiff = 0
while iOctDiff < 1 do
prints "oct(%.2f)=%.3f ", 7.75+iOctDiff, cpsoct(7.75+iOctDiff)
iOctDiff += 1/10
od
puts "",1
puts "Calculation with math:", 1
iExp = 0
while iExp < 1 do
prints "pow(2,%.1f)=%.3f ", iExp, pow(2,iExp) * 220
iExp += 1/10
od
puts "",1
endin
schedule(1,0,0)
Cent
One semitone in the equal-tempered tuning system can be divided into 100 Cent. This is a common
way to denote small or "microtonal" deviations. It can be used in Csound's MIDI notation as
fractional part: MIDI note number 69.5 is a quarter tone (50 Cent) above A4; 68.75 is an eighth
tone (25 Cent) below A4. In the pch notation we would write 8.095 for the first and 8.0875 for the
second pitch.
All musical intervals can be described as ratios or multipliers. The ratio for the perfect fifth is 3:2, or
1.5 when used as multiplier. One Cent is also a multiplier. As one octave consists of 12 semitones,
and each semitone consists of 100 Cent, one octave consists of 1200 Cent. So one Cent, described
as multiplier, is 2^(1/1200) (1.000577…), and 50 Cent is 2^(50/1200) (1.0293022…). To return this
multiplier, Csound offers the cent converter. So cent(50) returns the number by which we must
multiply a certain frequency to get a quarter tone higher, and cent(-25) returns the multiplier for
calculating an eighth tone lower.
instr 1
prints "A quarter tone above A4 (440 Hz):\n"
prints " 1. as mtof:i(69.5) = %f\n", mtof:i(69.5)
prints " 2. as cpspch(8.095) = %f\n", cpspch(8.095)
prints " 3. as 2^(50/1200)*440 = %f\n", 2^(50/1200)*440
prints " 4. as cent(50)*440 = %f\n", cent(50)*440
endin
schedule(1,0,0)
Tuning Systems
The equal-tempered tuning system which can be found on each MIDI keyboard is not the only
tuning system in existence. For many musical contexts it is not appropriate. In European history
there were many different systems, for instance the Pythagorean and the Meantone tuning. Each
of the countless traditional music cultures all over the world, for instance Arabic Maqam, Iranian
Dastgah, Indian Raga, has its own tuning system. And in contemporary music we also find
numerous different tuning systems.
Audio programming languages like Csound, which can synthesize sounds with any frequency, are
particularly suited for this approach. It is even simple to "tune" a MIDI keyboard in quarter tones
or to any historical tuning using Csound. The following example shows the fundamentals. It plays
the five notes C D E F G (= MIDI 60 62 64 65 67) first in Pythagorean tuning, then in Meantone,
then as quarter tones, then as partials 1-5.
EXAMPLE 15B05_Tuning_Systems.csd
<CsoundSynthesizer>
<CsOptions>
-o dac -m128
</CsOptions>
<CsInstruments>
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 32
instr Pythagorean
giScale[] fillarray 1, 9/8, 81/64, 4/3, 3/2
schedule("LetPlay",0,0)
puts "Pythagorean scale",1
endin
instr Meantone
giScale[] fillarray 1, 10/9, 5/4, 4/3, 3/2
schedule("LetPlay",0,0)
puts "Meantone scale",1
endin
instr Quatertone
giScale[] fillarray 1, 2^(1/24), 2^(2/24), 2^(3/24), 2^(4/24)
schedule("LetPlay",0,0)
puts "Quatertone scale",1
endin
instr Partials
giScale[] fillarray 1, 2, 3, 4, 5
schedule("LetPlay",0,0)
puts "Partials scale",1
endin
instr LetPlay
indx = 0
while indx < 5 do
schedule("Play",indx,2,giScale[indx])
indx += 1
od
endin
instr Play
iFreq = mtof:i(60) * p4
print iFreq
aSnd vco2 .2, iFreq, 8
aOut linen aSnd, .1, p3, p3/2
out aOut, aOut
endin
</CsInstruments>
<CsScore>
i "Pythagorean" 0 10
i "Meantone" 10 10
i "Quatertone" 20 10
i "Partials" 30 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Frequently Used Formulas
Given:
- Frequency f
- Proportion p

Searched: the new frequency f_new

Solution: f_new = f · p

Example: Which frequency is in 5/4 proportion to 440 Hz? → f_new = 440 Hz · 5/4 = 550 Hz

Given:
- Frequency f
- Cent difference c

Searched: the new frequency f_new

Solution: f_new = f · 2^(c/1200)

Example: Which frequency is 50 Cent below 440 Hz? → f_new = 440 · 2^(-50/1200) = 427.474 Hz

Given:
- Frequency_1 f1
- Frequency_2 f2

Searched: the Cent difference c

Solution: c = log2(f2/f1) · 1200

Example: What is the Cent difference between 550 Hz and 440 Hz? → c = log2(550/440) · 1200 = 386.314 Cent
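The three formulas can be sketched in Python (the helper names are ours):

```python
import math

# Sketch of the three frequently used formulas.
def freq_from_proportion(f, p):
    return f * p

def freq_from_cent(f, c):
    return f * 2 ** (c / 1200)

def cent_between(f1, f2):
    return math.log2(f2 / f1) * 1200

print(freq_from_proportion(440, 5/4))       # 550.0
print(round(freq_from_cent(440, -50), 3))   # 427.474
print(round(cent_between(440, 550), 3))     # 386.314
```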
15 C. INTENSITIES
As musicians we are dealing with volume, loudness, sound intensity. (In classical western music
called dynamics, designated as forte, piano and its variants.) In digital audio, however, we are
dealing with amplitudes. We are asked, for instance, to set the amplitude of an oscillator. Or we
see this message at the end of a Csound performance in the console telling us the “overall amps”
(= amplitudes):
Amplitudes are related to sound intensities, but in a more complicated way than we may think.
This chapter starts with some essentials about measuring intensities and the decibel (dB) scale.
It continues with rms measurement and ends with the Fletcher-Munson curves.
Real World Intensities and Amplitudes

The range of human hearing spans from about 10^-12 W/m² at the threshold of hearing to 10^0
(= 1) W/m² at the threshold of pain. For ordering this immense range, and to facilitate the
measurement of one sound intensity based upon its ratio with another, a logarithmic scale is used.
The unit Bel describes the relation of one intensity I to a reference intensity I0 as follows:
log10 (I / I0)
If, for example, the ratio I/I0 is 10, this is 1 Bel. If the ratio is 100, this is 2 Bel.
For real world sounds, it makes sense to set the reference value I0 to the threshold of hearing
which has been fixed as 10^-12 W/m² at 1000 Hertz. So the range of human hearing covers about
12 Bel. Usually 1 Bel is divided into 10 decibel, so the common formula for measuring a sound
intensity is:

10 · log10 (I / I0)

Sound Intensity Level (SIL) in decibel (dB), with I0 = 10^-12 W/m²
The sound intensity is proportional to the square of the sound pressure P:

I ∝ P^2
Let us take an example to see what this means. The sound pressure at the threshold of hearing
can be fixed at 2·10^-5 Pa. This value is the reference value of the Sound Pressure Level (SPL). If we
now have a value of 2·10^-4 Pa, the corresponding sound intensity relationship can be calculated as:

(2·10^-4 / 2·10^-5)^2 = 10^2 = 100
Therefore a factor of 10 in a pressure relationship yields a factor of 100 in the intensity relationship.
In general, the dB scale for the pressure 𝑃 related to the pressure 𝑃0 is:
10 · log10 (P/P0)^2 = 2 · 10 · log10 (P/P0) = 20 · log10 (P/P0)
Similarly, the sound intensity is proportional to the square of the amplitude A of a digital signal:

I ∝ A^2
This yields the same transformation as described above for the sound pressure; so finally the
relation in Decibel of any amplitude 𝐴 to a reference amplitude 𝐴0 is:
20 · log10 (A / A0)
If we drive an oscillator with an amplitude of 1, and another oscillator with an amplitude of 0.5 and
we want to know the difference in dB, this is the calculation:
20 · log10 (1 / 0.5) = 20 · log10 2 = 20 · 0.30103 = 6.0206 dB
The most useful thing to bear in mind is that when we double an amplitude this will provide a
change of +6 dB, or when we halve an amplitude this will provide a change of -6 dB.
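The arithmetic can be sketched in Python (the helper name is ours):

```python
import math

# Sketch: difference in decibels between two amplitudes.
def amp_diff_db(a, a0):
    return 20 * math.log10(a / a0)

print(round(amp_diff_db(1, 0.5), 4))   # 6.0206 -- doubling is about +6 dB
print(round(amp_diff_db(0.5, 1), 4))   # -6.0206 -- halving is about -6 dB
```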
What is 0 dB?
As described in the last section, any dB scale - for intensities, pressures or amplitudes - is just
a way to describe a relationship. To have any sort of quantitative measurement you will need to
know the reference value referred to as 0 dB. For real world sounds, it makes sense to set this level
to the threshold of hearing. This is done, as we saw, by setting the SIL to 10-12 W/m2 , and the SPL
to 2·10-5 Pa.
When working with digital sound within a computer, this method for defining 0 dB will not make
any sense. The loudness of the sound produced in the computer will ultimately depend on the
amplification and the speakers, and the amplitude level set in your audio editor or in Csound will
only apply an additional, and not an absolute, sound level control. Nevertheless, there is a rational
reference level for the amplitudes. In a digital system, there is a strict limit for the maximum num-
ber you can store as amplitude. This maximum possible level is normally used as the reference
point for 0 dB.
Each program connects this maximum possible amplitude with a number. Usually it is 1 which is a
good choice, because you know that everything above 1 is clipping, and you have a handy relation
for lower values. But actually this value is nothing but a setting, and in Csound you are free to set it
to any value you like via the 0dbfs opcode. Usually you should use this statement in the orchestra
header:
0dbfs = 1
This means: “Set the level for zero dB as full scale to 1 as reference value.” Note that for historical
reasons the default value in Csound is not 1 but 32768. So you must have this 0dbfs=1 statement in
your header if you want to use the amplitude convention used by most modern audio programming
environments.
dB Scale Versus Linear Amplitude

EXAMPLE 15C01_db_vs_linear.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ;amplitude rises linearly
 kAmp line 0, p3, 1
 aSig poscil kAmp, 1000
 outs aSig, aSig
endin

instr 2 ;amplitude rises linearly in dB
 kDb line -80, p3, 0
 aSig poscil ampdb(kDb), 1000
 outs aSig, aSig
endin

</CsInstruments>
<CsScore>
i 1 0 10
i 2 11 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The first note, which employs a linear rise in amplitude, is perceived as rising quickly in intensity
with the rate of increase slowing quickly. The second note, which employs a linear rise in decibels,
is perceived as a more constant rise in intensity.
RMS Measurement
Sound intensity depends on many factors. One of the most important is the effective mean of the
amplitudes in a certain time span. This is called the Root Mean Square (RMS) value. To calculate
it, you (1) square the amplitudes of N samples, then (2) divide the result by N to get the mean, and
finally (3) take the square root.
Let us consider a simple example and then look at how to derive rms values within Csound.
Assuming we have one cycle of a sine wave which consists of 16 samples, the resulting RMS value
is √0.5 = 0.707.
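The three steps can be sketched in Python, here for 16 samples of one sine cycle:

```python
import math

# Sketch of the three RMS steps: square, mean, square root.
def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# 16 samples of one full sine cycle, amplitude 1
sine = [math.sin(2 * math.pi * n / 16) for n in range(16)]
print(round(rms(sine), 3))   # 0.707, i.e. sqrt(0.5)
```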
The rms opcode in Csound calculates the RMS power in a certain time span, and smoothes the
values in time according to the ihp parameter: the higher this value is (the default is 10 Hz), the
quicker this measurement will respond to changes, and vice versa. This opcode can be used to
implement a self-regulating system, in which the rms opcode prevents the system from exploding.
Each time the rms value exceeds a certain value, the amount of feedback is reduced. This is an
example1 :
EXAMPLE 15C02_rms_feedback_system.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
a3 init 0
kamp linseg 0, 1.5, 0.2, 1.5, 0 ;envelope for initial input
asnd poscil kamp, 440, giSine ;initial input
if p4 == 1 then ;choose between two sines ...
adel1 poscil 0.0523, 0.023, giSine
adel2 poscil 0.073, 0.023, giSine,.5
else ;... or a random movement
;for the delay lines
adel1 randi 0.05, 0.1, 2
adel2 randi 0.08, 0.2, 2
endif
a0 delayr 1 ;delay line of 1 second
a1 deltapi adel1 + 0.1 ;first reading
a2 deltapi adel2 + 0.1 ;second reading
krms rms a3 ;rms measurement
delayw asnd + exp(-krms) * a3 ;feedback depending on rms
a3 reson -(a1+a2), 3000, 7000, 2 ;calculate a3
aout linen a1/3, 1, p3, 1 ;apply fade in and fade out
outs aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 60 1 ;two sine movements of delay with feedback
i 1 61 . 2 ;two random movements of delay with feedback
</CsScore>
</CsoundSynthesizer>
;example by Martin Neukom, adapted by Joachim Heintz
Fletcher-Munson Curves
The range of human hearing is roughly from 20 to 20000 Hz, but within this range, the hearing is not
equally sensitive to intensity. The most sensitive region is around 3000 Hz. If a sound is operating
in the upper or lower limits of this range, it will need greater intensity in order to be perceived as
equally loud.
These curves of equal loudness are mostly called Fletcher-Munson Curves after the paper of
H. Fletcher and W. A. Munson from 1933.
Try the following test. During the first 5 seconds you will hear a tone of 3000 Hz. Adjust the level of
your amplifier to the lowest possible level at which you still can hear the tone. Next you hear a tone
whose frequency starts at 20 Hertz and ends at 20000 Hertz, over 20 seconds. Try to move the
fader or knob of your amplifier exactly so that you can still just hear the tone, as softly as possible.
The movement of your fader should roughly follow the lowest Fletcher-Munson curve: starting
relatively high, going down until 3000 Hertz, and then up again. Of course, the effectiveness of
this test will also depend upon the quality of your speaker hardware.
If your speakers do not provide adequate low frequency response, you will not hear anything in the
bass region.
EXAMPLE 15C03_FletcherMunson.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr 1
kfreq expseg p4, p3, p5
printk 1, kfreq ;prints the frequencies once a second
asin poscil .2, kfreq
aout linen asin, .01, p3, .01
outs aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 5 1000 1000
i 1 6 20 20 20000
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
It is very important to bear in mind when designing instruments that the perceived loudness of a
sound will depend upon its frequency content. You must remain aware that projecting a 30 Hz
sine at a certain amplitude will be perceived differently to a 3000 Hz sine at the same amplitude;
the latter will sound much louder.
15 D. RANDOM
This chapter is in three parts. Part I provides a general introduction to the concepts behind ran-
dom numbers and how to work with them in Csound. Part II focusses on a more mathematical
approach. Part III introduces a number of opcodes for generating random numbers, functions and
distributions and demonstrates their use in musical examples.
I. GENERAL INTRODUCTION
Random is Different
The term random derives from the idea of a horse that is running so fast it becomes out of control
or beyond predictability.1 Yet there are different ways in which to run fast and to be out of control;
therefore there are different types of randomness.
We can divide types of randomness into two classes. The first contains random events that are
independent of previous events. The most common example for this is throwing a die. Even if
you have just thrown three ones in a row, when thrown again, a one has the same probability as
before (and as any other number). The second class involves random events
which depend in some way upon previous numbers or states. Examples here are Markov chains
and random walks.
1 http://www.etymonline.com/index.php?term=random
The use of randomness in electronic music is widespread. In this chapter, we shall try to explain
how the different random horses are moving, and how you can create and modify them on your
own. Moreover, there are many pre-built random opcodes in Csound which can be used out of
the box (see the overview in the Csound Manual and the Opcode Guide). The final section of this
chapter introduces some musically interesting applications of them.
The pseudo-random generator takes one number as input, and generates another number as out-
put. This output is then the input for the next generation. For a huge amount of numbers, they
look as if they are randomly distributed, although everything depends on the first input: the seed.
For one given seed, the next values can be predicted.
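A minimal sketch of such a generator in Python, for illustration (the constants are chosen for the example and are not Csound's actual generator):

```python
# Sketch: a minimal linear congruential generator, the classic pseudo-random
# scheme -- each output is the input of the next step.
def lcg(seed, n, m=2**16, a=75, c=74):
    values, state = [], seed
    for _ in range(n):
        state = (a * state + c) % m   # the next "random" number
        values.append(state)
    return values

print(lcg(1, 3))   # [149, 11249, 57317] -- the same seed always yields the same series
```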
EXAMPLE 15D01_different_seed.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
instr generate
;get seed: 0 = seeding from system clock
; otherwise = fixed seed
seed p4
;generate four notes to be played from subinstrument
iNoteCount = 0
while iNoteCount < 4 do
iFreq random 400, 800
schedule "play", iNoteCount, 2, iFreq
iNoteCount += 1 ;increase note count
od
endin
instr play
iFreq = p4
print iFreq
aImp mpulse .5, p3
aMode mode aImp, iFreq, 1000
aEnv linen aMode, 0.01, p3, p3-0.01
outs aEnv, aEnv
endin
</CsInstruments>
<CsScore>
;repeat three times with fixed seed
r 3
i "generate" 0 2 1
;repeat three times with seed from the system clock
r 3
i "generate" 0 1 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Note that a pseudo-random generator will repeat its series of numbers after as many steps as
are given by the size of the generator. If a 16-bit number is generated, the series will be repeated
after 65536 steps. If you listen carefully to the following example, you will hear a repetition in the
structure of the white noise (which is the result of uniformly distributed amplitudes) after about
1.5 seconds in the first note.2 In the second note, there is no perceivable repetition as the random
generator now works with a 31-bit number.
EXAMPLE 15D02_white_noises.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
2 Because the sample rate is 44100 samples per second, a repetition after 65536 samples will lead to a repetition after
65536/44100 = 1.486 seconds.
instr white_noise
iBit = p4 ;0 = 16 bit, 1 = 31 bit
;input of rand: amplitude, fixed seed (0.5), bit size
aNoise rand .1, 0.5, iBit
outs aNoise, aNoise
endin
</CsInstruments>
<CsScore>
i "white_noise" 0 10 0
i "white_noise" 11 10 1
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
1. The way to set the seed differs from opcode to opcode. There are several opcodes, such as rand featured above, which offer the choice of setting a seed as an input parameter. For others, such as the frequently used random family, the seed can only be set globally via the seed statement. This is usually done in the header, so a typical statement would be:
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0 ;seeding from current time
2. Random number generation in Csound can be done at any rate. The type of the output vari-
able tells you whether you are generating random values at i-, k- or a-rate. Many random
opcodes can work at all these rates, for instance random:
ires random imin, imax
kres random kmin, kmax
ares random kmin, kmax
In the first case, a random value is generated only once, when the instrument is initialised, and stored in the variable ires. In the second case, a random value is generated at each k-cycle and stored in kres. In the third case, in each k-cycle as many random values are generated as the audio vector has samples, and stored in the variable ares. Have a look at example 03A16_Random_at_ika.csd to see this at work. Chapter 03A explains the background of these different rates in depth, and how to work with them.
Other Distributions
The uniform distribution is the one every computer can output via its pseudo-random generator. But there are many situations in which you will not want a uniformly distributed random variable, but some other shape. Some of these shapes are quite common, but you can also build your own shapes quite easily in Csound. The next examples demonstrate how to do this. They are based on the chapter in Dodge/Jerse3 which also served as a model for many random number generator opcodes in Csound.4
3 Charles Dodge and Thomas A. Jerse, Computer Music, New York 1985, Chapter 8.1, in particular pages 269-278.
Linear
A linear distribution means that either lower or higher values in a given range are more likely.
To get this behaviour, two uniform random numbers are generated, and the lower one is taken for the first shape. If the second shape, with precedence of higher values, is needed, the higher of the two generated numbers is taken. The next example implements these random generators as User Defined Opcodes. First we hear a uniform distribution, then a linear distribution with precedence of lower pitches (but longer durations), and finally a linear distribution with precedence of higher pitches (but shorter durations).
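The min/max trick is easy to check numerically. Here is a Python sketch whose function names mirror the UDOs of the next example (illustration only):

```python
import random

def linrnd_low(rng, lo, hi):
    # the lower of two uniform values: lower values are more likely
    return min(rng.uniform(lo, hi), rng.uniform(lo, hi))

def linrnd_high(rng, lo, hi):
    # the higher of two uniform values: higher values are more likely
    return max(rng.uniform(lo, hi), rng.uniform(lo, hi))

rng = random.Random(1)
low_mean = sum(linrnd_low(rng, 0, 1) for _ in range(20000)) / 20000
high_mean = sum(linrnd_high(rng, 0, 1) for _ in range(20000)) / 20000
# expected means on [0, 1]: 1/3 for the lower, 2/3 for the higher
```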
EXAMPLE 15D03_linrand.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
4 Most of them have been written by Paris Smaragdis in 1995: betarand, bexprnd, cauchy, exprand, gauss, linrand, pcauchy, poisson, trirand, unirand and weibull.
seed 0
opcode linrnd_low, i, ii
;linear random with precedence of lower values
iMin, iMax xin
;generate two random values with the random opcode
iOne random iMin, iMax
iTwo random iMin, iMax
;compare and get the lower one
iRnd = iOne < iTwo ? iOne : iTwo
xout iRnd
endop
opcode linrnd_high, i, ii
;linear random with precedence of higher values
iMin, iMax xin
;generate two random values with the random opcode
iOne random iMin, iMax
iTwo random iMin, iMax
;compare and get the higher one
iRnd = iOne > iTwo ? iOne : iTwo
xout iRnd
endop
instr notes_uniform
prints "... instr notes_uniform playing:\n"
prints "EQUAL LIKELINESS OF ALL PITCHES AND DURATIONS\n"
;how many notes to be played
iHowMany = p4
;trigger as many instances of instr play as needed
iThisNote = 0
iStart = 0
until iThisNote == iHowMany do
iMidiPch random 36, 84 ;midi note
iDur random .5, 1 ;duration
event_i "i", "play", iStart, iDur, int(iMidiPch)
iStart += iDur ;increase start
iThisNote += 1 ;increase counter
enduntil
;reset the duration of this instr to make all events happen
p3 = iStart + 2
;trigger next instrument two seconds after the last note
event_i "i", "notes_linrnd_low", p3, 1, iHowMany
endin
instr notes_linrnd_low
prints "... instr notes_linrnd_low playing:\n"
prints "LOWER NOTES AND LONGER DURATIONS PREFERRED\n"
iHowMany = p4
iThisNote = 0
iStart = 0
until iThisNote == iHowMany do
iMidiPch linrnd_low 36, 84 ;lower pitches preferred
iDur linrnd_high .5, 1 ;longer durations preferred
event_i "i", "play", iStart, iDur, int(iMidiPch)
iStart += iDur
iThisNote += 1
enduntil
;reset the duration of this instr to make all events happen
p3 = iStart + 2
;trigger next instrument two seconds after the last note
event_i "i", "notes_linrnd_high", p3, 1, iHowMany
endin
instr notes_linrnd_high
prints "... instr notes_linrnd_high playing:\n"
prints "HIGHER NOTES AND SHORTER DURATIONS PREFERRED\n"
iHowMany = p4
iThisNote = 0
iStart = 0
until iThisNote == iHowMany do
iMidiPch linrnd_high 36, 84 ;higher pitches preferred
iDur linrnd_low .3, 1.2 ;shorter durations preferred
event_i "i", "play", iStart, iDur, int(iMidiPch)
iStart += iDur
iThisNote += 1
enduntil
;reset the duration of this instr to make all events happen
p3 = iStart + 2
;call instr to exit csound
event_i "i", "exit", p3+1, 1
endin
instr play
;increase duration in random range
iDur random p3, p3*1.5
p3 = iDur
;get midi note and convert to frequency
iMidiNote = p4
iFreq cpsmidinn iMidiNote
;generate note with karplus-strong algorithm
aPluck pluck .2, iFreq, iFreq, 0, 1
aPluck linen aPluck, 0, p3, p3
;filter
aFilter mode aPluck, iFreq, .1
;mix aPluck and aFilter according to MidiNote
;(high notes will be filtered more)
aMix ntrpol aPluck, aFilter, iMidiNote, 36, 84
;panning also according to MidiNote
;(low = left, high = right)
iPan = (iMidiNote-36) / 48
aL, aR pan2 aMix, iPan
outs aL, aR
endin
instr exit
exitnow
endin
</CsInstruments>
<CsScore>
i "notes_uniform" 0 1 23 ;set number of notes per instr here
;instruments linrnd_low and linrnd_high are triggered automatically
e 99999 ;allow a long performance (the exit instrument will stop csound)
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Triangular
In a triangular distribution the values in the middle of the given range are more likely than those at the borders, and the probability transition between the middle and the extrema is linear.
The algorithm for getting this distribution is very simple as well: generate two uniform random numbers and take their mean. The next example shows the difference between the uniform and triangular distributions in the same environment as the previous example.
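Again this is easy to verify numerically. The Python sketch below (an illustration) shows that the mean of two uniform numbers centres on the middle of the range and spreads less than a uniform draw:

```python
import random

def trirnd(rng, lo, hi):
    # mean of two uniform values: middle values are more likely
    return (rng.uniform(lo, hi) + rng.uniform(lo, hi)) / 2

rng = random.Random(2)
uni = [rng.uniform(0, 1) for _ in range(20000)]
tri = [trirnd(rng, 0, 1) for _ in range(20000)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# both centre on 0.5, but the triangular samples spread less:
# variance 1/12 for uniform versus 1/24 for triangular
```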
EXAMPLE 15D04_trirand.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
instr notes_uniform
prints "... instr notes_uniform playing:\n"
prints "EQUAL LIKELINESS OF ALL PITCHES AND DURATIONS\n"
;how many notes to be played
iHowMany = p4
;trigger as many instances of instr play as needed
iThisNote = 0
iStart = 0
until iThisNote == iHowMany do
iMidiPch random 36, 84 ;midi note
iDur random .25, 1.75 ;duration
event_i "i", "play", iStart, iDur, int(iMidiPch)
iStart += iDur
iThisNote += 1
enduntil
;reset the duration of this instr to make all events happen
p3 = iStart + 2
;trigger next instrument two seconds after the last note
event_i "i", "notes_trirnd", p3, 1, iHowMany
endin
instr notes_trirnd
prints "... instr notes_trirnd playing:\n"
prints "MEDIUM NOTES AND DURATIONS PREFERRED\n"
iHowMany = p4
iThisNote = 0
iStart = 0
until iThisNote == iHowMany do
iMidiPch trirnd 36, 84 ;medium pitches preferred
iDur trirnd .25, 1.75 ;medium durations preferred
event_i "i", "play", iStart, iDur, int(iMidiPch)
iStart += iDur
iThisNote += 1
enduntil
;reset the duration of this instr to make all events happen
p3 = iStart + 2
;call instr to exit csound
event_i "i", "exit", p3+1, 1
endin
instr play
;increase duration in random range
iDur random p3, p3*1.5
p3 = iDur
;get midi note and convert to frequency
iMidiNote = p4
iFreq cpsmidinn iMidiNote
;generate note with karplus-strong algorithm
aPluck pluck .2, iFreq, iFreq, 0, 1
aPluck linen aPluck, 0, p3, p3
;filter
aFilter mode aPluck, iFreq, .1
;mix aPluck and aFilter according to MidiNote
;(high notes will be filtered more)
aMix ntrpol aPluck, aFilter, iMidiNote, 36, 84
;panning also according to MidiNote
;(low = left, high = right)
iPan = (iMidiNote-36) / 48
aL, aR pan2 aMix, iPan
outs aL, aR
endin
instr exit
exitnow
endin
</CsInstruments>
<CsScore>
i "notes_uniform" 0 1 23 ;set number of notes per instr here
;instr trirnd will be triggered automatically
e 99999 ;allow a long performance (the exit instrument will stop csound)
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Having written this with some very simple UDOs, it is easy to emphasise the probability peaks of the distributions by generating more than two random numbers. If you generate three numbers and choose the smallest of them, you will get, in total, many more numbers near the minimum for the linear distribution. If you generate three random numbers and take their mean, you will end up with more numbers near the middle for the triangular distribution.
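This generalisation can be sketched in Python as well (illustrative code, not the UDOs of the next example): take the minimum or the mean of n uniform numbers and watch the distribution sharpen as n grows:

```python
import random

def lin_low(rng, lo, hi, units):
    # smallest of `units` uniform values: more units push the
    # results harder towards the minimum
    return min(rng.uniform(lo, hi) for _ in range(units))

def tri(rng, lo, hi, units):
    # mean of `units` uniform values: more units sharpen the peak
    # in the middle of the range
    return sum(rng.uniform(lo, hi) for _ in range(units)) / units

rng = random.Random(3)
mean2 = sum(lin_low(rng, 0, 1, 2) for _ in range(20000)) / 20000
mean3 = sum(lin_low(rng, 0, 1, 3) for _ in range(20000)) / 20000
# expected means: 1/3 with two units, 1/4 with three units
```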
If we want to write UDOs with a flexible number of sub-generated numbers, we have to write the code in a slightly different way. Instead of having one line of code for each random generator, we use a loop which calls the generator as many times as we wish to have units. A variable stores the result of the accumulation. Re-writing the above code for the UDO trirnd leads to this formulation:
opcode trirnd, i, ii
iMin, iMax xin
;set a counter and a maximum count
iCount = 0
iMaxCount = 2
;set the accumulator to zero as initial value
iAccum = 0
;perform loop and accumulate
until iCount == iMaxCount do
iUniRnd random iMin, iMax
iAccum += iUniRnd
iCount += 1
enduntil
;get the mean and output
iRnd = iAccum / 2
xout iRnd
endop
To make this completely flexible, you only have to turn iMaxCount into an input argument. The code for the linear distribution UDOs is quite similar. The next example shows these steps:
1. Uniform distribution.
2. Linear distribution with the precedence of lower pitches and longer durations, generated with
two units.
3. The same but with four units.
4. Linear distribution with the precedence of higher pitches and shorter durations, generated
with two units.
5. The same but with four units.
6. Triangular distribution with the precedence of both medium pitches and durations, generated
with two units.
7. The same but with six units.
Rather than using different instruments for the different distributions, the next example combines
all possibilities in one single instrument. Inside the loop which generates as many notes as desired
by the iHowMany argument, an if-branch calculates the pitch and duration of one note depending
on the distribution type and the number of sub-units used. The whole sequence (which type first,
which next, etc) is stored in the global array giSequence. Each instance of instrument notes in-
creases the pointer giSeqIndx, so that for the next run the next element in the array is being read.
If the pointer has reached the end of the array, the instrument which exits Csound is called instead
of a new instance of notes.
EXAMPLE 15D05_more_lin_tri_units.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
;****UDO DEFINITIONS****
opcode linrnd_low, i, iii
;linear random with precedence of lower values
iMin, iMax, iMaxCount xin
;set counter and initial (absurd) result
iCount = 0
iRnd = iMax
;loop and reset iRnd
until iCount == iMaxCount do
iUniRnd random iMin, iMax
iRnd = iUniRnd < iRnd ? iUniRnd : iRnd
iCount += 1
enduntil
xout iRnd
endop
opcode linrnd_high, i, iii
;linear random with precedence of higher values
iMin, iMax, iMaxCount xin
;set counter and initial (absurd) result
iCount = 0
iRnd = iMin
;loop and reset iRnd
until iCount == iMaxCount do
iUniRnd random iMin, iMax
iRnd = iUniRnd > iRnd ? iUniRnd : iRnd
iCount += 1
enduntil
xout iRnd
endop
opcode trirnd, i, iii
;triangular random: mean of iMaxCount uniform values
iMin, iMax, iMaxCount xin
;set a counter and an accumulator
iCount = 0
iAccum = 0
;perform loop and accumulate
until iCount == iMaxCount do
iUniRnd random iMin, iMax
iAccum += iUniRnd
iCount += 1
enduntil
;get the mean and output
iRnd = iAccum / iMaxCount
xout iRnd
endop
instr notes
;how many notes to be played
iHowMany = p4
;by which distribution with how many units
iWhich = giSequence[giSeqIndx]
iDistrib = int(iWhich)
iUnits = round(frac(iWhich) * 10)
;set min and max duration
iMinDur = .1
iMaxDur = 2
;set min and max pitch
iMinPch = 36
iMaxPch = 84
iThisNote += 1
;avoid continuous printing
iPrint = 0
enduntil
instr exit
exitnow
endin
</CsInstruments>
<CsScore>
i "notes" 0 1 23 ;set number of notes per instr here
e 99999 ;allow a long performance (the exit instrument will stop csound)
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
With this method we can build probability distributions which are very similar to exponential or
gaussian distributions.5 Their shape can easily be formed by the number of sub-units used.
5 According to Dodge/Jerse, the usual algorithms for the exponential and gaussian distributions are: Exponential: generate a uniformly distributed number between 0 and 1 and take its natural logarithm. Gauss: take the mean of several uniformly distributed numbers and scale them by the standard deviation.
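Following the footnote's recipes, both shapes can be sketched in Python (an illustration only; Csound's exprand and gauss opcodes implement their own algorithms, and the standard exponential formulation negates the logarithm to get positive values):

```python
import math
import random

def exprnd(rng):
    # exponential: the negated natural logarithm of a uniform
    # number in (0, 1]; the mean of this distribution is 1
    return -math.log(1.0 - rng.random())

def gauss_like(rng, units=12):
    # quasi-gaussian: mean of `units` uniform numbers, shifted and
    # scaled to zero mean and unit variance (central limit theorem)
    total = sum(rng.random() for _ in range(units))
    return (total - units / 2) / math.sqrt(units / 12)

rng = random.Random(4)
exp_mean = sum(exprnd(rng) for _ in range(20000)) / 20000
gauss_mean = sum(gauss_like(rng) for _ in range(20000)) / 20000
```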
Scalings
Random is a complex and sensitive context. There are so many ways to let the horse go, run, or dance – the conditions you set for this way of moving are much more important than the fact that a single move is not predictable. What are the conditions of this randomness?
- Which Way. This is what has already been described: random with or without history, which probability distribution, etc.
- Which Range. This is a decision which comes from the composer/programmer. In the example above I have chosen pitches from MIDI note 36 to 84 (C2 to C6), and durations between 0.1 and 2 seconds. Imagine how it would have sounded with pitches from 60 to 67, and durations from 0.9 to 1.1 seconds, or from 0.1 to 0.2 seconds. There is no range which is “correct”; everything depends on the musical idea.
- Which Development. Usually the boundaries will change over the course of a piece. The pitch range may move from low to high, or from narrow to wide; the durations may become shorter, etc.
- Which Scalings. Let us think about this in more detail.
In the example above we used two implicit scalings. The pitches have been quantised to the keys of a piano or keyboard. Why? We are obviously not playing a piano here… What other possibilities might there have been? One would be: no scaling at all. This is the easiest way to go – whether it is really the best, or simple laziness, can only be decided by the composer or the listener.
Instead of using the equal tempered chromatic scale, or no scale at all, you can use any other way of selecting or quantising pitches. Be it one which has been, or is still, used in any part of the world, or be it your own invention, by whatever fantasy or system.
As regards the durations, the example above applied no scaling at all. This was definitely laziness…
The next example is essentially the same as the previous one, but it uses a pitch scale which represents the overtone scale, from the second partial up to the 32nd partial. This scale is written into an array by a statement in instrument 0. The durations have fixed possible values which are written into an array by hand (from the longest to the shortest). The values in both arrays are then selected according to their position in the array.
EXAMPLE 15D06_scalings.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
;****UDO DEFINITIONS****
opcode linrnd_low, i, iii
;linear random with precedence of lower values
iMin, iMax, iMaxCount xin
;set counter and initial (absurd) result
iCount = 0
iRnd = iMax
;loop and reset iRnd
until iCount == iMaxCount do
iUniRnd random iMin, iMax
iRnd = iUniRnd < iRnd ? iUniRnd : iRnd
iCount += 1
enduntil
xout iRnd
endop
xout iRnd
endop
instr notes
;how many notes to be played
iHowMany = p4
;by which distribution with how many units
iWhich = giSequence[giSeqIndx]
iDistrib = int(iWhich)
iUnits = round(frac(iWhich) * 10)
giSeqIndx += 1
;call instr again if sequence has not been ended
if giSeqIndx < lenarray(giSequence) then
event_i "i", "notes", p3, 1, iHowMany
;or exit
else
event_i "i", "exit", p3, 1
endif
endin
instr exit
exitnow
endin
</CsInstruments>
<CsScore>
i "notes" 0 1 23 ;set number of notes per instr here
e 99999 ;allow a long performance (the exit instrument will stop csound)
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
Markov Chains
A typical case for a Markov chain in music is a sequence of certain pitches or notes. For each
note, the probability of the following note is written in a table like this:
This means: the probability that element a is repeated is 0.2; the probability that b follows a is 0.5; the probability that c follows a is 0.3. The sum of all probabilities must, by convention, add up to 1. The following example shows the basic algorithm which evaluates the first line of the Markov table above, in the case that the previous element was a.
EXAMPLE 15D07_markov_basics.csd
<CsoundSynthesizer>
<CsOptions>
-ndm0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 1
seed 0
instr 1
iLine[] array .2, .5, .3
iVal random 0, 1
iAccum = iLine[0]
iIndex = 0
until iAccum >= iVal do
iIndex += 1
iAccum += iLine[iIndex]
enduntil
printf_i "Random number = %.3f, next element = %c!\n", 1, iVal, iIndex+97
endin
</CsInstruments>
<CsScore>
r 10
i 1 0 0
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
The probabilities are 0.2, 0.5 and 0.3. First a uniformly distributed random number between 0 and 1 is generated. An accumulator is set to the first element of the line (here 0.2) and interrogated as to whether it is larger than the random number. If so, the index is returned; if not, the second element is added (0.2+0.5=0.7), and the process is repeated until the accumulator is greater than or equal to the random value. The output of the example should show something like this:
Random number = 0.850, next element = c!
Random number = 0.010, next element = a!
Random number = 0.805, next element = c!
Random number = 0.696, next element = b!
Random number = 0.626, next element = b!
Random number = 0.476, next element = b!
Random number = 0.420, next element = b!
Random number = 0.627, next element = b!
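The accumulator algorithm is language-independent. As a sketch, here is the same selection step in Python (function and variable names are illustrative):

```python
import random

def next_element(rng, row):
    # add up the probabilities of the row until the accumulator
    # reaches a uniform random number between 0 and 1; the index
    # reached at that point is the next element
    r = rng.random()
    accum = 0.0
    for index, prob in enumerate(row):
        accum += prob
        if accum >= r:
            return index
    return len(row) - 1  # guard against floating-point round-off

row_a = [0.2, 0.5, 0.3]  # transition probabilities after element a
rng = random.Random(5)
draws = [next_element(rng, row_a) for _ in range(30000)]
freq_b = draws.count(1) / len(draws)  # should approach 0.5
```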
The next example puts this algorithm into a User Defined Opcode. Its input is a Markov table as a two-dimensional array, and the previous line as index (starting with 0). Its output is the next element, also as index. There are two Markov chains in this example: seven pitches and three durations, defined in the two-dimensional arrays giProbNotes and giProbDurs. Both Markov chains run independently of each other.
EXAMPLE 15D08_markov_music.csd
<CsoundSynthesizer>
<CsOptions>
-m128 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
seed 0
</CsInstruments>
<CsScore>
i "trigger_note" 0 100
</CsScore>
</CsoundSynthesizer>
Random Walk
In the context of movement between random values, walk can be thought of as the opposite of
jump. If you jump within the boundaries A and B, you can end up anywhere between these bound-
aries, but if you walk between A and B you will be limited by the extent of your step - each step
applies a deviation to the previous one. If the deviation range is slightly more positive (say from
-0.1 to +0.2), the general trajectory of your walk will be in the positive direction (but individual steps
will not necessarily be in the positive direction). If the deviation range is weighted negative (say
from -0.2 to 0.1), then the walk will express a generally negative trajectory.
One way of implementing a random walk is to take the current state, derive a random deviation, and add this deviation to the current state to obtain the next one. The next example shows two ways of doing this.
The pitch random walk starts at pitch 8 in octave notation. The general pitch deviation gkPitchDev
is set to 0.2, so that the next pitch could be between 7.8 and 8.2. But there is also a pitch direction
gkPitchDir which is set to 0.1 as initial value. This means that the upper limit of the next random
pitch is 8.3 instead of 8.2, so that the pitch will move upwards in a greater number of steps. When
the upper limit giHighestPitch has been crossed, the gkPitchDir variable changes from +0.1 to -0.1,
so after a number of steps, the pitch will have become lower. Whenever such a direction change
happens, the console reports this with a message printed to the terminal.
The density of the notes is defined in notes per second, and is applied as frequency to the metro opcode in instrument walk. The lowest possible density giLowestDens is set to 1, the highest giHighestDens to 8 notes per second, and the first density giStartDens is set to 3. The possible random deviation for the next density is defined in a range from zero to one: zero means no deviation at all; one means that the next density can alter the current density in a range from half the current value to twice the current value. For instance, if the current density is 4, gkDensDev=1 would yield a density between 2 and 8. The direction of the densities gkDensDir in this random walk follows the same range 0..1. Assuming there is no deviation of densities at all (gkDensDev=0), gkDensDir=0 will produce ticks at a constant speed, whilst gkDensDir=1 will produce a very rapid increase in speed. As with the pitch walk, the direction parameter changes from plus to minus when the upper border has been crossed, and vice versa.
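As a sketch of the direction-flipping logic described above, here is a hypothetical Python version (the parameter names mirror the description, but this is not a port of the Csound instrument):

```python
import random

def pitch_walk(rng, steps, start=8.0, dev=0.2, direction=0.1,
               low=7.0, high=9.0):
    # illustration: each step adds a random deviation whose range is
    # widened by `direction` on one side, so the walk drifts up or
    # down; crossing a border reverses the drift
    pitch = start
    path = []
    for _ in range(steps):
        lo_step = -dev + min(direction, 0.0)
        hi_step = dev + max(direction, 0.0)
        pitch += rng.uniform(lo_step, hi_step)
        if pitch > high:
            direction = -abs(direction)  # now walk downwards
        elif pitch < low:
            direction = abs(direction)   # now walk upwards
        path.append(pitch)
    return path

path = pitch_walk(random.Random(6), 2000)
```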
EXAMPLE 15D09_random_walk.csd
<CsoundSynthesizer>
<CsOptions>
-m128 -odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
seed 1 ;change to zero for always changing results
giHighestPitch = 9
;set pitch startpoint, deviation range and the first direction
giStartPitch = 8
gkPitchDev init 0.2 ;random range for next pitch
gkPitchDir init 0.1 ;positive = upwards
gkDensDir = -gkDensDir
if kDens > giHighestDens then
printks " Density touched upper border - now becoming less dense.\n", 0
else
printks " Density touched lower border - now becoming more dense.\n", 0
endif
endif
endif
endin
</CsInstruments>
<CsScore>
i "walk" 0 999
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
II. SOME MATHS PERSPECTIVES ON RANDOM
Random Processes
The relative frequency of occurrence of a random variable can be described by a probability function (for discrete random variables) or by a density function (for continuous random variables). When two dice are thrown simultaneously, the sum x of their numbers can be 2, 3, …, 12. The following figure shows the probability function p(x) of these possible outcomes. p(x) is always less than or equal to 1. The sum of the probabilities of all possible outcomes is 1.
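The dice probabilities can be computed exactly. This short Python fragment (an illustration) builds the probability function p(x) and confirms that all probabilities sum to 1:

```python
from collections import Counter
from fractions import Fraction

# illustration: count how often each sum x occurs among the 36
# equally likely outcomes of two dice, and convert the counts
# to exact probabilities
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
p = {x: Fraction(n, 36) for x, n in counts.items()}
```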
For continuous random variables the probability of getting a specific value x is 0, but the probability of getting a value within a certain interval can be indicated by an area that corresponds to this probability. The function f(x) over these areas is called the density function. With the following density, the chance of getting a number smaller than 0 is 0, of getting a number between 0 and 0.5 is 0.5, of getting a number between 0.5 and 1 is 0.5, and so on. Density functions f(x) can reach values greater than 1, but the area under the function is 1.
Csound provides opcodes for some specific densities but no means to produce random numbers with user-defined probability or density functions. The opcodes rand_density and rand_probability (see below) generate random numbers with probabilities or densities given by tables. They are realized using the so-called rejection sampling method.
Rejection Sampling
The principle of rejection sampling is first to generate uniformly distributed random numbers in the required range, and then to accept these values in proportion to a given density function (or otherwise reject them). Let us demonstrate this method using the density function shown in the next figure. (Since the rejection sampling method uses only the shape of the function, the area under the function need not be 1.) We first generate uniformly distributed random numbers rnd1 over the interval [0, 1]. Of these we accept a proportion corresponding to f(rnd1). For example, the value 0.32 will only be accepted in the proportion of f(0.32) = 0.82. We do this by generating a new random number rnd2 between 0 and 1 and accepting rnd1 only if rnd2 < f(rnd1); otherwise we reject it (see Signals, Systems and Sound Synthesis6, chapter 10.1.4.4).
6 Neukom, Martin. Signals, systems and sound synthesis. Bern: Peter Lang, 2013. Print.
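A compact Python sketch of the rejection sampling loop (the triangle density tri below is a hypothetical stand-in for the table-defined densities used in the Csound example):

```python
import random

def rand_density(rng, lo, hi, density):
    # rejection sampling: draw a uniform candidate rnd1, then accept
    # it only if a second uniform number rnd2 falls below
    # density(rnd1); otherwise repeat
    while True:
        rnd1 = rng.uniform(lo, hi)
        rnd2 = rng.random()
        if rnd2 < density(rnd1):
            return rnd1

def tri(x):
    # hypothetical density on [0, 1]: a triangle peaking at 0.5
    return 1.0 - abs(2.0 * x - 1.0)

rng = random.Random(7)
samples = [rand_density(rng, 0.0, 1.0, tri) for _ in range(20000)]
middle = sum(1 for s in samples if 0.25 <= s <= 0.75) / len(samples)
# the triangle puts 3/4 of its area between 0.25 and 0.75
```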
EXAMPLE 15D10_Rejection_Sampling.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 10
nchnls = 1
0dbfs = 1
instr 1
krnd rand_density 400,800,2
aout poscil .1,krnd,1
out aout
endin
instr 2
krnd rand_probability p4,p5,p6
aout poscil .1,krnd,1
out aout
endin
</CsInstruments>
<CsScore>
;sine
f1 0 32768 10 1
;density function
f2 0 1024 6 1 112 0 800 0 112 1
;random values and their relative probability (two dice)
f3 0 16 -2 2 3 4 5 6 7 8 9 10 11 12
f4 0 16 2 1 2 3 4 5 6 5 4 3 2 1
;random values and their relative probability
f5 0 8 -2 400 500 600 800
f6 0 8 2 .3 .8 .3 .1
i1 0 10
i2 0 10 4 5 6
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
Random Walk
In a series of random numbers, the single numbers are independent of each other. Parameters (left figure) or paths in space (the two-dimensional trajectory in the right figure) created by random numbers jump around wildly.
Example 1
Table[RandomReal[{-1, 1}], {100}];
We get a smoother path, a so-called random walk, by adding at every time step a random number r to the current position x (x += r).
Example 2
The path becomes even smoother by adding a random number r to the current velocity v:
v += r
x += v
The path can be bounded to an area (figure to the right) by inverting the velocity if the path exceeds the limits (min, max):
if(x < min || x > max) v *= -1
The movement can be damped by decreasing the velocity at every time step by a small factor d:
v *= (1-d)
Example 3
x = 0; v = 0; walk = Table[x += v += RandomReal[{-.01, .01}], {300}];
The path becomes smoother still by adding a random number r to the current acceleration a, the change of the acceleration, and so on:
a += r
v += a
x += v
Example 4
x = 0; v = 0; a = 0;
Table[x += v += a += RandomReal[{-.0001, .0001}], {300}];
(see Martin Neukom, Signals, Systems and Sound Synthesis chapter 10.2.3.2)
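The velocity-based walk with bounds and damping translates directly into a short Python sketch combining v += r, the damping v *= (1-d) and the border inversion (an illustration, not the Csound instrument below):

```python
import random

def smooth_walk(rng, steps, r=0.01, d=0.001, lo=-1.0, hi=1.0):
    # illustration: random walk on the velocity. Each step nudges v
    # by a random amount, damps it, moves x, and inverts v when x
    # leaves the bounds.
    x, v = 0.0, 0.0
    path = []
    for _ in range(steps):
        v += rng.uniform(-r, r)
        v *= (1.0 - d)
        x += v
        if x < lo or x > hi:
            v = -v
        path.append(x)
    return path

path = smooth_walk(random.Random(8), 3000)
```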
EXAMPLE 15D11_Random_Walk2.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 128
nchnls = 1
0dbfs = 1
; random frequency
instr 1
kx random -p6, p6
kfreq = p5*2^kx
aout oscil p4, kfreq, 1
out aout
endin
kv = kv + ka
kv = kv*(1 - p8)
kx = kx + kv
kv = (kx < -p6 || kx > p6?-kv : kv)
aout oscili p4, kfreq, 1
out aout
endin
</CsInstruments>
<CsScore>
f1 0 32768 10 1
; i1 p4 p5 p6
; i2 p4 p5 p6 p7
; amp c_fr rand damp
; i2 0 20 .1 600 0.01 0.001
; amp c_fr d_fr rand damp
; amp c_fr rand
; i1 0 20 .1 600 0.5
; i3 p4 p5 p6 p7 p8
i3 0 20 .1 600 1 0.001 0.001
</CsScore>
</CsoundSynthesizer>
;example by martin neukom
III. MISCELLANEOUS EXAMPLES
Csound has a range of opcodes and GEN routines for the creation of various random functions and distributions. Perhaps the simplest of these is random, which simply generates a random value within user-defined minimum and maximum limits at i-time, k-rate or a-rate, according to the variable type of its output:
ires random imin, imax
kres random kmin, kmax
ares random kmin, kmax
Values are generated according to a uniform random distribution, meaning that any value within the limits has an equal chance of occurrence. Non-uniform distributions, in which certain values have a greater chance of occurrence than others, are often more useful and musical. For these purposes, Csound includes the betarand, bexprnd, cauchy, exprand, gauss, linrand, pcauchy, poisson, trirand, unirand and weibull random number generator opcodes. The distributions generated by several of these opcodes are illustrated below.
In addition to these so-called x-class noise generators, Csound provides random function generators, which output values that change over time in various ways. Remember that most of these random generators will need the seed set to zero if you want different random values on each run.
randomh generates new random numbers at a user-defined rate. The previous value is held until a new value is generated, at which point the output immediately assumes that value.
The instruction:
kmin = -1
kmax = 1
kfreq = 2
kout randomh kmin, kmax, kfreq
will output a random line which changes its value every half second between the minimum of -1 and the maximum of 1. Special care should be given to the fourth parameter imode, which is 0 by default but can be set to 1, 2, or 3. For imode=0 and imode=1 the random line will start at the minimum (here -1) and hold this value until the first period has finished. For imode=2 it will start at a value set by the user (by default 0), whereas for imode=3 it will start at a random value between minimum and maximum. This is a generation over five seconds:
Usually we will use imode=3, as we want the random line to start immediately at a random value.
The same options are valid for randomi, which is an interpolating version of randomh. Rather than jumping to new values when they are generated, randomi interpolates linearly to the new value, reaching it just as a new random value is generated. Now we can see the difference between imode=0 and imode=1: the former remains on the minimum for one whole period and begins its first interpolation after it; the latter also starts on the minimum but begins interpolating immediately. Replacing randomh with randomi in the above code snippet would result in the following output:
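As an illustration of the contrast between holding and interpolating, here is a Python sketch of both behaviours (names and parameters are hypothetical, not Csound's implementation):

```python
import random

def random_line(rng, lo, hi, freq, sr, seconds, interpolate):
    # illustration: draw random breakpoints `freq` times per second,
    # then either hold each value (randomh-like) or ramp linearly
    # towards the next one (randomi-like)
    n_points = int(seconds * freq) + 2
    points = [rng.uniform(lo, hi) for _ in range(n_points)]
    seg_len = int(sr / freq)
    out = []
    for i in range(n_points - 1):
        a, b = points[i], points[i + 1]
        for j in range(seg_len):
            out.append(a + (b - a) * j / seg_len if interpolate else a)
    return out[:int(seconds * sr)]

rng = random.Random(9)
held = random_line(rng, -1, 1, 2, 100, 5, interpolate=False)
ramped = random_line(rng, -1, 1, 2, 100, 5, interpolate=True)
```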
In practice, randomi's angular changes in direction as new random values are generated might be audible, depending on how it is used. rspline (or the simpler jspline) allows us to specify not just a single frequency but a minimum and a maximum frequency; the resulting function is a smooth spline between the minimum and maximum values and these minimum and maximum frequencies. The following input:
kmin = -0.95
kmax = 0.95
kminfrq = 1
kmaxfrq = 4
asig rspline kmin, kmax, kminfrq, kmaxfrq
We need to be careful with what we do with rspline's output, as it can exceed the limits set by kmin and kmax. Minimum and maximum values can be set conservatively, or the limit opcode can be used to prevent out-of-range values that could cause problems.
The following example uses rspline to humanise a simple synthesiser. A short melody is played, first without any humanising and then with humanising. rspline random variation is added to the amplitude and pitch of each note, in addition to an i-time random offset.
EXAMPLE 15D12_humanising.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
seed 0
outs aSig,aSig
endin
</CsInstruments>
<CsScore>
t 0 80
#define SCORE(i) #
i $i 0 1 60
i . + 2.5 69
i . + 0.5 67
i . + 0.5 65
i . + 0.5 64
i . + 3 62
i . + 1 62
i . + 2.5 70
i . + 0.5 69
i . + 0.5 67
i . + 0.5 65
i . + 3 64 #
$SCORE(1) ; play melody without humanising
b 17
$SCORE(2) ; play melody with humanising
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy
The final example implements a simple algorithmic note generator. It makes use of GEN17 to generate histograms which define the probabilities of certain notes and certain rhythmic gaps occurring.
EXAMPLE 15D13_simple_algorithmic_note_generator.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
instr 1
kDur init 0.5 ; initial rhythmic duration
kTrig metro 2/kDur ; metronome freq. 2 times inverse of duration
kNdx trandom kTrig,0,1 ; create a random index upon each metro 'click'
kDur table kNdx,giDurs,1 ; read a note duration value
schedkwhen kTrig,0,0,2,0,1 ; trigger a note!
endin
instr 2
iNote table rnd(1),giNotes,1 ; read a random value from the function table
aEnv linsegr 0, 0.005, 1, p3-0.105, 1, 0.1, 0 ; amplitude envelope
iPlk random 0.1, 0.3 ; point at which to pluck the string
iDtn random -0.05, 0.05 ; random detune
aSig wgpluck2 0.98, 0.2, cpsmidinn(iNote+iDtn), iPlk, 0.06
out aSig*aEnv
endin
</CsInstruments>
<CsScore>
i 1 0 300 ; start 3 long notes close after one another
i 1 0.01 300
i 1 0.02 300
e
</CsScore>
</CsoundSynthesizer>
;example by Iain McCurdy