You might think, seeing ‘Cyberfolk,’ that this is some genre I’m trying to promote. I use it instead for a specific spirit of music: one separated from formal music instruction, and informed by some of the same motivations as ‘folk,’ a word that has since been usurped as another way to market traditions we might have no connection to at all.
“All music is folk music; I ain’t never heard no horse sing a song.” – Louis Armstrong
Cyberfolk, then, is what we get when DAWs replace mandolins and fiddles as instruments we grow into by chance and circumstance. Imagine being able to afford a mandolin. Imagine being able to afford to take on tens or hundreds of thousands of dollars of debt for music school. Imagine thinking this should stop anyone from teaching their niblings how to sidechain out bands on an equalizer to give a synth a percussive quality.
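That sidechain trick – ducking one signal whenever another one hits – takes only a few lines to demonstrate. This is a broadband sketch of the idea (the equalizer version applies the same gain curve to a single band instead of the whole signal); the function name and parameters are mine, not from any particular DAW:

```python
import math

def duck(synth, trigger, amount=0.8, attack=0.01, release=0.2, sr=44100):
    """Reduce the synth's level whenever the trigger signal is loud.

    A one-pole envelope follower tracks the trigger (fast attack, slow
    release), and the synth is scaled by (1 - amount * envelope), so a
    kick on the trigger channel punches a hole in the synth: the
    'percussive quality' the sidechain gives it.
    """
    a = math.exp(-1.0 / (attack * sr))   # coefficient while rising
    r = math.exp(-1.0 / (release * sr))  # coefficient while falling
    env = 0.0
    out = []
    for s, t in zip(synth, trigger):
        level = abs(t)
        coef = a if level > env else r   # attack when rising, else release
        env = coef * env + (1 - coef) * level
        out.append(s * (1.0 - amount * env))
    return out

# Toy demo: a constant pad ducked against a kick-style burst.
synth = [1.0] * 100
trigger = [0.0] * 50 + [1.0] * 50
ducked = duck(synth, trigger, amount=0.8, attack=0.005, release=0.05, sr=1000)
```

Before the burst the pad passes through untouched; once the trigger hits, the gain dives toward 1 − amount and recovers on the release time.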
A short concept album
I did the OST for a gamebook, available on Android:
“A coastal town is overrun by horrors, and only YOU can save it!
YOU are a hunter; a man or woman of wealth and leisure who has turned their time to fighting the fiendish monsters that assail humanity. Approached by the poor villagers of Pale Harbour, and informed that a terrible evil has arisen from the sea, can YOU unravel the mystery and drive back the beasts?”
Available on the Google Play store
MuseNet, a product of OpenAI, is a massive neural network trained on MIDI data from a large number of composers. MuseTree and the MuseNet MIDI Tool are useful interfaces to compose with it. The former has a nicer interface, but its samples leave something to be desired – most of what’s here was made with the second.
Much has been made of it replacing musicians, composers, etc., but the process of composition with this is not typically ‘push button, receive quality.’ It’s more of a thick swamp of barely compatible musical ideas that can be pushed, with great effort, into something regular musicians still scoff at and tell you is soulless and not music after all. So, of course, being the contrary bastard I am, I’ve used it a bunch anyway.
First is one using the model of a single composer, Erik Satie.
Each iteration lets you choose the length of the next generated section. Sometimes, to avoid bad clichés and move the piece in a new direction, you have to limit this as the tree progresses, or you’re stuck regenerating a segment for much longer than you’d like. Temperature settings allow for more typical or atypical writing – the lower the temperature, the more ‘stuck’ in the model it becomes, to the point where it can obsess over a single chord progression as it slowly descends into madness. Temperatures higher than 1 seem to draw from ideas the selected composer might use rarely, or avoid for being not very good; but if the song as a whole gets stuck, raising it briefly is very useful for kicking it out of that spot.
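For anyone curious what that temperature knob is actually doing: it’s the standard sampling-temperature trick from neural generation, scaling the model’s output scores before it picks the next token (here, the next note event). A toy sketch with made-up logits – nothing MuseNet-specific:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, softmax them, then sample one index.

    temperature < 1 sharpens the distribution (the model keeps repeating
    its favorite moves, 'stuck' on one chord progression); temperature > 1
    flattens it, so rarer, stranger continuations become more likely."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index, weighted by the softmax probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy example: three candidate "next notes" with one strong favorite.
logits = [4.0, 1.0, 0.5]
cold = [sample_with_temperature(logits, 0.2) for _ in range(1000)]
hot = [sample_with_temperature(logits, 3.0) for _ in range(1000)]
```

At 0.2 the favorite option wins nearly every draw; at 3.0 the distribution flattens out and the odd choices a composer would rarely use start showing up.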
I found it really easy to retain Satie’s sense of humor. It quotes Beethoven’s 5th rather excessively and does not give up on making that a whole thing. At the end of the song I just want to pet its stupid fuzzy head. It ends on a single note that it really thinks is the best possible note to end on, and to hell with your opinion, this is a MASTERPIECE.
Next is an example of what can be done by alternating models, in this case Chopin and Rachmaninov.
You can swap ‘who’ is doing your composing any time you want. When you do this, it’s very difficult to know if the direction you’re going has a reasonable ending even available. These two seemed unusually compatible even with a rigid 1/1 ratio.
Try to get too cheeky, and bring Bjork to meet Beethoven, and things can get a little uh… well.
One of the most challenging things about composing with this is that, since you’re only ever hearing continuations, there’s no indication of where a piece will end up. That makes it very easy to lose good song structure. Contour is totally absent until it occurs – you’ll want it to follow what’s clearly a prime opportunity for build-up, and it will decide to bridge to an entirely new idea instead, or vice versa.
Compromise is inevitable – breaking measures to follow brilliance can sometimes be for the best. If it generates something genuinely good, but only on a microscopic level, tearing yourself away from it because you know doing so will save the whole song from disaster is frustrating. Listening to the whole thing over takes up much of the time; it’s the only way to re-anchor yourself in how the piece is actually going and get an idea of where it could or should go the next cycle.
I’m sure this process could be improved with some application of theory – the AI needs more guard rails to follow for fast production. For instance, I’ve seen times where MuseNet will abruptly decide to end a song 20 seconds in.
Here’s an example using three models – Tchaikovsky, Debussy, and Satie (picked because they apparently influenced one another) – switched without any predetermined pattern, whenever I felt it would work best.
Composing this way, I have honestly learned more about some composers than I ever would have by casually enjoying classical music. It’s not at all a replacement for learning to write music, but it does let you explore a massive set of brand-new idea spaces. It’s funny and entertaining, but it’s hard work to create anything of quality.
People anxious about this somehow ruining music should really go ahead and try to use it. They should remember that their music comes from musicians, no matter how convoluted that process gets. Pop music is already written by committee; if you still find enjoyment in it, it’s only because those songs still have people in them making choices.
You can buy a selection of these on my Bandcamp if you’d like to nudge me to make more. Most of what I’ve made is free because of licensing for samples – many are CC BY-NC. Details forthcoming.