- Opinion
- 18 Sep 24
The Hermitage Green singer discusses his unique AI songwriting experiment, and why the technology becoming the norm in the music industry mightn't necessarily be a bad thing.
As a member of Hermitage Green, I’ve been writing and releasing songs for well over a decade. I also lecture at Trinity Laban Conservatoire of Music & Dance, and at the Institute of Contemporary Music Performance in London. Across virtually every avenue of work in the music industry, computer-generated songs have gone from a distant prospect to an everyday reality. Some see AI as an evolution of existing technologies, while others envisage a dystopia of generic, soulless art and unemployed artists.
To discover how creative these systems are and the challenges they could pose to artists, I spent the first few months of 2024 researching AI songwriting software.
The Method
I set out a plan:
Firstly, I would use AI to generate a portfolio of songs. I would then submit them to a first-year songwriting assessment to be scrutinised by a university assessment panel, who would issue a grade and feedback based on the tracks' melodic, harmonic, and lyrical components, as well as other factors like stylistic context and reflections on technique and process. This is how music is typically assessed at third level, and is probably the fairest way to go about the somewhat arbitrary task of grading a song.
I spent weeks experimenting with software like Loudly, ChatGPT, Ryter, Mubert and Melobytes. Most of these platforms are free to use and easy to navigate.
That said, it was difficult to generate chords, lyrics and melody from a single prompt, e.g. “write me a song that sounds like David Bowie, in the lyrical style of the Wolfe Tones, but sing it like Barbra Streisand” (it’s no wonder Nick Cave referred to AI songs as “the apocalypse”).
To work around these limitations, I generated backing tracks on programmes like Loudly and Mubert and used ChatGPT or Ryter for lyrics, before fashioning my own melody to make the words fit the music. This could be considered a “contaminating element”, as it made me a creative contributor, but at the time it was the most effective way of generating a fully formed track (it should be noted that the systems have since become a lot more advanced).
How Does This Technology Work?
AI songwriting tools are built on neural networks trained on enormous catalogues of existing music and lyrics. When a user gives a prompt, the model draws on the patterns it has learned from that training data, recombining the conventions of the requested genre to produce a new "original" piece.
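To make that concrete for the lyric-writing half of the process, here is a minimal sketch of how a language model of the kind behind ChatGPT can be prompted for a lyric draft, using the openai Python library. The model name, prompt and setup are illustrative assumptions rather than the exact workflow I used.

```python
# Minimal, illustrative sketch of prompting a language model for lyrics.
# Assumes the openai Python library (v1.x) and an API key in the environment;
# the model name and prompt below are placeholders, not my actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write two verses and a chorus for a folk song about emigration, "
    "in a wistful tone, avoiding perfect end-rhymes where possible."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a songwriting assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)  # the generated lyric draft
```

Even with a detailed prompt like that, what comes back is shaped entirely by the patterns in the model's training data, which goes some way to explaining the results described below.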
Everything we write is also coloured by music we’ve heard and been inspired by throughout our lives. What makes a song unique and original is personality and idiosyncrasy - a new, authentic element that we recognise, remember, and go back to. In this regard, I found the AI songwriting programmes to be underwhelming. At the heart of every good song is a lived, emotional experience. I felt the lyrics and music generated by the AI systems were generic and cliched.
There was also a strong tendency toward precise phonetic rhyming. Is that bad? Well, in a not-so-charitable poll, NME readers once voted the Des’ree lyric “I don’t want to see a ghost, it’s the sight that I fear most, I’d rather have a piece of toast” the worst line ever written. Rhyming just for the sake of it comes off as mechanical, even when human beings do it. For any decent songwriter, meaning takes precedence over the need to rhyme. It doesn’t matter if the rhyming scheme isn’t perfect, and sometimes the lack of an expected rhyme can create a dramatic effect. AI systems have a long way to go if emotionally driven songs are what people respond to in the future.
Another issue I came across was censorship. I found the Ryter programme to be better than ChatGPT in this respect, but it was prudish nonetheless. For example, it refused to write lyrics if the prompts contained words like wanker and penis, or names like Muammar Gaddafi. Not very rock 'n' roll if you ask me.
The Results
After a couple of weeks I had three AI-generated tracks. One of my talented students lent me her vocals, bringing the tunes to life and reminding me of the immediate impact of a human voice. Once the songs were finished, I used ChatGPT to generate an extensive essay on how they were written, complete with a very well-cited bibliography, all based on nothing. This took about 20 seconds.
I submitted the project for an end-of-year songwriting assessment, where the songs were included in a batch of 70 other student portfolios. While the assessment panel were aware a research project was taking place, they were not informed of the use of AI, in order to avoid bias.
A couple of weeks later, the results came back. The songs received 72% (a distinction) and the ChatGPT essay discussing my “writing process” received 68% (a second). Most surprisingly, the project hadn’t raised a single eyebrow.
The markers liked the melodies (which I wrote) and the relentless use of perfect rhyme didn’t seem to bother them too much. Some marks were lost on structure, as all three songs contained an identical pattern of verse, chorus, verse, chorus, end, with no bridge sections, a common occurrence in AI-generated tracks.
While I don't know for certain, it could be that the algorithm is trying to replicate the fact that many number 1 songs from the 21st century tend not to have a bridge or middle 8. The late Ralph Murphy speculated that this trend reflects our dwindling attention spans.
What Now?
So, what of this dystopian future, the so-called AI “apocalypse” Nick Cave speaks of?
Are advanced algorithms going to be churning out songs for every occasion? Will we be flocking to music festivals and venues with our robot friends to see our favourite robot artists, before going home to put our little robot-human-hybrid babies to bed?
It’s all happening so fast that it makes you feel slow, but I don’t see things in such a gloomy light. I certainly don’t see AI songwriting as a death knell for artists.
In the 1840s, mathematician Ada Lovelace predicted that computers might one day be used to compose music. A century and a half later, David Bowie was employing a computer-driven random word generator to inspire lyrical ideas. AI is the next step in that evolution. Songwriting processes have been accelerated by machines for decades, and the listener has always been more interested in the humans operating them than the devices themselves.
Songs are strange little vessels of emotion. They're gestalt; greater than the sum of their parts. Far more than just a bunch of notes and words.
Take 'Tears in Heaven' by Eric Clapton. It's an objectively heartfelt piece of music, lyrically and harmonically, and when we learn that it was written about the death of his four-year-old son, it takes on a whole other layer of heartbreaking context.
Human beings are tribal. We connect with each other through stories, lived experiences and shared emotions. Our inherent need for connection cannot be replaced by algorithms mimicking human creativity. Instead, I believe we are about to witness the next generation of songwriters, who will inevitably use AI as a creative tool.
Although the songs that I generated were soulless and, let's face it, pretty shite, I was struck by a strong sense of collaboration between the AI systems and myself. It was like we were co-writers, and I didn’t expect that.
Good art is often transgressive. It challenges conventions by breaking the rules. The songwriters of the future will lean on AI as a stimulus to generate authentic and original art, to express themselves and to connect with audiences in new ways. It will be interesting to see how these authentic-artificial collaborations subvert the norm.