The Machine Starts: Progressive Music, AI Slop, and the Fight for Artistic Expression
You talk as if a god had made the Machine… I believe that you pray to it when you are unhappy. Men made it, do not forget that.
– The Machine Stops, E. M. Forster, 1909
One of the most admirable qualities of musicians working in the progressive genres is their openness to new ideas. Indeed, it’s one of the guiding tenets of composing and listening to progressive music. Unbeholden to the conventions and tropes of other genres, true prog manifests as a willingness to push boundaries; it becomes an almost kleptomaniacal impulse to look across the entire musical spectrum, take the bits that work, and fit them into your own style. Perhaps the most obvious example is Meshuggah’s invention of a whole new rhythmic paradigm, one which has come to define the progressive music of the twenty-first century, birthed a genre, and begat a whole new subgenre thereafter1. Recently, artists in progressive music have begun to incorporate microtonal elements (King Gizzard & the Lizard Wizard, The Mercury Tree, Kostnatění), opening up a whole new tonal language to play with. Meanwhile, French group Mantra masterminded Medium, an album with two tracks that can be played either separately or simultaneously to “create” a third song. The progressive spirit is insatiable and willing to try anything. Even an idea that may well break music.
The rollout of so-called AI has happened at an inexorable pace; you’ll likely have encountered it in your job, in your personal life, or simply through the relentless pace of the news about it. While AI may or may not have utility in realms like medicine and science, we’re going to focus on how it pertains to art, and more specifically music. You’ll likely have seen “AI art” in some form or another. Facebook has been flooded with plausible, algorithmically generated pictures of idyllic scenes designed to farm views for money; Twitter (now known by its dwindling user base as X) is beholden to a Musk-made chatbot called Grok which kept telling users, including its creator, that they’re basically fascists, until they fiddled with its programming and made it a fascist itself; and Instagram has repeatedly shown me videos from an account where a scene from a video game plays while artificially simulated voices of Stewie and Peter Griffin from Family Guy dissect the history of a particular classic prog band. Last year, in an effort to make a small stand against AI slop in our scene, we published a short PSA saying that we would call out AI art used to make album covers, and requested that bands credit the artists they work with so we could credit them in turn. We felt the gesture important, but perhaps it’s insignificant when so many genre giants are willing to play with a technology which stands in stark opposition to the creative impulse and contains much broader threats within its scope.
Introducing the Disrupters
I know your face, I know your voice
I know your girls, I know your boys
I am the lover of your life (and a handy light at night)
I am the apple of your eye.
– Life in the Wires Pt.1, Frost*
In a piece for The New Yorker, the sci-fi short story writer Ted Chiang argues that most art “requires an intention to communicate”. While there’s no universally agreed upon definition of art, it being a broad and nebulous thing, for the purposes of this essay, this general tendency for art to have an intention to communicate is key. Think of the scene in Ferris Bueller’s Day Off in which Cameron stares awestruck into the pointillist masterwork of Georges Seurat, A Sunday on La Grande Jatte. We don’t know what he’s feeling, but that the piece has touched him deeply is clear, and the scene is one of the best depictions of the power of art. A melody in a song, a face in a painting, a sentence in a novel—all can hit us like an emotional freight train. In the moment that an artist has communicated with us, a transference of ideas takes place, the artwork acting as a middleman across space and time. What we get out of their art may not be what they intended. Ray Bradbury famously argued about this with a class of students studying his book-burning dystopian novel Fahrenheit 451; to him, the book was meant to be a warning about television. To basically everyone else who has ever read it, however, the book communicated a chilling tale of censorship taken to a terrible extreme. Either way, such examples show that communication of a sort has nevertheless taken place.
This communication is achieved because artists make choices. Chiang argues that art is the product of a series of choices made by an artist: every word in a novel and every brushstroke of a painting is a choice. In music, the complexity of choices is immense: notes, instruments and their interactions, vibrato, tone, effects, production, and far more besides. With generative AI, people have been able to generate images, stories, and even music. But the number of choices made is minimal. As Chiang says:
When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices. If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making.
The same is true of AI music. Google, the company that dropped its infamous slogan “don’t be evil” when it became apparent it could no longer live up to that standard, trains its music-generating AI on AudioSet, a dataset of over two million audio clips, mostly drawn from YouTube. All the interacting choices usually handled by artists are instead handed over to algorithms which draw on datasets of granularly analysed audio samples to make plausible decisions as to how your prompt should sound, based on how all other music sounds. The result is a distillation, and herein lies its anti-creative ethos. A human may be influenced by hundreds of different factors—god knows we’ve reviewed our fair share of bands who made the choice to sound like Opeth or Tool or Dream Theater—but it’s nevertheless a choice. Handing so many decisions over to a generative AI means that almost all opportunities for originality are neatly avoided. Of course, this is the point, as Mikey Schulman, CEO of the AI music company Suno, said: “It’s not really enjoyable to make music now… It takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software… I think the majority of people don’t enjoy the majority of the time they spend making music.” This is a fundamental misunderstanding of why people engage with and produce art; as Chiang puts it, “Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium.”
We now have a wealth of examples of AI use within the prog and metal scenes. Most of these are, thankfully, restricted to album art. Among bands we’ve reviewed, Time Voyager by Barock Project, The Path of Decoherence by Advocacy, and, possibly, The Lightbringers by Orion (art “by” Hugh Syme) all spring to mind. Veteran Floridian death metal outfit Deicide courted controversy after adorning their thirteenth album, Banished by Sin, with dreadful AI art; frontman Glen Benton responded with typical sang-froid. Dream Theater’s longtime album artist, Hugh Syme, was also implicated by internet sleuths postulating that inconsistencies on the cover of latest release Parasomnia might be explained by AI use2. Small artists on a tight budget using AI to generate their album art is understandable. It’s less forgivable for established groups with money at their disposal, who could easily commission or license artwork from a human artist, to turn to generative AI, and less forgivable still for established artists themselves, like (allegedly) Hugh Syme, to lazily start using it. Some artists stand by their use of the insurgent tech; others cave to fan pressure, as Pestilence did last year after presenting an AI cover for their greatest hits album, Levels of Perception.

AI album art is the thin end of the wedge. In the underground, some would-be artists are experimenting rather more heavily with AI. A content creator under the moniker D1G1T4L RU1N uses AI to create albums in a variety of genres—the second album of this project shows that sex doesn’t always sell: the AI synthwave record has zero downloads, despite the AI-rendered cover art of a large-chested, conventionally attractive woman3. A “label” called Rift Reaper Records (since removed from Bandcamp) took it a step further, inventing a roster of bands and their new releases collected under one umbrella. The music, as is often the case, sounded passable but uncanny. Singers tended to sound overly digitised, the production had a strange stereo image, and one could tell where a track had been edited thanks to incongruous changes in vocal style or instrumentation, the product of new prompts being inserted.
These people, whom I’m loath to call artists, use generative AI music apps like Udio and Suno to make their compositions. Such apps offer a range of tools. A user can write a prompt, e.g. “make a thrash song in the vein of Metallica”, and receive perhaps two minutes of plausibly Metallica-esque generated slop (which sounds a lot like a description of the last couple of Metallica albums anyway). But with advanced tools, one can edit the initial result: add intros, instrumental sections, codas, change instruments, insert a solo. Doubtless, one has to know one’s way around these technologies to create twenty-minute songs, as many of these content creators do, but we come back to the Chiang problem: while these people make some choices, the bulk are made by the algorithms, and many of those are important qualitative choices. A real band making real music slapping AI art onto their album cover is annoying, but at least the music is real. Nearly everything about these exclusively AI creators, however, is artificial.
In recent weeks, internet sleuths have uncovered a band called The Velvet Sundown, a blues group with over 1.3 million4 monthly Spotify listeners and 48,000 followers at the time of writing (a huge jump from 1,500 followers, just days before)5. The group’s two albums were released on June 5th and June 20th of this year, and the album art and band photo are clearly AI generated. The music is, too. Their Spotify bio even comes with an endorsement from Billboard: “They sound like the memory of something you never lived, and somehow make it feel real.” That quote, as you may have guessed, isn’t real; Billboard never said that. In an article for MusicAlly, writer Stuart Dredge points out that their Spotify “popularity” is largely generated by four Spotify accounts seeding their popular, pre-existing playlists with the content of this AI generated band:
Take Extra Music for example. Its profile has just under 3,000 followers, but its ‘Vietnam War Music’ playlist has 629,311 saves (accounts adding it to their libraries). The 330-song playlist has tracks from a host of Vietnam War-era artists: Creedence Clearwater Revival, the Rolling Stones, Buffalo Springfield, Jimi Hendrix. Oh, and The Velvet Sundown, whose tracks are nestled at numbers 24, 34, 43, 52, 61, 70, 79, 88, 97, 106, 115, 124, 133, 142, 151, 160, 169, 178, 187, 196, 205, 214, 223, 232, 241 and 250 in its running order. 26 tracks in all – 7.9% of the entire playlist for a band with no obvious ties to the war.
It’s not just Spotify. The Velvet Sundown’s music is available on all of the main music platforms, though Spotify is the one with the most obvious public metrics. Deezer, which has its own AI detection software, has flagged the band’s music as “AI generated content”, and the site estimates that around 20,000 tracks uploaded each day, roughly 18% of the total, are made with AI. Outside of those main music platforms, you’ll often find nothing. Though the band bio names band members—”vocalist and mellotron sorcerer Gabe Farrow, guitarist Lennie West, bassist-synth alchemist Milo Rains, and free-spirited percussionist Orion “Rio” Del Mar”—these people don’t exist; they return no search results6. The only purpose of the music is to harvest money from disingenuously obtained streams. Out of curiosity, I looked up the top band in the “fans also like” section on Spotify, and that group, Flaherty Brotherhood, was also AI generated (as attested by Deezer); if AI bands are finding themselves recommended alongside one another, it suggests their promoters may be coordinating to push multiple groups.
While seeding playlists is one tactic for farming listens, one can’t discount the possibility that bot accounts are being set up to artificially bolster the listener and follower counts of fictitious bands; indeed, this is an existing problem with streaming services, and investigations by Wired suggest that bot accounts are artificially inflating the listens of AI generated music. The Velvet Sundown’s 1.3 million monthly listeners, while likely not an enduring figure, is equivalent to that of some of the largest bands in the prog scene—Dream Theater can boast 1.6 million at the time of writing. And yet these AI bands have virtually no internet presence beyond the streaming platforms. These fake artists live in a bubble of artificiality: non-existent accounts endlessly streaming the AI generated music of non-existent people, an entire fictive, digital world coexisting with and bleeding into the wider, human-occupied internet. Naturally, a whole industry is being built to cash in on the AI music grift: free YouTube courses and cheap video tuition; a four-week online course with the prestigious Berklee College of Music (worth one credit) that costs a cool $515; or a class run by an MIT alumnus—which, like every underground trad prog project ever, features a guest appearance from Jordan Rudess (of whom we’ll hear more later)—that retails at an eye-watering $1,500.
While greedy exploiters cash in on the latest money-making scheme, only a lucky few will achieve The Velvet Sundown’s level of virality. The bulk of these AI projects will be lost amid the clutter of new music releases, and most listeners will likely reject what they hear, if they hear anything, repelled by the low quality and uncanniness of the music. However, as the technology improves, telling the difference will become increasingly difficult; artists who aren’t using AI could be implicated if their music seems suspect, as in the recent case of Draugveil’s debut, whose music and realness have been called into question on seemingly little evidence. In the meantime, these AI communities support one another because they’re all believers in the potential powers and possibilities of the technology, powers that extend far beyond music.
The_Book_of_Revelation.epub: A Brief Aside on the Cult of AI
We believe intelligence is in an upward spiral – first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves.
– The Techno-Optimist Manifesto by Marc Andreessen
Most readers will have glimpsed the contents of the slop-bucket that AI ‘art’ has created. From an AI stand-up special by the deceased comedian George Carlin (which may actually have been the work of human comedians pretending to be AI), to terrible AI-generated SpongeBob SquarePants covers of popular songs which led to hasty legislation protecting a person’s right to their voice likeness, the internet has been flooded with countless examples. The journalist Robert Evans has described this trend as “cultural necrophilia”: these algorithms, trained on the sum of human art, are robbing from the dead as well as the living. Evans is something of an aficionado when it comes to cults, and cultishness is inherent to the AI movement. Silicon Valley has become a breeding ground for subcultures and communities with fanatical tendencies, from cult-like work environments in tech companies organised around a Dear Leader to a slew of cult and cult-adjacent movements such as the rationalists, post-rationalists, effective accelerationists, and the Zizians. As an anonymous former Google engineer put it: “If your product isn’t amenable to spontaneously producing a cult, it’s probably not impactful enough.”
Inherent to these groups is an almost messianic belief in the advent of artificial general intelligence (the classic sci-fi idea of “AI” as a sentient, humanlike intelligence), a belief that takes its adherents down all sorts of strange and unethical paths. In the effective altruism and accelerationist movements, this has manifested as a reticence to address ethical concerns, such as those surrounding deepfake pornography or algorithmic bias, on the grounds that doing so would limit the development of artificial general intelligence. At its furthest extreme, we get the strange case of the Zizians and their belief in the Roko’s Basilisk thought experiment, the gist of which is that an otherwise benevolent godlike AI could come into existence and choose to punish all those who knew of its potential existence but did not work towards its creation. The Basilisk is essentially a restatement of Pascal’s Wager, the idea that a rational person should believe in God and behave as though He exists because the infinite torment of Hell isn’t worth the risk, and it has surprising traction—it’s how Elon Musk and Grimes met. In the case of the Zizians, their belief in Roko’s Basilisk and the justification of any action in service of bringing about the coming AI God ultimately led to the murder of at least six people.
Cults committing murders is the extreme end of this thinking, but a messianic fervour suffuses Silicon Valley. The prominent venture capitalist Marc Andreessen claims in his Techno-Optimist Manifesto7: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder,” which is clearly a very dangerous assertion to make. Andreessen, a lifelong Democrat, supported Donald Trump in the 2024 election, citing concerns that Joe Biden would regulate AI, in which Andreessen had invested heavily; indeed, the entire tech sphere’s sudden pivot to supporting the Republican party was largely motivated by such concerns, and it’s paid off for them. When the US Copyright Office asserted its position on AI content earlier this year—“making commercial use of vast troves of copyrighted works, especially where this is accomplished through illegal access, goes beyond established fair use boundaries”—President Trump fired Shira Perlmutter, the Office’s Director. More worryingly still, Trump’s ridiculously named “Big Beautiful Bill” contained, until the Senate stripped it out shortly before passage, a clause banning regulation of AI for ten years.
What one has to bear in mind is that their faith in the power of AI to come is just that: faith. While the science is by no means settled, tech experts suggest that current LLM-based AI inherently lacks the traits ever to achieve the artificial general intelligence these gurus crave, regardless of increasing sophistication; the method by which the industry is currently pushing towards sentient AI may well be a complete non-starter. Indeed, in The AI Con by Emily Bender and Alex Hanna, the authors recall that “a paper written by OpenAI researchers… determined what kinds of tasks in what kinds of jobs could be handled by an LLM by asking the LLM itself.” There’s a circular logic at play in Silicon Valley, a willingness to believe an experimental machine’s algorithmic, ego-pleasing answers. A similar messianic fervour accompanied the crypto boom of recent years and the adjacent run on NFTs. True believers hailed the advent of an alternative to banking free from cruel government oversight; when thousands became victims of pump-and-dump schemes and fraud, people began to suggest that perhaps their deregulated currency needed some regulation after all.
One might reasonably wonder what Spotify’s stake is in the AI game, especially when, as we’ve seen, its platform is allowing the proliferation of AI music. Given that Spotify CEO Daniel Ek recently invested $690 million into the defence company Helsing, which is developing AI-enhanced military hardware, the outlook doesn’t look all that promising. Indeed, Ek partnered with everyone’s favourite innovator in the domain of ranking female college students by their attractiveness, Mark Zuckerberg, to release a joint statement urging Europe to embrace open-source AI of the poorly regulated kind on which current LLMs are built. While the statement makes some valid points regarding the inconsistency of European laws, the ultimate thrust, jettisoning restrictions on AI content so they can bring you worthless products, is clear: “Given the current regulatory uncertainty, Meta won’t be able to release upcoming models like Llama multimodal, which has the capability to understand images.” Again, these billionaires’ only wish is to encourage countries to deregulate so that they can hawk their dubious wares.
The section of the Zuckerberg/Ek statement regarding Spotify is particularly galling:
Looking back, it’s clear that our early investment in AI made the company what it is today: a personalised experience for every user that has led to billions of discoveries of artists and creators around the world. As we look to the future of streaming, we see tremendous potential to use open-source AI to benefit the industry. This is especially important when it comes to how AI can help more artists get discovered. A simplified regulatory structure would not only accelerate the growth of open-source AI but also provide crucial support to European developers and the broader creator ecosystem that contributes to and thrives on these innovations.
Given Spotify’s widespread renown as a platform that pays artists an insulting pittance per stream (maybe some of that $690 million could’ve gone towards paying artists fairly?), and the fact that the generative AI used in 2024’s Spotify Wrapped feature delivered such useless results that it caused a backlash, trumpeting anything about the Spotify experience beggars belief. It’s perhaps unsurprising that a company as distinctly unscrupulous as Spotify would take such a line on AI, but it’s nevertheless worth noting. These investors and company executives fear industry regulation of AI because they’ve invested unimaginable sums of money in it. The technology is sold as cutting-edge, something you simply have to use in order not to be left behind; those susceptible to such heavily marketed fads partake and find, lo and behold, a product which takes the decision-making process out of art, and an all-powerful thing to evangelise about. If every piece of information can legally be harnessed by users of generative AI, then companies and users alike can reap immense profits from content they don’t own and have no right to own. I don’t claim to be a legal scholar, but that sounds rather a lot like theft. The internet has always been a regulatory wild west, but we need to catch up fast. Thank god the real artists haven’t been taken in by this stuff.
Terror & Hubris in the AI-Generated House of Jordan Rudess
There walks a god among us
Who’s seen the writing on the wall
He is the revolution
He’ll be the one to save us all.
– The Gift of Music, Dream Theater
The keyboardist and pianist Jordan Rudess is a musical institution. A child prodigy, he was admitted to the esteemed Juilliard School’s pre-college division at just nine years old. His work with Dream Theater and Liquid Tension Experiment is widely lauded; he’s performed guest spots with a litany of great artists from Steven Wilson and Ayreon to Gleb Kolyadin and Richard Henshall; and his masturbatory excesses in his solo project inspired one of the most disgustingly funny reviews in music criticism. He’s also the founder of Wizdom Music, a software company dedicated to developing apps that explore new virtual avenues for music creation and break down the process of composition. Wizdom has released a number of apps, the most notable of which is GeoShred, a guitar simulator with a breadth of customisability. While such innovations are legitimately great tools for artists, it should come as no surprise that a technophile like Rudess also showed an early interest in AI. Journey to his Instagram page and you’ll be inundated with content relating to the various AI companies he’s endorsing and partnering with; he’s even working with the music labs at MIT on the potential applications of AI.
Sometimes you can just watch him shred or talk theory, and at other times you’ll end up watching videos he edited in Videoleap by Lightricks, an app that apparently allows him to turn himself into a character from The Polar Express who plays piano amid roiling clouds with an ever-changing number of fingers. Partnering with AI start-ups like Udio and Moises, Rudess highlights their capabilities, such as remixing generated rhythms and stem-splitting. The strange thing about all these apps is how unimpressive they are. Some of the features offered, such as stem-splitting, are certainly useful, but Udio is hardly the first to offer this technology—stem-splitting is available as a free plugin for Audacity, with both AI-based and manual options. As for breaking down chords for songs in real time, yes, it’s a useful feature, but it also takes some of the fun out of learning; guesswork is often what yields the real artistic eureka moments. More crucially, preliminary research from MIT suggests that the use of ChatGPT leads to users’ brain scans exhibiting “weaker neural connectivity and under-engagement of alpha and beta networks”; the study has yet to undergo peer review, but if the same results apply to AI tools for music learning—an area which hasn’t yet been studied—then such tools may actually put young musicians at a disadvantage. We simply can’t be sure of the long-term effects of such radical, emergent technologies, and there may be hidden costs alongside the more immediately tangible benefits.
In an interview with Devin Townsend, Rudess talks about AI as simply the next tool for musicians and posits that “how you use it is up to you as a person.” But the tool he describes sounds an awful lot like cheating:
My goal is to give the machines information about who I am so we can start to get to a point where you’re at home and you’re working on a song and you play something and, y’know, like measure 14 to 18 you can be like, “I don’t know I’m having a bad day or whatever just give me something based on my style.” And to me that’s like the next level tool.
Rudess believes that a sufficiently well-trained neural network could compose something in your style, and that you could then look at what the AI gives you for those measures and choose to reject or adapt it. That in and of itself isn’t an inherently unethical use of such technology, but Rudess’ “how you use it is up to you” absolves him of considering the potential for people to use such technologies to create terabytes of music trained on the work of others and then dishonestly sell it to the market at large. Although Rudess’ promotion of these companies seems to be in good faith, the slightly obsessive preoccupation of a technophilic boomer, he nevertheless seems blinkered to the reality that not everyone utilising these technologies will act responsibly, and that AI contains massive potential for fraudulence.
Rudess isn’t the only influential figure in music to pivot towards tech. Rick Rubin, the music producer/guru, a man proudly untrained in playing, theory, and production, has taken his immunity to learning things and produced a digital book about AI: The Way of Code: The Timeless Art of Vibe Coding. By his own admission, Rubin knows as much about coding as he does about music production. In fact, the genesis of the book is that Rubin heard the phrase “vibe coding”, which he didn’t understand, and then kept seeing a meme of himself associated with it. “Based on” Lao Tzu’s Tao Te Ching and “adapted” by Rick Rubin, the “book” is a lot of aphoristic mumbo-jumbo on the topic of vibe coding—i.e. writing prompts into AI tools to do your coding for you8—illustrated with graphics generated by Claude which the reader can modify, if so inclined.
In an interview on The Ben & Marc Show9 (that’s Marc Andreessen of the aforementioned Techno-Optimist Manifesto and his venture capitalist partner Ben Horowitz), Rubin discusses various aspects of AI in the music industry. He describes AI as just “another tool in the artist’s arsenal”. He says the backlash against AI in art is because “the reason we go to an artist… is for their point of view” but we mistakenly believe that AI art is showing an AI’s point of view. In a moment of very muddled reasoning he states that “the AI doesn’t have a point of view. The AI’s point of view is what you tell it the point of view is to be.” But as we’ve seen, this isn’t true. If art is an accumulation of choices by the artist then handing over the bulk of those choices to a machine trained on the art of others isn’t a creative act at all. It’s true that AI doesn’t have a point of view, but the product it produces based on your prompts isn’t your point of view either; at best, it’s a funhouse mirror reflection of your point of view—a distorted aberration.
Rubin argues that AI in music is a further democratisation of the artistic process; just as the simplicity of punk rock before it allowed anyone with a message to convey it via music, so “vibe coding” is a democratisation of coding. Again, Rubin gets muddled here:
It can make animation that looks like your favourite cartoon and so then you see a million people doing that. That’s one idea, I want to see all the things it could do, to understand what’s possible, instead of just “I’m going to get it to do the same thing that everyone else is getting it to do.”
Rubin wants to see what the people who can “push the boundaries” can do with this technology. And doubtless there are creative, talented people out there who will be able to push what generative AI can do to a higher level; one could become skilled at writing prompts, but that wouldn’t make one an artist. Again, Rubin doesn’t have a solid grasp of what AI is or how it produces anything; he doesn’t understand that all of its outputs are tantamount to theft of existing art, including every album he’s ever worked on. When he says he’s “interested in what AI really can know…based on what is and not what we tell it we think it is”, he once again shows that he doesn’t understand that this isn’t an intelligent machine: it doesn’t know anything, it can’t create, and everything it makes is a regurgitation of content originally rendered by humans.
It’s easy to dismiss Rubin’s views on this topic as the ravings of a spiritual man whose curiosity outweighs his inclination to actually conduct research, but he’s a heavy-hitter in the creative world, and his legitimisation of an ethically dubious technology without questioning the potential harms is a problem. The average user of these LLMs and AI-based programmes isn’t interrogating the industry they emanated from, the aims of the libertarian-inclined tech capitalists who own them, or the potentials for harm that come with the technology. Trusted industry figures like Rubin and Rudess allow the pushers of these technologies to maintain a veneer of respectability, as well as plausible deniability against the various issues that come with them.
Closer to prog, another icon who’s dabbled with AI is Steven Wilson. In December 2024, Wilson released a novelty Christmas track called “December Skies”. All the instruments, the vocals, and the composition itself are real human musicianship; only the lyrics were generated via ChatGPT prompts to give him lyrics in the style of himself. Wilson explained:
It produced a lot, 99 percent of which was pretty awful. It was very generic, very clichéd, very banal, but about one percent it generated I could use. So it was really a question of me going through and picking out, “That’s a good line. That’s shit, that’s shit, that’s shit, that’s shit, that’s shit. Oh, that’s a good line,” and ending up with something that I thought was usable.
Ultimately, Wilson said he wasn’t interested in AI because it produces quite generic results and he was more interested in surprising ideas, adding that AI lacked that human sense of soul: “It’s kind of a reflection of a human being to lots of other human beings and seeing if those other human beings recognize themselves in that mirror.” AI can, at best, only fake that, much as Chiang said. But Wilson recognised that AI is here to stay and it could be a useful tool. After all, he argues:
For the last twenty-five years we’ve had software that can tune a singer that can’t sing in tune (like me). We’ve had software that can make a drummer that can’t play very well in time make them sound in time. We’ve had software that can emulate orchestras going back to the Mellotron. Since the beginning of electricity musicians have had tools that have helped them to make their music sound more polished and more impressive.
Wilson’s analysis seems more cautious than that of AI enthusiasts like Rudess, but he still arrives at the same point: this is just the next weapon in the arsenal. However, there’s a difference between those like Rudess, who want to emphasise the potential AI might have for augmenting the composing process of talented musicians like himself, and those who have no musical talent of their own and just want to cash in by generating soulless slop. Opposition to AI isn’t steeped in technological considerations so much as it’s motivated by concerns of integrity. Understandably, most people have more respect for artists who have honed their craft over years of dedicated practice, who create without AI, and who support fellow artists. Just as AI album art often receives a backlash from fans, so musicians who make extensive use of AI will likely be called out for it, and that pressure can make a difference: under pressure from fans who disliked his incessant promotion of AI, Rudess recently made new social media accounts on Facebook and Instagram to silo the AI content away from all but those interested in following it. That may be a small concession to the Luddites, but it’s a victory nonetheless for prog’s top AI spokesman to recognise the unpopularity of his new obsession.
Fighting Back Against RoboSlop
What we are witnessing from the AI boosters is not much short of a crusade… They are waging a holy war to destroy every threat of their vision of the future, which involves all creative work being wholly owned by a handful of billionaires licensing access to chatbots [and] to media conglomerates to spit up content generated as a result of this. Their foot soldiers are those with petty grievances against artists—people who can create things that they simply cannot—and those who reflexively lean in towards whatever grifters of the day say is the best way to make cash quick.
What we have to realise is that this is a systemic issue. While true believers like Rudess and Rubin claim that this is merely a new tool for musicians to use, the reality is that an entire industrial edifice exists beyond the bedroom artist generating an image via DALL-E for his second-rate instrudjental album. An evangelistic fervour has captured Big Tech, leading the industry towards a profoundly libertarian desire to burn down all regulation in order to ensure maximal profit from the sum total of human culture through an act of wide-scale thievery. The space that has opened up for AI in the digital sphere appeals to a pre-existing tendency to indulge in fantastical views of how the world works, one with massive and terrible implications for our politics, for the environment, and for the concept of truth itself.
We still don’t know the long-term harm of these technologies. Much has been written on the immense environmental impact of AI servers, with projections that they will draw four to six times more freshwater annually than the entire population of Denmark, and that AI data centres in Ireland may account for up to 35% of the nation’s total electricity consumption by 2026. As mentioned earlier, a preliminary study out of MIT showed weaker activity in brain scans of ChatGPT users, and another showed “indicators of addiction”, including “withdrawal symptoms”, when users were cut off from the chatbot. Reports of chatbot use causing or exacerbating mental health issues, including messiah complexes and paranoid delusions, and even leading to suicide, have begun to make headlines; the Turing test now seems less like a measure of the humanness of artificial intelligence and more like a measure of the credulity of the human interlocutor. AI content on social media platforms can have marked political impacts, such as the satirical AI video of a Trump/Netanyahu-conquered Gaza which was shared by the President, and disingenuous actors have begun attempting to discredit their political enemies with AI-generated content. It’s hard not to conclude that we’re living in a deeply stupid time10. In the artistic sphere, AI may be less popular than it is among essay-averse students, people in the midst of mid-life crises, and world leaders, but that doesn’t mean it’s going away. However, some people are coming up with tools to fight back.
Programmers at the University of Chicago have developed the software tools Glaze and Nightshade, which scramble an AI’s ability to interpret images. Specifically, Nightshade “turns any image into a data sample that is unsuitable for model training” by effectively “poisoning” the images, so that neural networks which attempt to read them hallucinate rather than read them accurately. Similarly, independent musician Benn Jordan has pioneered software called Poisonify which he claims works like Glaze and Nightshade but for music; his claims seem a little wild, and the degree to which this is a genuine technology is unclear, but it nevertheless shows that people are fighting back. Of course, technology is an arms race, and when the weapons of war are revealed, defence mechanisms are developed in parallel: forums like the r/DefendingAIArt subreddit monitor the tactics being used against their hallowed generative AI software.
These tools have their limits, and they won’t be able to stop generative AI forever. The greatest tool against feckless billionaires is, as ever, the law. Regulating and restricting the companies that train their AIs on the sum total of human knowledge with total disregard for ethics could and should be a significant priority. If it destroys their derangedly large investments, that will give us all some schadenfreude to take solace in, but the main aim is protection. The UK has held a consultation on AI and Copyright which seeks to support “rights holder’s control of their content and ability to be remunerated for its use”, as well as ways of opting out of having one’s work used to train AI models; however, the consultation also sought to support “the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data”, which smacks of a vested interest.
Nevertheless, work like this and the EU’s Artificial Intelligence Act at least show some recognition of the problems that have emerged from the rapid and unencumbered rollout of AI, as well as an awareness of the need to safeguard against its more immediate harms. When it comes to AI music, the major record labels Warner Music Group, Universal Music Group, and Sony have pursued lawsuits against the AI companies Udio and Suno. As Robert Stringer, CEO of Sony Music, says:
There will be artists, probably there will be young people sitting in bedrooms today, who will end up making the music of tomorrow through AI. But if they use existing content to blend something into something magical, then those original creators have to be fairly compensated. And I think that’s where we are at the moment.
Such lawsuits will likely end in licensing agreements rather than the wholesale crushing of corporate models predicated on theft. While such a conclusion wouldn’t completely address the underlying ethical issues, some regulation of AI companies and recompense for exploited artists is certainly a step in the right direction.
But perhaps the most promising development in the war on AI comes from the headquarters of the companies developing, funding, and marketing it. The industry has become increasingly reluctant to share financial data and user numbers in the wake of its revolutionary tech coup, and for good reason. OpenAI, by far the most successful of the AI start-ups and the parent of both ChatGPT and the DALL-E image generator, lost $5 billion in 2024 but projected potential revenues of $11.6 billion for 2025; revenues, it should be emphasised, are not profits. AI is extraordinarily expensive to run, and the industry runs at a loss because it simply isn’t being adopted at the scale required to make money: OpenAI claims 1.5 billion active monthly users, of whom 15.5 million are paying subscribers, or 1.03%; other estimates suggest the platform actually has more like 600 million active monthly users, which still puts paying customers at an abysmal 2.58%. For a technology that claims to be revolutionary and indispensable, these numbers are catastrophic.
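Those conversion rates are easy to sanity-check from the user counts quoted above (the figures themselves are the reported/estimated ones, not independently verified here):

```python
# Sanity check on the paying-subscriber conversion rates quoted above.
paying = 15.5e6            # reported paying subscribers
claimed_users = 1.5e9      # OpenAI's claimed active monthly users
estimated_users = 600e6    # lower third-party estimate

# Conversion rate = paying subscribers / total active users
print(f"conversion (claimed user base):   {paying / claimed_users:.2%}")
print(f"conversion (estimated user base): {paying / estimated_users:.2%}")
```

Either way you slice it, somewhere between 97% and 99% of users are unwilling to pay anything at all for the product.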
Meanwhile, as AI products are forced on the workplace, many employees refuse to cooperate. A survey by the generative AI platform Writer found that “31% of employees admit to ‘sabotaging’ their company’s AI strategy by refusing to adopt AI tools and applications”, and two-thirds of company executives said adopting generative AI had caused “tension and division” in their companies. People simply don’t like AI, and they aren’t adopting it at the scale the industry expected despite enormous investments and marketing efforts; if this trajectory continues, the AI bubble will pop and cause immense damage to the tech sector. Worse still, the utility of AI may degrade over time rather than improve. As the internet becomes flooded with AI content, LLMs will begin to train on that AI content, which leads to what’s known as model collapse. AI needs an immense quantity of data in order to improve itself, more than we’ve ever generated. When an LLM cannibalises too much AI content, it will “compound its own errors, forget certain words and artifacts that are less present in its training, and eventually cave in on itself.” Some researchers have dubbed this Habsburg AI, after the inbreeding and decline of the European royal dynasty.
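The mechanism behind model collapse can be shown with a deliberately simplified toy sketch (a statistical cartoon, not a real LLM experiment): treat the “model” as nothing more than a fitted bell curve, make each generation train only on samples produced by the previous generation, and watch the diversity of its output wither.

```python
import numpy as np

# Toy illustration of model collapse: each "generation" of the model
# is trained solely on content generated by the previous generation.
# Here the model is just a Gaussian fit; real LLMs are vastly more
# complex, but the same cannibalisation dynamic is at work.
rng = np.random.default_rng(0)

def simulate_collapse(generations=2000, n_samples=100):
    mu, sigma = 0.0, 1.0                          # generation 0: "human" data
    spreads = [sigma]
    for _ in range(generations):
        data = rng.normal(mu, sigma, n_samples)   # model generates content
        mu, sigma = data.mean(), data.std()       # next model trains on it
        spreads.append(sigma)
    return spreads

spreads = simulate_collapse()
print(f"diversity of output: {spreads[0]:.3f} -> {spreads[-1]:.3f}")
```

Each refit slightly underestimates the spread of its training data, and those small losses compound: over many generations the variety of the model’s output shrinks towards nothing, which is the arithmetic of Habsburg AI in miniature.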
While there are some promising moves towards regulation that aim to curtail the most extreme excesses of generative AI and those invested in it, and the sector itself seems to be teetering upon the precipice of failure, it’s down to us as consumers to be informed, aware, and ethical in our choices. We can politely tell our favourite artists that we would like them to avoid using AI content and alert them to the harms therein; we can tell them that we won’t financially support them if they insist on using a technology that hinges upon theft; and we can choose to boycott streaming services that take a relaxed line on AI content or even actively promote it. Consumer choice might not change the world, but it will yield some small victories and leave our consciences clean.
The world of AI is a strange new frontier for digital content creators, and one can’t blame people for being curious enough to experiment. However, the myriad ethical issues, from copyright theft to significant exacerbation of climate change, as well as the single-minded domination of an unregulated and errant tech sector, show that the technology as a whole is deeply flawed. Generating an AI album cover may seem a harmless act, but it tacitly supports a deeply immoral industry gambit predicated on wide-scale data thievery with a surprising range of deeply worrying externalities. If we love music, we must do what we can to defend it from such threats, and though AI may have a limited place within the future of music composition, it behoves us to resist so-called “AI art” and “AI music” as much as we can. So keep away from tech bros, listen to AAI not AI, and make art the good old-fashioned way: with Pro Tools.
Footnotes:
1. Thall. If you think djent isn’t a genre, boy are you gonna be pissed I said thall is a subgenre.
2. Syme also caught flak for seemingly reusing a piece of art from Orion’s album booklet for Parasomnia’s, which, as I said, he may have used AI for anyway.
3. The next release, Djentlemen of Groove, opted for a group of topless women standing around a muscular Latino man in leather shorts and yielded D1G1T4L RU1N a single download. Did the user buy the album because of the sexy AI art, or for such classic hits as “Don’t Cum”, “I’m Gonna Cum”, and “Gawk Gawk 9000”? We may never know.
4. When I first came across the story, they had a mere 300,000 monthly listeners, but the resulting media attention has massively boosted them.
5. It’s best to assume that everything about this piece is beholden to the rule of “at the time of writing.” This article has been a nightmare because the situation is so fluid and evolving. Things keep happening.
6. An Instagram page for The Velvet Sundown was created after the band went viral, populated with obviously AI images, naturally.
7. The full document’s worth a read. It’s a wild ride through the mind of a guy who read a lot of sci-fi and didn’t understand it, and a lot of Ayn Rand and, tragically, did understand it.
8. Vibe coding has nothing to do with music, and Rubin doesn’t seem to understand that because, y’know, he doesn’t do research.
9. I link this for being true to my sources, but for the good of your own health, don’t watch this; it’s interminable nonsense.
10. I have to maintain some sense of objectivity, so I can’t really say that I think every tech CEO should be on trial in The Hague, but it’s a belief that writing this essay has driven me towards.