There are inherent challenges in trying to convey meaningful messages about social or political issues through the medium of electronic music. For all the expressive opportunities afforded by sequencing, synthesis and processing, the medium can sometimes fall short of tackling the finer details of humanity’s complexities. There are of course artists who have taken on the challenge, and some who have succeeded.
On Sam Kidel’s latest release, Silicon Ear, he poses uncomfortable questions about the nature of our hyper-connected, highly surveilled world from two distinct angles. Through a combination of concept and practical production approaches, he playfully pushes back against big data and constructs a device that subverts the omnipresent, state-sanctioned listening through the smart media embedded in our daily lives. Silicon Ear works as an experimental twist on rave’s anarchic roots, but the motivation behind its two pieces makes a strong case for broaching the difficult subjects faced by societies around the world.
“I find it interesting to make music that has a kind of function,” says Kidel. “It’s an unromantic way of making art that refuses the idea it could ever be separate from its social and political conditions.”
Disruption and generation
Kidel’s music first came to light under the name El Kid, folded into the Bristol-based Young Echo collective, as well as through his collaborations with Vessel and Jabu. While these initial forays were mostly accomplished and original approaches to dance music, it was under his own name that he solidified his approach with a greater emphasis on concept. In particular, 2016’s Disruptive Muzak subtly subverted the tradition of easy listening hold music. On one side, the 20-minute ambient piece captured the results as Kidel played his own Muzak abstraction down the phone to unwitting call centre agents, while on the other an instrumental ‘DIY’ version gave his listeners the chance to try the technique out for themselves. For all of its mischievous fun, Disruptive Muzak also posed poignant questions, from the role of ambient music as a coercive tool to the oppressive working environment of call centres and the plight of those working in them.
Following on from the positive reception of Disruptive Muzak, Kidel expanded this line of enquiry with a conference at Oxford Brookes University called The Politics Of Ambience, which welcomed a variety of speakers including David Toop, as well as Janna Graham and Chris Jones of the Ultra-Red art activist collective. He also brought Disruptive Muzak into the live setting with the Customer Service Agent installation, presented in London with Dr Jamie Woodcock and solo elsewhere.
While his ideas and intentions with his art are at the forefront of any discussion around his work, Kidel admits the majority of his creative process is spent working with sounds that appeal to him.
“Probably 70 percent of the time I spend producing is just experimenting with sounds without concept,” he reveals. “I have hundreds of experiments I've done in Ableton in a folder, and most of those I don't do anything with. Then later on I might have an idea, and the idea might be kind of abstract, but I might think, 'oh, actually that experiment I did fits with this idea.’”
Amongst his experiments with various aspects of music production, Kidel has delved into algorithmic pattern generation, partly as a shortcut to get to the synthesis part of the creative process after getting home from work (he currently teaches music production at BIMM). The algorithmic patches he created using a mixture of native Live and Max for Live devices became the springboards for the two pieces featured on Silicon Ear.
Dance 'til the cyber police come
The A side of Silicon Ear, “Live @ Google Data Center,” found genesis in a number of different places for Kidel, including an invitation to contribute to EBM(T), a kind of virtual gallery run by Nile Koetting and Nozomu Matsumoto. Taken with the concept of “a fake room online,” Kidel was also thinking about the potential within simulating real spaces using reverb.
“I use reverb a lot as every electronic music producer does,” he says, “and the reverb is often just a kind of textural thing, but actually reverb came about as a way of simulating spaces. So I was thinking about whether it would be interesting to make something meaningful based off that simulation of space. I had the idea in the abstract that I wanted to do a fake live performance in a space that I couldn't get access to, just as a playful gesture.”
Composite image of Google’s Data Center with visualizations of the sound reverberating through the modelled room in RaySpace
While the idea was gestating, Google released photos of their data centre in Iowa, which gave Kidel the target for his simulated trespass. “There's that weird thing of there being probably loads of really intimate secrets about you in those data centres, but most of us will never be able to set foot in them,” he points out.
To achieve his aim of sonically simulating the cavernous warehouse of servers, cable runs and pipework, Kidel used an acoustic room simulator by QuikQuak called RaySpace. Primarily used in video game soundtrack design and cinema, the software allows you to draw in the space you wish to model, and creates the reverb accordingly.
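RaySpace derives its reverb from the room you draw; the same basic idea, turning a description of a space into a reverberant signal, can be illustrated with plain convolution. The following Python snippet is purely illustrative (the function names, the noise-based decay model and the NumPy-only approach are my assumptions, not how RaySpace works internally): it builds a synthetic impulse response whose decay time stands in for the modelled room, then convolves the dry signal with it.

```python
import numpy as np

def synthetic_impulse_response(sr=44100, rt60=4.0, seed=0):
    """Exponentially decaying noise as a crude stand-in for a large
    hall's impulse response; rt60 is the time for a 60 dB decay."""
    n = int(sr * rt60)
    t = np.arange(n) / sr
    decay = 10 ** (-3 * t / rt60)  # amplitude drops 60 dB over rt60 seconds
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n) * decay

def reverberate(dry, ir, wet=0.5):
    """Convolve the dry signal with the impulse response,
    normalize the wet tail, and blend wet and dry."""
    wet_sig = np.convolve(dry, ir)[: len(dry)]
    peak = np.max(np.abs(wet_sig))
    if peak > 0:
        wet_sig = wet_sig / peak
    return (1 - wet) * dry + wet * wet_sig
```

A longer rt60 suggests a larger, more reflective space, which is the one knob this toy model shares with a real room simulator; RaySpace itself works from the drawn geometry rather than a single decay number.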
In terms of the musical content of “Live @ Google Data Center,” Kidel again turned to his previous exploration of generative MIDI to achieve his desired effect, with around 90 percent of the sounds on the piece triggered by algorithm. “The processes were inspired by reverse engineering Keith Fullerton Whitman's Generators records, which are a huge influence on me,” Kidel explains. “Listening to them and watching live performances, I managed to figure out that a kind of clock generated the notes and an LFO controlled the pitch of those notes. You can hear the notes going up or down in either a triangle, sine wave or rectangular pattern.”
Keith Fullerton Whitman’s Generator pieces – inspiration for Sam Kidel
Kidel replicated some of the functions of Fullerton Whitman’s custom-built modular synthesizer in Live with a note generator, a Pitch device controlled by an LFO to move the pitch and a set of the randomized Velocity devices he also used on the Voice Recognition DoS Patch. Depending on the instrument channel and the desired effect, the notes might be generated by an Arpeggiator on hold, a very short looping clip with many very short C3 notes in it, or a basic Max for Live device he built that sends out C3 notes on a clock pulse. These sets of devices then control the different instruments and sounds Kidel wished to use for the piece.
“Like the Keith Fullerton Whitman instrument, it's actually quite simple in terms of what's happening, but it gives the capability to make something that evolves differently every time it plays,” he says.
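Stripped of the modular hardware, the clock-and-LFO scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration rather than Kidel's actual patch (the function names, ranges and probabilities are all my own assumptions): a clock fires on every step, an LFO of a chosen shape sweeps the pitch between two bounds, and random gating stands in for the Velocity devices thinning the note stream.

```python
import math
import random

def lfo(shape, phase):
    """Return an LFO value in [0, 1] for a phase in [0, 1)."""
    if shape == "sine":
        return 0.5 + 0.5 * math.sin(2 * math.pi * phase)
    if shape == "triangle":
        return 1 - abs(2 * phase - 1)
    if shape == "square":
        return 1.0 if phase < 0.5 else 0.0
    raise ValueError(shape)

def generate_notes(steps=32, lfo_rate=0.1, shape="triangle",
                   low=48, high=72, drop_chance=0.3, seed=1):
    """Clock fires on every step; the LFO sweeps the pitch;
    random gating drops some notes, like randomized Velocity
    devices filtering the stream."""
    rng = random.Random(seed)
    notes = []
    for step in range(steps):
        if rng.random() < drop_chance:
            continue  # this note was gated out
        phase = (step * lfo_rate) % 1.0
        pitch = low + round(lfo(shape, phase) * (high - low))
        notes.append(pitch)
    return notes
```

Because only the gating is random, the same simple machinery produces a line that rises and falls with the LFO shape yet plays out differently on every pass, which is the quality Kidel points to in Fullerton Whitman's instrument.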
Disabling the silicon ear
In 2018, Kidel was invited to participate in Eavesdropping, a programme of surveillance-themed events curated by the Liquid Architecture festival in Melbourne, Australia, primarily on the strength of Disruptive Muzak.
“There's a couple of different ways eavesdropping is happening in Disruptive Muzak,” Kidel explains. “When you’re listening to [the piece] you hear these phone calls, but also, all of those calls would have also been recorded by the call centres… They might have been replayed in meetings to discuss what happened, and they certainly would have been kept in an archive of all of the calls the call centre had ever made.”
“People do have a sense that they're being listened to all the time,” Kidel suggests, “and I think that's new. I think that contributes to this sense of ambience. I think there's a relationship between ambience and surveillance that [Liquid Architecture] were interested in and wanted me to come and talk about.”
Sam Kidel performing Disruptive Muzak in 2017
Alongside a presentation of Disruptive Muzak, Kidel was also invited to create a new project related to the Eavesdropping programme. Before he travelled to Australia, he had a moment of inspiration watching a scene in the German spy thriller The Lives Of Others. A character who suspects his apartment has been bugged puts on a record and cranks up the volume to mask the conversation he is having. While not suggesting a specific idea straight away, the scene planted a seed. While in Melbourne, Kidel attended talks and had further conversations with Sean Dockray, who was conducting experiments and research into voice synthesis and voice recognition, particularly in ‘smart speakers’ like the Amazon Alexa. As he discovered, voice synthesis and recognition are both built around similar ‘deep learning’ technologies, and his research led him to what is known as the ‘cocktail party problem.’ While the relative sophistication of the human ear can pick out one voice from a crowd in many situations, even if the voice is quieter than others, voice recognition struggles to achieve the same results. Researchers are using deep learning AI to try and circumvent the shortcoming, with limited success.
“It creates this interesting situation where voice recognition only works if it can clearly hear one voice,” Kidel explains, “and so I started playing with this idea of voice masks, which are recordings you can use, a bit like that piece of music is used in The Lives Of Others, to mask speech.”
The NSA leaks were a rich source of information pertaining to the topic of voice recognition, but of equal inspiration to Kidel as he developed the idea for a ‘speech mask’ was the hacker tool known as the Low Orbit Ion Cannon, an open source application which can crash a website or slow it down by firing a stream of fake requests at it (otherwise known as a denial-of-service or DoS attack).
“I was working with bits of voice,” he says, “and I thought, ‘OK, more effective than just having layered up people speaking would be to have loads of decoy bits of speech deployed extremely quickly, because that would make the voice recognition software try to address each one of those independently, and it would be really struggling to identify any speech at all.’”
There are voice recognition applications available online, including the voice-to-text function in Google Docs and the more sophisticated IBM Watson. They aim to pick out individual voices based on pitch, but these publicly accessible services have their limitations, especially if being bombarded with a whole host of voices at once. Bearing this in mind, Kidel’s research led him towards working with phonemes, the smallest units of language, and triggering them rapidly at various different pitches to send voice recognition software into a tailspin as it tries to identify each speaker. The idea was that if such an array of sounds were played while you were speaking, the software would fail to identify any of the content in the conversation.
The basic iteration of his Voice Recognition DoS Patch consists of Vocalese, a Max for Live plug-in from Cycling ’74’s Pluggo collection that works as a phoneme-playback synth. To control the behaviour of Vocalese, Kidel drew upon his aforementioned experiments with algorithmic MIDI generation to create the kind of energetic flurries of sound he was after. The device chain starts with a randomized Arpeggiator, followed by a trio of Velocity devices set up to randomly filter out some of the notes from the Arpeggiator for added degrees of unpredictability. Vocalese plays different phonemes at the different notes triggered from this set of MIDI tools, with a Max for Live LFO to randomize the speed, and hence the pitch, of the phonemes.
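The logic of that device chain can be approximated outside Live. In this hedged Python sketch (the phoneme list, gate probabilities and function names are all my own assumptions, not Kidel's patch), a randomized arpeggiator emits notes, three chained probability gates drop some of them, and each surviving note triggers a phoneme at a randomized playback rate.

```python
import random

PHONEMES = ["aa", "ee", "ih", "oh", "oo", "k", "t", "s"]  # illustrative subset

def phoneme_burst(n_events=200, note_range=(36, 84),
                  gate_probs=(0.7, 0.7, 0.7), seed=7):
    """Sketch of the DoS chain: a randomized arpeggiator emits notes,
    three chained probability gates thin the stream (like the Velocity
    devices), and surviving notes trigger phonemes whose playback rate,
    and hence pitch, is randomly jittered."""
    rng = random.Random(seed)
    events = []
    for step in range(n_events):
        note = rng.randint(*note_range)           # randomized arpeggiator
        if not all(rng.random() < p for p in gate_probs):
            continue                              # a gate dropped this note
        phoneme = PHONEMES[note % len(PHONEMES)]  # note selects the phoneme
        rate = 2 ** rng.uniform(-1, 1)            # up to an octave faster/slower
        events.append((step, phoneme, rate))
    return events
```

The point of the chained gates is that the density and content of the decoy speech never repeats, so recognition software cannot lock onto a stable second speaker.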
At that stage, there is still only one phoneme being played at a time, and so to create further complexity Kidel added IM-Freezer [from the IRCAMAX 2 Pack] to his patch. The sampler-like effect, designed by Ircam, has a buffer that records the signal from the Vocalese and then plays it back over the top of the original signal at different speeds.
“[The IM-Freezer] uses granular synthesis and FFT resynthesis,” Kidel explains, “and that means you can play back the audio buffer extremely slowly or extremely quickly and not get the same kind of audio distortions you get with a conventional sampler. There are different kinds of distortions that sound very metallic and strange. This piece of audio is so much about things that are happening right now, it felt appropriate to use processing techniques that made it sound so contemporary that it feels a bit alien. The IRCAM device can’t be distributed for free so I’ve replaced it with a couple of other devices that process sound in a similar way.”
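IM-Freezer's granular and FFT resynthesis is well beyond a few lines of code, but the underlying record-and-replay move can be shown with a much cruder substitute: linear-interpolation varispeed, which (unlike IM-Freezer) shifts pitch along with speed and produces exactly the conventional-sampler artefacts Kidel describes avoiding. Everything below is an illustrative assumption, not the device's algorithm.

```python
import numpy as np

def varispeed(buf, speed):
    """Resample a recorded buffer by linear interpolation: a crude
    varispeed that ties pitch to playback speed."""
    n_out = int(len(buf) / speed)
    positions = np.arange(n_out) * speed
    i = positions.astype(int)
    frac = positions - i
    i = np.clip(i, 0, len(buf) - 2)  # keep i + 1 in range at the tail
    return buf[i] * (1 - frac) + buf[i + 1] * frac

def freeze_layer(signal, speed=0.25, mix=0.5):
    """Record the incoming signal into a buffer, replay it at a
    different speed, and layer it over the top of the original."""
    replay = varispeed(signal, speed)
    out = np.asarray(signal, dtype=float).copy()
    n = min(len(out), len(replay))
    out[:n] = (1 - mix) * out[:n] + mix * replay[:n]
    return out
```

Played back very slowly or very quickly, this naive approach smears or aliases the audio; granular and FFT resynthesis instead decouple time from pitch, which is what gives the processed phonemes their metallic, alien character.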
Kidel tested each version of his patch using IBM Watson, finding that the volume of the patch needed to be matched with the volume of his voice, and that it worked best when the phonemes were at a similar pitch to his own voice. When it came to presenting it as a piece of music on Silicon Ear, he triggered the MIDI notes on the patch a little slower at the start and worked some keys and percussion into the piece to make it more of a musical listening experience.
“I guess that's the balancing act between this being a piece of music and the recording of a piece of software doing a thing,” Kidel muses. “It is a piece of music - you're supposed to listen to it, and I guess the functionality is, more than anything, to ask a set of questions about modern surveillance rather than to provide something that people could use. They could use it, but that's not my top priority.”
The bigger issues behind the music
Kidel frames the simulated live performance on “Live @ Google Data Center” as “chamber music meets free-party-scene warehouse-invasion,” eschewing the simplistic novelty of creating a banging hard techno track to make it sound as though an actual rave had taken place. Instead you’re treated to an ear-pleasing dichotomy between delicate keys, melodious pads and punchy, metallic hits that seek to reflect our own complicated relationship with personal data and our digital gatekeepers.
“There's some kind of tension in the music and that reflected a bit of a tension in my reaction to this photograph of the data centre,” says Kidel. “Actually there was something really seductive about that photograph. It looked like a cooler version of the inside of the Death Star. I think there's something of that ambiguity in a lot of people's relationship to these new technologies. There's something really fascinating and seductive about them, and at the same time something really terrifying, and those two feelings are quite difficult to parse. I've tried to represent that a bit in the sound.”
“I think a lot of people feel a sense of hopelessness and inevitability in terms of increasing surveillance and data capture,” he adds. “That sense of inevitability is really difficult to shake, and it's really demobilizing. Part of the reason I'm interested in both sides of this record, “Live @ Google Data Center” and the voice recognition hack, is that it starts to play with that and indicates there are some possibilities for shaking up how we relate to these changes. Even if in the case of “Live @ Google Data Center,” it's a fake gig, to me it feels like it slightly changes my relationship to the existence of these spaces.”
The imagined breach of Google’s security may be more effective on an emotional or intellectual level, but the Voice Recognition DoS Patch is a practical, tangible tool. Its application may be limited (it would only work as a masking device when manually played at the correct volume alongside a conversation), but it confronts one of the pressing issues around voice recognition. From evidence gleaned in the NSA leaks, Kidel points out that while the joke image of the FBI agent trawling through endless banal discussions is inaccurate, what voice recognition actually does is scan audio for specific keywords; if enough of those words are detected, the audio can then be passed to an actual person for examination. If correctly deployed, the voice recognition hack would stop these keywords from being decipherable.
One of the prevalent arguments in support of mass surveillance is the well-worn adage, ‘if you’ve nothing to hide, you’ve nothing to fear.’ What Kidel points out is that tools like voice recognition are already used to reinforce existing power inequalities within society, with disturbing consequences.
“It’s probably not a big deal for me, a white man, to be recorded saying, ‘UK and US military activities in the Middle East are a form of imperialism,’” says Kidel. “But for a person who is black, and a Muslim, that statement would be very risky. We know that [UK counter-terrorism strategy] Prevent disproportionately surveils Muslims, and there was a recent report that came out that found that they were much more likely to censor themselves in all sorts of contexts, online and offline, as a result of this surveillance.”
Beyond the immediate scenario of mass data capture through social media and voice recognition (to name a few obvious examples), even more ominous technological developments are on the near horizon. Private health insurers in the US already provide incentives to people who track their health with Fitbit devices – the potential to exacerbate existing inequalities in society with regards to access to healthcare is plain to see.
Among the presentations Kidel attended at the Eavesdropping conference in Melbourne was one by Glenn Dickins, Architect of Convergence at Dolby Laboratories in Sydney, whose job requires him to come up with innovations to increase the sound technology company’s market share. His pitch was that, with the proliferation of smart devices from phones to televisions to toasters and beyond, Dolby would aim to put microphones into all of them, and, more radically, that these microphones would always be on and accessible to anyone anywhere in the world.
“He pitched his idea that everyone will have access to all of the microphones in the world as a kind of utopia, but it felt very dystopian,” says Kidel. “I guess he was assuming that makes it sound more egalitarian and benign. What we're starting to see is there will be such an expansion of data capture in everyday life that everything will be measured and quantified, and that's really terrifying.”
With the rate at which these ominous technological developments take shape, Kidel is not likely to be short of fuel for his socially conscious creativity any time soon. While he recognises the limited reach of his work in the field of leftfield electronic music, it’s also apparent he has a knack for communicating some complex or troubling ideas in engaging ways. At present, he is retraining as a researcher to take his work into other fields beyond music, although he also speaks optimistically about the two disciplines complementing each other further down the line. For now, his music serves a vital role in challenging some of the more sinister aspects of modern human existence, combining concept with application in inventive, compelling ways.
“It's just asking a set of questions,” he states, “and spreading the desire to fuck with these technological systems, to break them.”
Follow Sam Kidel on Twitter
Photo of Sam Kidel performing by Jakób Skrzypski / Lausanne Underground Film + Music Festival