As the digital world allows creative practices to merge more seamlessly, visuals have become an important strand of music, especially in its live presentation. We’re now in an era where you can easily find an underground festival devoted solely to audiovisual, or A/V, performances – such as LEV Festival in Spain – and the cross-platform pair-up has long been standard practice for large-scale music events. Just look at clips from stadium shows by Kanye West, Taylor Swift, or Katy Perry. Any musician who reaches audiences of a certain size will eventually face the question of A/V accompaniment, regardless of whether visual presentation has been central to their work.
While a few musicians are also visual artists and create their own visual element, the vast majority of A/V performances are collaborations. Though working with an artist from another discipline can be challenging, it has the potential to augment and even transform a performance in many ways. At the A/V Interchange panel discussion at last year’s Loop event, we asked three artists working in visuals for music about some of their artistic goals going into such collaborations.
Visual Representation of the Music
A direct visual representation of the music – via waveforms, timed responses, shapes or colors tied to particular sound, or otherwise – is one of the simplest ways to approach A/V, and one crew that has regularly used this method is Raster-Noton. Each of the three founders – Carsten Nicolai/alva noto, Olaf Bender/Byetone, and Frank Bretschneider – makes their own visuals, and while each has their own style, they tend to share a common geometric design language, which is almost always synched to the music.
As Frank Bretschneider tells me via email, “I don't have a concept, but I avoid illustration, I don't want to tell a story. So I use more or less abstract forms – simple shapes, geometrical patterns – rather in an animation style than cinematic. Like the status lights on electronic sound equipment, representing the parameters of a music piece: flashing and moving bars, dots, lines and numbers. Generally speaking, I love to have the visuals connected to the music, synchronized and tight. In the best case, it should represent the sound on the visual level. For different projects I use different methods. For "Kippschwingungen" I used the particle emitter of Modul8, just reacting to the sound, with each frequency band (hi/mid/lo) having their respective colors, red/green/blue. It always moves along the pulse of the music, but I can change the size, place and speed of the objects. For "Super.Trigger" I just use MIDI, coming from the sequencer part of my Octatrack. It triggers the internal sounds of the Octatrack audio as well as the graphic layers in Modul8. The layers contain sets of simple black & white shapes, like circles, squares, and lines. As soon as a certain sound is triggered, a corresponding pattern appears. So whatever I play, the visuals follow.”
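The trigger logic Bretschneider describes – a MIDI note from the Octatrack’s sequencer showing a matching shape for as long as it sounds – can be sketched in a few lines. The note numbers and layer names below are hypothetical stand-ins, not his actual Modul8 mapping:

```python
# Hypothetical note-to-layer mapping; Bretschneider's real Modul8 setup
# assigns its own sounds and shapes.
NOTE_TO_LAYER = {36: "circle", 38: "square", 42: "line"}

def process_midi(messages, active=None):
    """Track which visual layers are visible, given (type, note) MIDI events.

    `messages` is a list of tuples like ("note_on", 36) or ("note_off", 36);
    a note_on shows the mapped layer, a note_off hides it again, so whatever
    is played, the visuals follow.
    """
    active = set() if active is None else active
    for msg_type, note in messages:
        layer = NOTE_TO_LAYER.get(note)
        if layer is None:
            continue  # unmapped note: no visual change
        if msg_type == "note_on":
            active.add(layer)
        elif msg_type == "note_off":
            active.discard(layer)
    return active
```

In a live rig the message list would come from a MIDI input port rather than a Python list, but the one-to-one sound-to-shape correspondence is the same.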
In another, more conceptual example, Alva Noto’s “Uni Acronym”, the video flickers between recognizable brand logos and iconic imagery, lingering on one whenever vocalist Anne-James Chaton’s deadpan recitation of letter clusters corresponds to it. The result is a formally consistent, stand-alone artwork tinged with wry commentary on consumerism.
Even more reduced and literal, artists Luisa Pereira and Manuela Donoso’s Harmonic Series showcases the harmonic patterns, or Lissajous figures, of major, minor, and diminished chords, rendering the curves they trace as 3D-printed sculptures.
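For readers curious about the underlying math: a Lissajous figure plots sine waves of different frequencies against one another, so a chord expressed as frequency ratios traces a closed curve. A minimal sketch, using the just-intonation ratio 4:5:6 of a major triad as one axis per tone (illustrative, not the Harmonic Series project’s actual code):

```python
import numpy as np

def lissajous_3d(ratios=(4, 5, 6), n=2000):
    """Trace the 3D Lissajous curve of a chord given as frequency ratios.

    A just-intonation major triad has ratios 4:5:6; each axis oscillates at
    one frequency, so the closed curve is the 'shape' of the chord's
    harmonic relationship. Returns an (n, 3) array of curve points.
    """
    t = np.linspace(0.0, 1.0, n)  # one full period for integer ratios
    x = np.sin(2 * np.pi * ratios[0] * t)
    y = np.sin(2 * np.pi * ratios[1] * t)
    z = np.sin(2 * np.pi * ratios[2] * t)
    return np.stack([x, y, z], axis=1)

major = lissajous_3d((4, 5, 6))  # consonant ratios give a simple closed figure
```

Dissonant intervals need larger integer ratios, which is why their figures look more tangled – the visual counterpart of what the ear hears.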
Immersive Environments
Immersive environments based on the music often take the form of mesmerizing visual patterns – like Robert Henke’s laser-guided, synched Lumiere performances – or thematic riffing – such as Akihiko Taniguchi’s digital-desktop re-imaginings for Holly Herndon’s “Chorus”. Such A/V experiences are often realized in installation form.
The Nonotak duo of Noemi Schipfer and Takami Nakamoto are not the only artists who create immersive environments from a simple list of ingredients – theirs being white LED lights and music – but they do it particularly well, having created dozens of inventive installations with little else.
Installations are heavily reliant on site-specificity, and some work to highlight aspects of a particular space. Romain Tardy has done a number of projection-mapping projects with ANTIVJ – a self-described “visual label” that has worked closely with musicians, creating immersive video for the likes of Murcof. His O (Omicron) installation with musician Thomas Vaquié operates alongside existing architecture to achieve just this, and his piece The Ark, with Squeaky Lobster, accomplishes a similar effect within a more natural landscape. As he clarifies, “I’m more interested in a specific context rather than just a physical site or building. By context, I mean everything which is part of the experience you have in a given place and at a given time: it goes from the ambient temperature, the language spoken around you as well as the social context, to the building itself – both as an architecture and as a piece of history. I like to be surrounded by things I don’t know and/or I don’t understand – it can be a very overwhelming feeling sometimes, but I tend to think that habit and comfort are not the best allies when it comes to creating something interesting.
“If I take a little step back from this practice of projection mapping, I’d say that technology is both what made it possible, and what is limiting. On one hand, the fact that digital projectors have become more and more accessible over the past 20 years has given visual artists a whole lot of new opportunities for large-scale image projects. On the other hand, I would also put this thing into perspective as the emergence of projection mapping is not only due to technical innovation, but also — and I would say essentially — to conceptual innovation. Projection mapping is nothing more than taking the projector outside of the projection room, and using three dimensional objects as a projection canvas instead of a screen, which, on a technical level, is nothing extraordinary. It’s also amazing to realize that this action of taking the projector outside, and expanding the picture outside of the flat canvas, is another loop in the history of art: from the first immersive visual environments such as the Sistine Chapel (or, if we want to go waaaay back in time, the Lascaux cave!), to more contemporary painters such as Ellsworth Kelly with his angled canvases, it seems that connecting a flat picture to the 3rd dimension has always been a recurring research theme since art exists. The concept of AR [augmented reality] is not really new, if you look at it from this angle.
“I think being aware of the latest technical evolutions/tools is great, as it also strongly connects you to your time (and of course, it’s also part of your responsibility as an artist to have at least a little idea of how you want to realize a project technically), but giving it too much importance will make you miss most of the really interesting things you could say – not from a machine perspective but from a human perspective. This is something very important to keep in mind, especially when you spend most of your time on a computer to create a piece. This is also why I love working on site-specific projects: you cannot get rid of the physical challenges and difficulties, but also the great source of pleasure and inspiration that the physical world is.”
As ubiquitous as digital software is, A/V can still be accomplished the old-fashioned way, as Paul Clipson’s live collaborations with the likes of Grouper, Lawrence English, and Jefre Cantu-Ledesma prove. His use of 16mm film and projectors means his imagery can create an immediate environment in grain alone.
Narrative
If the relationship between electronic music and science fiction feels like a natural affinity, the imagery around the music can likewise lend itself to imaginative speculation. Sometimes the music is already built around a particular narrative – think Drexciya – setting the tone for the visuals, but often the visuals create their own story. That’s certainly the case for Tarik Barri’s Continuum project with Paul Jebanasam.
Judging from the artwork and track titles, the musical concept behind Kode9’s album Nothing was central to its live presentation via his Nøtel A/V performances with Lawrence Lek. Their drone’s journey through an abandoned luxury hotel of the future is a sci-fi story with a fully realized subject, setting, and – if you’ve seen the show – resolution.
Spectacle
While it isn’t subtle, spectacle is effective, stunning an audience’s senses and leaving a big impression. Often, video isn’t even necessary, as lights, especially strobes, can accomplish this on their own. And as lights get more technologically advanced, it becomes easier to do interesting things with them.
MFO is a lighting and video artist who has had several high-profile collaborations with musicians. While he uses both tools in his repertoire, his work on Ben Frost’s Aurora tour showed that you can create quite a big impact with relatively simple ideas.
Dejha Ti is an installation artist, and her team-up with multi-instrumentalist and electronic musician Rick Feds is synched in real time with the kind of total responsiveness that approaches visual representation of the music. However, Ti’s use of scale – an important consideration for installation work – including lights that run along the length of the room as well as on the screen in front, tips this over into reactive spectacle.
As Feds explains, “Every single pad from the Push sends a MIDI signal to Ableton and also to Resolume (sometimes I use a different program), and is assigned to a certain image or lights. There are short sounds, longer sounds and same with the visuals. As long as I press down the pad the visuals or lights will go on. We tried to get as close as we could [to a one-to-one correspondence]. We worked in different time zones, exchanging ideas and samples, and then for a week 20 hours per day prior to the show, and tried to get every image to correspond to the visuals. Dejha’s imagery for this project was mostly triangles, so we had to pick the right ones to match the sounds. This was the first step and it wasn't easy, but it’s only the tip of our iceberg.
“The hardest part about working with a visual artist is the actual performance. I don't have loops playing, long samples, none of that. Every single thing I'm playing with my fingers, so I have to remember all that – but it also has to make sense visually and I have to remember all the imagery! Not to mention we used 7.1 surround sound and some of the pads I panned manually. This track is pretty percussive, and as I come from a jazz background, I like to improvise a little which sets a new vibe and energy to the performance. However, by doing that the visuals got a bit chaotic, so we had to overcome a big obstacle for me to play much less. So, it was definitely a challenge for me.”
Experimentation
As Sougwen Chung pointed out during the Loop discussion, sometimes artists aim merely to try something new. Especially in the realm of process-based works, experimentation can add interesting layers to a performance. Visual artist Harm van den Dorpel’s algorithmically produced visuals for Lexachast (music provided by Bill Kouligas and Amnesia Scanner) change each time you refresh the page. The collaboration has also appeared at various festivals, with van den Dorpel helming live visuals: “My algorithms filter random NSFW imagery from across the internet, generating graphic live-streaming visuals that fade into each other, sort of a Ken Burns effect on steroids,” he reveals. “The filtering process was done with the open_nsfw classification model and word2vec tag cloud analysis. From this massive pile of image data, I live 'curate' the output and mix it with self-programmed, MIDI-controlled OpenGL software. This has somehow a quite dystopian effect. If you, for example, mix all colors together as paint, the result is this particular dark brown/grey color. Also, I used to cycle daily past a dump as a child, and it struck me that if you put all trash together, all of their smells combined create one quite specific and particular smell. This also happens with Lexachast. When 'all' images are put together, from this multitude and randomness, somehow a 'one-ness' emerges, or unity. A particular feeling.”
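The filtering step van den Dorpel describes – classify, threshold, then hand-curate – can be imagined as a simple scoring pipeline. In this sketch, `score_nsfw` is a dummy stand-in for the real open_nsfw model (a classifier that outputs a 0–1 probability per image); none of this is his actual code:

```python
def score_nsfw(image):
    """Hypothetical stand-in: a real pipeline would run the open_nsfw
    classifier here and return its probability for this image."""
    return image.get("nsfw_score", 0.0)

def filter_stream(images, threshold=0.8):
    """Keep only the images the classifier scores at or above `threshold`.

    The survivors would then be hand-curated live and crossfaded into the
    OpenGL mix.
    """
    return [img for img in images if score_nsfw(img) >= threshold]
```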
In a live setting, Daito Manabe has worked with artists ranging from Nosaj Thing to Björk, often manipulating data in real-time. Likewise, Weirdcore’s work for Aphex Twin regularly involves using live footage from a drone flying over the audience.
Lillevan is a celebrated video artist, well-known for his distinct style and his ability to evoke emotional depth with his images, however abstract. Having worked in the field for at least 20 years, he has employed all sorts of styles – including at least one performance with Vladislav Delay that was Lillevan’s version of Andy Warhol’s Screen Tests – as well as improvisation.
An A/V duo melding the digital with the analog, animator/visual artist Reuben Sutherland and musician/producer Dan Hayhurst’s work as Sculpture began as an experiment. While they had been neighbors for about a year and were familiar with each other’s work, their first performance together was unrehearsed and neither knew what the other would do. As Hayhurst explained in a 2014 interview, Sutherland works primarily with an “animation technique using discs like a phenakistoscope and a video camera.” He continues to detail how there’s still a digital element to the video, just as there’s an analog element to the music: “We use techniques of spontaneous rearrangement, in Reuben's case he literally has a 'library' of hundreds of animations on printed cards. They're being generated in Photoshop and After Effects and transferred to this tactile medium. I'm combining physical cut-ups with analogue tape and hardware instrumentation (samplers / CDJ deck) with digital techniques for disintegrating and reconfiguring audio, for which I'm using Live pretty much entirely.”
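The phenakistoscope technique relies on a simple timing relationship: the disc must rotate so that each camera frame lands on the next printed image, which the camera then fuses into motion. A rough sketch of the arithmetic (the numbers are illustrative of the principle, not Sutherland’s actual rig):

```python
def disc_speed(images_per_disc, camera_fps=25.0, step=1):
    """Rotation speed, in revolutions per second, at which a spinning
    animation disc appears to move smoothly on camera: each captured
    video frame should advance the disc by `step` printed images."""
    return step * camera_fps / images_per_disc

# e.g. a disc printed with 12 images, filmed at 24 fps, should spin at
# 24/12 = 2 revolutions per second to advance one image per frame
```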
As Merkaba Macabre, Steven McInerney is another artist using actual film, which he subjects to a “decomposition” process. The film also triggers his music hardware – the opposite direction to the way Bretschneider and Feds/Ti work (where the music triggers the visuals). As McInerney divulges via email, “Decomposition is an ever-evolving, camera-less audio-visual performance for one 16mm projector with optical sound and live modular soundtrack. The color negative stock used in the film has undergone a series of organic and chemical decomposition techniques. A color print is made of the results and while projected, the sound output of the disrupted film emulsion is sent into the preamp of a modular synthesiser converting the film print into control voltage. The contours, shapes and color of the decomposed celluloid is translated live into sound synthesis. Triggers, gates and envelopes are created from the film print and the parameters of the modular are played live. Each live show is a recycled, re-decomposed version of itself where the film print and subsequent inter-negatives go through the same decomposing process and reprinted so no show is ever the same. The decomposition process has ranged from putting the color negative into food waste, giving it a cellular-type quality. This is because of the bacteria feeding off the gelatin in the layers of film. I also use boiling, which caused reticulated effects, and also freezing along with organic chemical compounds. I try not to touch the film as I do not want to disrupt the natural process. I have lost a lot of film in the process, but it has been an interesting learning curve.”
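The film-to-voltage chain McInerney describes – optical soundtrack into a preamp, then into triggers and gates – is essentially an envelope follower feeding a comparator. A minimal digital sketch of that idea; the threshold and smoothing values are illustrative, not his actual settings:

```python
import numpy as np

def gates_from_audio(signal, rate=44100, threshold=0.2, smooth_ms=10.0):
    """Turn an audio signal into a gate the way an envelope follower feeding
    a comparator on a modular synth would: rectify, smooth, then threshold."""
    rectified = np.abs(np.asarray(signal, dtype=float))
    # one-pole lowpass coefficient for the given smoothing time constant
    alpha = 1.0 - np.exp(-1.0 / (rate * smooth_ms / 1000.0))
    envelope = np.empty_like(rectified)
    level = 0.0
    for i, sample in enumerate(rectified):
        level += alpha * (sample - level)  # simple envelope follower
        envelope[i] = level
    return envelope > threshold  # True wherever the gate would be "high"
```

On hardware the same job is done in analog circuitry, with the gate output patched to clock envelopes and sequencers, but the rectify–smooth–compare logic is identical.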
As Tarik Barri and Jem the Misfit say, ideally visuals can accomplish many of these things at once. Certainly, the line between immersive environments and spectacle, for instance, can be a thin one. And nothing pushes an art form forward like experimentation. What are some of the other ways visuals can be used in conjunction with music? How can they affect your sound?
Text: Lisa Blanning
Photo credit: Udo Siegfriedt