Kaitlyn Aurelia Smith creates collages of sound that burst like colors from a canvas. Her cinematic music is inspired by minimalist composers like Terry Riley and by visual artists such as Moebius and Hayao Miyazaki. While the sound could be classified as neo-classical or ambient, there is a texture and palette that are unique to Smith.
Smith is a Berklee-trained musician who found modular synthesis almost by accident through a neighbor who “happened to have a bunch of Buchla synthesizers that he had bought in the 1970s.” She deftly combines her knowledge of composition with a natural knack for sound design to create a series of releases that are defined more by sonic texture than by structure or genre templates.
A series of releases on the Austin imprint Western Vinyl has led to her latest outing, EARS. To promote the album, Smith is touring Europe with Battles and will support Animal Collective on their tour this summer.
We caught up with Kaitlyn Aurelia Smith at her Los Angeles studio to talk about her journey through synthesizers, composition, and creative process.
How do you go about assembling the tones of your sound palette?
I try to blend in as many different tones / timbres as possible and like to think a lot about texture. I guess I like to mix senses. I like to imagine what a sound would feel like if I were to touch it, and vice versa - if I have a textural feel that I like - like a tennis racquet - what would that sound like?
I’ve read that some of your inspiration for EARS came from visual works by Moebius and Miyazaki. Can you elaborate on how this visual influence finds its way into your sound creations?
These visual works inspired environments that I wanted to create with sound - Color tones, hues, and sensations I feel when I see their images. I wanted to create a futuristic jungle.
When composing these sprawling jungles that you are creating, how do you know when the piece is complete?
On EARS that is where my voice and the bass line come in. Those two elements are the grounding source for the compositions on EARS. I wonder all the time how I know when a piece is complete. I think intuition. And also when I can take enough breaks and listen back and not want to add anything more.
It’s kind of like when you’re speaking. You’re not really thinking about where your sentence is going to end. It’s a more intuitive thing. You’re just letting communication happen. At the same time you’ve put a lot of time into learning the language and how to best express yourself but you’re not thinking about those things in the moment.
I also feel like creatively I relate to the way that Michelangelo would go about his creative process where he’d start with a big block of stone and carve until it revealed itself. I sometimes start the other way, from the bottom up. But I feel like I find more creativity in creating a wall of sounds and then carving it out.
How do you keep the repetitive parts of the composition interesting?
I’m always trying to keep the listener in mind and ask myself “at what point am I bored?” I don’t want to fatigue someone. This is one of the things that keeps me from doing purely modular performances because it’s really easy for electronic music to have repetition. When you have all those quantized clocks it’s sometimes hard to get that human element out of a modular. That’s something that’s really intriguing to me.
So how do you get that human element when working with machines?
By playing it like an instrument and by movement in the filter that is unpredictable. And a lot of rearranging of notes so that if you have a repetitive line, you don’t go beyond three times without adding in a new element or changing the timing.
Is three the magic number?
I did a songwriting class with Paul Simon in school and he was the one who told me that you don’t want to give the brain that third time of hearing something because the brain gets irritated after the third time. So I try to keep that in mind. I don’t always follow it, but I try to change little things like the filter, the timbre, the note, the rhythm. You want to take people on a journey.
In the pieces that I heard, rhythm seemed to come from repetition of vocal bits, tones, textures. Where did these sounds come from?
Each creation is different but EARS was all made with my voice, analog modular synthesizers, a woodwind quintet that I composed for, mbira, and field recordings I made and then used as a foundation for granular synthesis.
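The granular synthesis Smith mentions - chopping a field recording into tiny enveloped grains and scattering them back into a continuous texture - can be sketched in a few lines. This is a generic, offline illustration of the technique, not her actual process; every name and parameter here is invented for the example.

```python
import math
import random

def granulate(source, grain_len=512, n_grains=200, hop=128, seed=0):
    """Naive granular synthesis: overlap-add randomly chosen,
    Hann-windowed grains from `source` into an output buffer."""
    rng = random.Random(seed)
    out = [0.0] * (n_grains * hop + grain_len)
    # Hann window so each grain fades in and out smoothly.
    env = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
           for i in range(grain_len)]
    for g in range(n_grains):
        start = rng.randrange(0, len(source) - grain_len)  # random read point
        pos = g * hop                                      # regular write point
        for i in range(grain_len):
            out[pos + i] += source[start + i] * env[i]
    return out

# A one-second 220 Hz sine stands in for a field recording.
sr = 8000
field = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]
cloud = granulate(field)
```

Because the read points are random while the write points march forward at a fixed hop, the output keeps the timbre of the source but loses its original time structure, which is what makes a field recording usable as a textural foundation.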
Is there some correlation between the cleanliness of your workspace here and the modular? Does modular allow you to “let the crazy out,” so to speak?
Yes. I am very challenged by control. I like to have control. And I feel like modular synthesis is kind of riding that line… you can almost control it. It feels kind of like horseback riding because you’re reining in that craziness all the time, especially with tuning and things phasing. It’s easy for modulars to sound like crazy noise. I feel like they like to do that, they want to do that.
How did you find this technology initially?
I came across modular synths from a neighbor who I was talking to about Terry Riley, one of my biggest influences for composition. And my neighbor happened to have a bunch of Buchla synthesizers that he had bought in the 1970s and he let me borrow them for about a year. I had no idea what a Buchla was at that time… no idea what a modular synth was at that time. But I figured it out from there.
But you already had a background in music at this time, correct?
Yeah. And sound engineering. I went to school for composition, sound engineering, and some film scoring. But I learned synthesis differently than many people because I learned on modular first and then worked with software synths afterwards.
How does Live integrate into your workflow?
I perform with Ableton. I use it as my main mixer and processor. I process my vocals through it and mix the output volume of all my different Buchla Music Easel channels ahead of time. I also send MIDI from the Easel to trigger other sounds.
When you are doing a live performance, are most of the sounds coming from hardware synths or are some coming from inside the box?
I do a lot of preparation in the box. I pre-mix all my signals and get things at the level I want them to be for the mix. And I set up software synths to trigger as well. For this album / show, a lot of the sounds are hardware. And then little accents from my voice (sampled) and the soft synths come from Ableton. And then I sing in real-time and process my vocals live as well. That’s actually my main use for Live - as the brain and as a processor for my voice.
So you are using pre-recorded samples as well as real-time processing of vocals?
Yeah, both. And then I have about 22 different harmonies that I launch from the Novation Launchpad. Those are different vocal tracks that are pitch-shifting my voice in real-time. They run in parallel and they aren’t locked to a key, so I have to turn on and off the harmonies that make sense as I’m playing. I’m making a lot of work for myself, because there are things like the TC-Helicon harmonizer. But I have found that I get a certain sound that I like by processing through Ableton.
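Smith’s actual harmonies run as parallel real-time tracks inside Ableton. As a rough offline sketch of the underlying idea only - naive resampling, which also changes duration, unlike a real harmonizer - something like the following illustrates a fixed-interval harmony stack. All function names and the chosen intervals are invented for the illustration.

```python
def pitch_shift(samples, semitones):
    """Naive pitch shift by resampling with linear interpolation.
    (Shifting up also shortens the clip, unlike real-time shifters.)"""
    ratio = 2 ** (semitones / 12)  # frequency ratio for the interval
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

def harmony_stack(voice, intervals=(4, 7)):
    """Mix the dry voice with pitch-shifted copies running in parallel.
    Default intervals: major third (+4) and perfect fifth (+7)."""
    layers = [voice] + [pitch_shift(voice, s) for s in intervals]
    n = min(len(layer) for layer in layers)  # truncate to shortest layer
    return [sum(layer[i] for layer in layers) / len(layers) for i in range(n)]
```

Because the shifted copies are fixed intervals rather than key-aware harmonies, they can land outside the current key - which mirrors why Smith has to switch individual harmony tracks on and off by hand as she plays.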
Do you follow what other people are making? Or is it coming from a more internal place?
It’s coming from a more internal thing. At school I did a lot of listening and transcribing and analyzing. I only do that now when I’m really interested in drawing influence from something. I can emulate something if I need to for a job or something. But when I’m coming from my own creativity I have a hard time matching something.
I don’t really listen to that much music. I do when I’m trying to be influenced. I listen to a lot of African music and tabla music. Things that don’t really stick in your head but rather wash over you. I’m really inspired by the sounds of nature. That feeling of when you sit still in nature and how much is going on sonically. That’s probably my biggest source of creativity.
Interview and photographs by Michael Walsh.