Archive for April, 2013

I began writing “Something’s Missing” in August 2005 but didn’t finish it until October.  While recording the demo version I accidentally created a false ending, which John liked and recommended be duplicated in the finished recording.  The slightly strange echo guitar part appeared out of nowhere when I was trying to come up with a second guitar line.

When it came time for Katie to record her vocals for this song, John had not yet recorded his, so I laid down temporary vocals so she’d have something to sing with.

“Something’s Missing” was another of the Transposition songs on the set list when I put together a short-lived performing version of Chameleon Red in 2008.


Posted on April 26th, 2013

Auto-Tune is a particular brand of software (there are others) that can alter pitches in a vocal performance.  In a nutshell, it’s like “Photoshop for the human voice”, as Time journalist Josh Tyrangiel put it.  It can correct a sharp or flat note so that it is pitch-perfect.  It’s also used as an effect to make a vocal sound somewhat unnatural or robotlike–the first notable instance of Auto-Tune used in this way was Cher’s “Believe” back in 1998.

Auto-Tune is much used in pop (including pop country) music these days; most of the time it’s pretty obvious because the vocal sounds too perfect.  Interestingly enough, it’s difficult to get the robotlike effect unless the pitch of the vocal is way off; this of course makes one wonder how many pop stars can actually sing these days.  It’s also notable that Auto-Tune can be used in live performances as well as recordings; some artists have admitted to using it in this way as a “safety net”.  Auto-Tune could even make Alfalfa sound like a perfect singer.
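For the technically curious, the heart of pitch correction is just arithmetic: measure the sung frequency and snap it to the nearest note of the equal-tempered scale.  Here’s a minimal Python sketch of that idea (it assumes standard A4 = 440 Hz tuning; real pitch correctors also have to detect the pitch in the first place and smooth the correction over time so it doesn’t jump):

```python
import math

A4 = 440.0  # reference pitch in Hz (standard concert tuning assumed)

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered pitch.

    Measure how many semitones the input lies above or below A4,
    round to the nearest whole semitone, and convert back to Hz.
    """
    semitones = 12 * math.log2(freq_hz / A4)   # distance from A4 in semitones
    return A4 * 2 ** (round(semitones) / 12)   # nearest in-tune frequency

# A singer lands at 452 Hz -- noticeably sharp of A (440 Hz).
print(snap_to_semitone(452.0))  # -> 440.0 (pulled back in tune)
```

The “robot” effect comes from making this correction instantaneous: the voice leaps from one quantized pitch to the next with none of the natural slide in between.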

I always knew the closing number in the second act would be a glitter-rock type song in the manner of early-’70s David Bowie, T. Rex, and other proponents of the style.  As John Lennon quipped to Bowie, “It’s just rock and roll with lipstick on, isn’t it?”   The instrumental configuration is much like Bowie’s Ziggy Stardust era, with acoustic rhythm guitar and electric lead guitar.  The opening riff is more or less the same as the beginning of “Two-Spirit”, only played in a different rhythm.  In the counter-melody at the end of the guitar solo the “secret sister” phrase makes its first appearance in the opera, but not the first appearance in my writing; I wrote and recorded a song called “Secret Sister” five years before.  I thought it worked in this context, though, so I recycled it.


Posted on April 19th, 2013

In this installment of our ongoing series on modern music production techniques (see previous ones here and here), we’re looking at quantization, a term you may never have encountered.

Basically, quantization is a function of modern digital audio workstations that allows you to take a musical performance that is poorly performed, timing-wise, and make it better, even perfect.  For example, suppose I record a piano track but I play very unevenly, holding some notes too long, coming in too late or too early at times.  I select all the notes, select quantize from the menu, and–voila!–all the notes I played are now perfectly timed.
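The snapping itself is simple arithmetic: divide each note’s start time by the grid size, round to the nearest whole number, and multiply back.  A toy Python sketch (note times are in beats, and the sixteenth-note grid size is just an example):

```python
def quantize(note_times, grid=0.25):
    """Snap note start times (in beats) to the nearest grid line.

    grid=0.25 means a sixteenth-note grid in 4/4 time.
    """
    return [round(t / grid) * grid for t in note_times]

# A sloppy performance: notes that should land on beats 0, 0.5, 1.0, 1.5.
played = [0.03, 0.46, 1.07, 1.52]
print(quantize(played))  # -> [0.0, 0.5, 1.0, 1.5]
```

Every note lands dead on the grid–machine-perfect timing, for better or worse.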

However, perfect timing in music = boring.  The best performances are not perfect but have a certain feel or groove.  Some musicians play slightly behind the beat, creating a laid-back feel, or slightly ahead of the beat, giving a more aggressive feel.  Well, fear not!  There is also a function called “groove quantize”, which lets you match the feel of your performance to a preprogrammed groove.  So my performance, which was originally crappy, now sounds like Ray Charles played it!
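Groove quantize is a small twist on the same idea: snap to the grid first, then nudge each note by a preset offset for its position in the beat.  Here’s a hypothetical Python sketch (the offsets are invented for illustration; real groove templates are usually extracted from recordings of actual performances):

```python
def groove_quantize(note_times, groove_offsets, grid=0.25):
    """Snap notes to the grid, then nudge each by a per-slot offset.

    groove_offsets gives one offset (in beats) for each grid position,
    cycling through the list -- e.g. delaying every second sixteenth
    slightly produces a laid-back, behind-the-beat feel.
    """
    out = []
    for t in note_times:
        slot = round(t / grid)                         # nearest grid line
        offset = groove_offsets[slot % len(groove_offsets)]
        out.append(slot * grid + offset)
    return out

# A laid-back groove: push every second sixteenth 0.02 beats late.
laid_back = [0.0, 0.02]
print(groove_quantize([0.03, 0.27, 0.49, 0.74], laid_back))
```

The result is rigid timing with a human-sounding lean deliberately baked back in.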

In reality, quantization has technical limitations and my example is a bit simplified, but you get the point.  You don’t have to be able to play well (or really, at all) anymore in order to create a decent-sounding song.

“Start Again” was written in November-December 2005. Like “Getting Ready”, it’s a link song connecting “We Can Love” with “Lipstick”. For a while I debated dropping it from the running order, but ultimately decided it served a useful purpose. It’s a simple, rather insubstantial song that is greatly enhanced by the accordion accompaniment.

It took me a while to decide at what point Jack became Jackie full-time. Initially I thought that it would be after the epiphany in “We Can Love”, so “Start Again” would have been the first appearance of the full-time Jackie. Ultimately, though, I decided that the transition would take place between Acts II and III.


Posted on April 12th, 2013

This is the second installment of our series addressing some of the technologies employed in modern music production.  You may have heard of sampling; in the 80s and 90s there were a few court cases surrounding the sampling of music to create new recordings.  So what is sampling?

Sampling is, in essence, recording something and then playing it back as part of a musical performance–for example, recording the sound of a flute and then playing it back through a keyboard.  An early kind of sampler was the Mellotron, which played back loops of tape with recorded sounds; each key on the keyboard played a different loop.  This is the sound you hear at the beginning of the Beatles’ “Strawberry Fields Forever”–a Mellotron loaded with tape loops of flutes.  Even back then there was controversy surrounding this sort of thing–session musicians felt that the technology was robbing them of their livelihood.  After all, why pay a group of musicians to play when you can just lug a Mellotron into the studio and play it yourself?

Sampling really came into full flower in the 80s with digital samplers that played back sounds with higher fidelity.  Now it became possible to construct fairly realistic digital pianos that played back samples of real acoustic pianos, drum machines that played back samples of actual drums playing, etc.  In fact, it became possible to construct songs entirely of sampled instruments.  But sampling was taken even further when whole sections of songs started to be lifted off of recordings and redeployed as samples to create “new” songs.  Often this was done without crediting the original writers and performers of the music, hence the court cases surrounding Vanilla Ice’s “Ice Ice Baby”, which sampled the main riff of Queen’s “Under Pressure”, and MC Hammer’s “U Can’t Touch This”, which similarly lifted the main riff of Rick James’ “Super Freak”.  The upshot of this is that the original artists now have to be credited and receive royalties for this sort of sampling.
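The playback half of sampling–one recording repitched across a whole keyboard–is surprisingly simple at its core: to raise the pitch, read the recorded waveform back faster.  A crude Python sketch using nearest-neighbour resampling (real samplers interpolate between samples and filter the result to avoid artifacts):

```python
def play_at_pitch(sample, semitones):
    """Crude sampler playback: repitch a recorded waveform by reading
    it back faster (higher) or slower (lower).

    Playing 12 semitones up reads the sample twice as fast, so it
    comes out an octave higher and half as long -- exactly what early
    samplers did when one recording was stretched across the keys.
    """
    step = 2 ** (semitones / 12)  # read-speed ratio for the pitch shift
    out, pos = [], 0.0
    while pos < len(sample):
        out.append(sample[int(pos)])  # nearest-neighbour lookup
        pos += step
    return out

tone = [0, 1, 0, -1] * 4          # tiny stand-in for a recorded flute note
octave_up = play_at_pitch(tone, 12)
print(len(tone), len(octave_up))  # -> 16 8
```

The shortened, chipmunk-like quality of early samplers playing far from a sample’s original key comes straight from this speed-up trick.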

As a musician, I have some serious issues with the use of sampled performances.  To me, it’s kind of like taking a reproduction of a classic work of art, spraying some graffiti on it, and calling it your own work.  In other words, it’s a shameless appropriation of the product of others’ labor and creativity.  It’s possible for someone to sit in a chair in front of a computer, never touching an actual musical instrument, and string together layers of sampled performances to create a new song.  Some people make a career of doing just that.  I don’t deny that doing it well requires creativity, but to me it’s much less interesting than hearing a performance of actual human musicians playing together.  It’s analogous to collages made by cutting things out of books and magazines.  It can be interesting, but I have greater respect for the Leonardos and Monets of the world.

That aside, sampling has been a boon to the low-budget DIY musician–it makes it possible to incorporate sampled instrumentation such as string sections and exotic instruments that would be cost-prohibitive if one had to hire musicians to play them.  Still, I won’t deny that it’s problematic in the sense that the proliferation of easily available sampled instruments has reduced the number of paying jobs for session musicians.

The entire guitar part for “We Can Love” came to me in August 2004, when writing a rock opera was just an idea I was playing around with.  Despite that, I always thought that it would end up being an opera song.  The words came quite a bit later.   At my request, John wrote the lovely harmonized flute lines, which were subsequently recorded by his future wife Inge in January 2007.

Given that the subject matter stresses the importance of community, especially to outsiders, I have performed this song several times at church.  On a couple of occasions, John, Inge and I performed it together in a rendition close to the recorded version (with rhythmic accompaniment, once by Pen, and once by Anna).


Posted on April 5th, 2013


“Duuude! You know what I would do, if I were you? I’d run my guitar through a compressorrrrr…”

Back after a week’s absence!  While taking the “Art of Mixing” class through Berklee Online, I got to thinking about how processed modern music is, and how little the average person realizes this.  One could say that modern pop music is the aural equivalent of a Twinkie: it bears only a slight resemblance to music as found in nature.  So I thought I’d do my bit to educate the public on a few of the tools used by recording engineers to process and manipulate recorded sounds.  I’m going to try to explain things in layman’s terms without resorting to technical talk.

Let’s start with compression.  Basically, a compressor is a device (hardware or software) that reduces the dynamic range of a signal going through it.  In other words, it controls loudness; it’s sort of like having an automatic hand on the volume knob, ready to reduce the volume if the signal gets too loud.  It basically lets you get away with a louder average signal because the loudest peaks are tamped down.  Compressors are not new; they’ve been used in music for many decades and have many practical uses.  For example, they are used to keep an instrument from overloading a recording console because of sudden loud notes.  Instruments like drums have a lot of dynamic range; they can play really soft and really loud.  A compressor helps to even things out so that the loudest notes don’t cause the recording to become distorted.  Compressors are also used at radio stations to make sure the station never exceeds the broadcast limits permitted by law, by ruthlessly controlling the dynamic range of the sounds being broadcast.  That’s one reason why your favorite song never sounds as good on an FM station as it does in your CD player.  Also, commercials on TV and radio have their audio compressed–that’s why they sound so much louder than the regular programming.
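That “automatic hand on the volume knob” can be sketched in a few lines of Python.  This is a toy peak compressor–the threshold and ratio values are arbitrary, and real compressors work in decibels and add attack/release smoothing so the gain changes aren’t audible as pumping:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Bare-bones peak compressor on raw amplitude values.

    Any excursion above the threshold is reduced to a quarter of its
    size (ratio=4), while quieter material passes through untouched.
    """
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio  # tame the excess
        out.append(mag if s >= 0 else -mag)  # restore the sign
    return out

# A quiet passage followed by a loud drum hit.
signal = [0.1, 0.2, 0.95, -1.0, 0.3]
print(compress(signal))  # loud peaks squashed, quiet parts untouched
```

With the peaks tamed, the whole track can then be turned up–which is exactly how the “loudness” of modern mixes is manufactured.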

In modern popular music, it’s not uncommon for every single instrument and vocal to be compressed.  Why?  I suppose because it’s technologically possible, not because every track actually needs compression.  Also, the entire mix is compressed to make the average loudness higher.  So now we have a situation where every instrument and voice has much of the dynamic range squashed out of it, and the entire song is further squashed until it sounds really, really loud and distorted.  Now there–isn’t that better?