NOTE: This is the first draft of an article I'll eventually post to my essays section in the next few days, with more links, and pictures if my failing digicam cooperates. Please be patient with the typos and such. You can watch it change as I edit it. I also recommend Jake Ludington's Recording a Podcast and Upgrade Your Podcast for Under $200 articles.
I've had a few inquiries about what kind of equipment I use to record my podsafe music and my wife's podcast, for which I'm the engineer and cable wrangler. The setup has changed a fair bit in the past month—even since I last wrote about it—and is now radically different from the stripped-down den configuration I used to record most of my album last year.
So let's take a tour. I get fairly technical, so feel free to skip stuff if it doesn't interest you.
When recording the podcast (which has prompted most of the recent adjustments to the setup), my wife speaks into an MXL 990 condenser microphone suspended in a shock mount on a standard boom microphone stand, with a mesh pop filter to keep her "p" and "b" sounds from thudding into the recording.
The Marshall-designed 990 (replacing a borrowed AKG C1000s) is an awesome deal of a microphone, often listing at $70 USD or less, and performing as well as some mics costing five times as much or more. It certainly works better for my wife's voice than the more expensive AKG, and has the added bonus of making our basement look more like a real recording studio.
Her co-host KA uses one of the two Shure SM58 dynamic microphones that I own. While it is not a common choice for studio vocal recording, the 58 is almost certainly the world's most popular mic—it, or its pricier cousin the Beta 58, is the typical "ball-head" mic you see on nearly every podium or rock 'n' roll stage. That's for good reason: it's nearly indestructible, it's inexpensive (around $100 Cdn if you look for a deal), and it sounds good.
I found that the SM58 particularly suits KA's voice, so I have it mounted on the same kind of boom stand as my wife's MXL uses, with both a pop filter and a foam windscreen to keep the extraneous noise down. It doesn't look quite as swanky as the MXL, but the sound is what counts.
The next part of the chain is the newest, added only in the past week. The two mics are connected using short XLR cables to a Behringer UB802 mini-mixer, which has two XLR input jacks (as well as three other stereo-paired inputs), is one of the least expensive mixers you can buy ($70 Cdn), and also sounds great for the price. (I admit that the control knobs feel a little cheap, and lack detents so you can tell when they're exactly centred, but for the price, I really can't complain.)
The 802 has phantom power for the MXL mic—which, unlike the Shure, requires a small electric current to work—and also provides a bit of control to even out the sound and levels of those two very different microphones. I don't currently use the onboard equalization at all, preferring to keep the microphone signals as unprocessed as possible at this stage.
I have the mic signals panned hard left and right in the Behringer's outputs, so each one can travel separately down a 1/4" patch cable to one of the two channels in my dbx 266XL compressor/gate, which lives in a road rack case because it also finds use as a noise gate for my bass drum when I play with my band.
For the podcast, I'm now using the 266XL both as a noise gate to damp the noise from our furnace fan (which should be less of a problem come summer), and to provide slight dynamic range compression so that the hosts' speech isn't too "peaky": the loud parts not too loud, the soft parts not too soft. For the same reason, I also find the dbx handy when recording bass guitar.
I have the 266XL's compressors set to its "Auto" mode, using dbx's "Over Easy" compression, which is basically a set-and-forget operation. I could adjust the threshold, ratio, attack, and release manually, but the Auto setting works well so I need not worry about it. With the noise gate and compression, the improvement in the raw sound of the recording is quite remarkable compared to the software gating and compression I tried to accomplish in GarageBand before.
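If you're curious what gating and compression actually do to a signal, here is a toy per-sample sketch in Python. This is emphatically not dbx's "OverEasy" algorithm: the thresholds and ratio are made-up illustrative values, and a real compressor works on a smoothed signal envelope with attack and release times rather than on raw samples.

```python
# Toy noise gate + downward compressor, for illustration only.
# Samples are floats in [-1.0, 1.0]; thresholds are linear amplitudes.
# A real unit (like the dbx 266XL) tracks a smoothed envelope with
# attack/release times instead of acting on each sample independently.

def gate_and_compress(samples,
                      gate_threshold=0.02,   # below this, treat as noise
                      comp_threshold=0.5,    # above this, start compressing
                      ratio=4.0):            # 4:1 compression ratio
    out = []
    for s in samples:
        level = abs(s)
        if level < gate_threshold:
            out.append(0.0)                  # gate: mute low-level noise
        elif level > comp_threshold:
            # compress: shrink the amount the level exceeds the threshold
            excess = level - comp_threshold
            new_level = comp_threshold + excess / ratio
            out.append(new_level if s > 0 else -new_level)
        else:
            out.append(s)                    # pass the middle range untouched
    return out
```

The effect is the same shape as what the hardware does: furnace-fan rumble below the gate threshold disappears, quiet speech passes through unchanged, and loud peaks are pulled down toward the threshold.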
Both hosts wear headphones while recording. KA uses my big Sennheiser HD 280 Pro pair, while my wife has the Sennheiser PX 100 set she got for Christmas, which don't provide much isolation, but which are comfy and sound great.
We've used a few other headphones over time (such as Audio-Technica ATH-M20s from my work), but the Sennheiser models are the main ones for now. They were previously plugged into the M-Audio interface (see below) with long extension cables, but now I run the headphones straight out of the mixing board, using a strange little splitter I cooked up from a Frankenstein chain of connectors I had kicking around.
The splitter is so complicated (it includes one of those weird airline headphone adapter things midway in the chain) because it not only splits the signal, but physically bridges the resulting stereo signals to mono so that the two hosts hear everything in both ears, instead of their own voice on one side and their partner's in the other, which can be discombobulating. They now listen to an unprocessed mix of their voices from the mixer, before it hits the compressor or computer. They don't seem to mind.
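Electrically, that bridge-to-mono trick amounts to combining the left and right samples into one signal. Averaging (rather than simply summing) keeps the bridged signal from clipping; expressed as sample arithmetic:

```python
# What the headphone splitter does, as sample math: bridge a stereo
# pair (one host per side) down to mono so both hosts hear both
# voices in both ears. Averaging avoids clipping the combined signal.

def bridge_to_mono(left, right):
    """left/right: equal-length lists of float samples in [-1, 1]."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

wife = [0.5, -0.2, 0.0]   # host 1, hard left out of the mixer
ka   = [0.1,  0.4, -0.6]  # host 2, hard right out of the mixer
both_ears = bridge_to_mono(wife, ka)
```

The hardware version does this with resistors and plugs instead of arithmetic, but the idea is identical.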
Last month, along with the MXL mic, I acquired an M-Audio FireWire 410 digital interface, which is what converts the analogue signals coming from the mics, mixer, and compressor/gate into digital bits my computer can use. Previously, I briefly used an Edirol UA-25 USB interface borrowed from work for the same task, and before that—before Lip Gloss and Laptops came along—I recorded directly into my computer's audio-in jack (sometimes with a borrowed Mackie mixer), which had the Mac doing the analogue-to-digital (A/D) conversion internally instead.
Sometimes I also used a neat little gadget called the MicPlug to route a vocal signal from the SM58 straight to USB. All approaches worked, but the audio interfaces sound better and, again, permit greater control of the signal.
The two vocal signals travel through long XLR cables from the dbx 266XL box, which sits next to the hosts' desk, to the M-Audio interface, perched on a shelf above my computer. Importantly, I tried regular unbalanced phono patch cables for the task last week, but got electrical buzz that went away as soon as I switched to balanced XLRs. Since the MXL gets phantom power from the mixer now, I have the M-Audio's phantom power shut off.
In the back of the M-Audio interface is a FireWire cable that sends the now-digital signal to my eMac (and back from it for monitoring), plus two 1/4" cables that go from a couple of the analogue output jacks to my Harman Kardon Champagne 2.1 computer speakers, which I use for room sound monitoring. I also plug my HD 280 headphones into one of the M-Audio's front headphone jacks when mixing.
Computer and software
The M-Audio interface has a fairly complex driver program that shows me a virtual mixing board on the screen, with various configuration options. After some initial confusion, I figured out some reasonable settings, and have those saved as a preset that I call up whenever we record. (I use the default settings the rest of the time, so I can listen to my usual computer sound in a normal way.) Otherwise, all the recording tasks are handled by Apple's GarageBand software, which I love love love love love.
Did I mention that I love GarageBand?
Yes, it is limited, but in very intelligent ways. For recording a podcast its feature set and interface are nearly ideal, because it doesn't have too many extra bells and whistles to get in the way. When laying down music, its simplified screen and great collections of loops, samples, and MIDI instruments are awesome.
GarageBand does demand more system resources than it should—on our eMac, I have to make sure that all other applications are shut down, that the screen saver and energy-saving features are deactivated, and so on to ensure smooth recording. Yet that seems to be true of most audio apps, and we can record to the internal hard disk without trouble, so that we don't have to use an external FireWire drive for the sound files.
I don't actually use GarageBand in its podcasting mode because, in a puzzling decision, Apple seems to want that mode to export the final audio mix only as a lossy AAC file, not as the lossless, uncompressed AIFF file I much prefer to work with, as you can do with music-mode tracks. So I record the podcast as if it were a song with no music, just speaking, and with each host in one channel so I can adjust their relative levels and signal processing afterwards.
Now, the show itself is entirely the hosts' business. They spend time during the week researching their topic, visiting stores and doing interviews, reading up, and making notes. (On Friday evening, our recording day, they often finish off their notes on looseleaf paper and flash cards while I'm getting the gear set up and tested.) Once I have the whole rig ready and they are seated at their mics, I leave the room and keep an eye on our two daughters and KA's son "C" upstairs (they're often here during recording, since C's dad is working nights right now). My wife and KA emerge when they're done, some time in the following hour.
I do need to let go of my audio control-freak tendencies and show the hosts how to record and post the podcast themselves one of these days, I think. That's going to be especially useful when they get to the stage of recording interviews in the field, which I'm sure they'll do.
Once the main recording is done, I may wait till Saturday morning before I add theme and background music, bumpers, and promos to the mix as separate tracks in GarageBand, then adjust mix and levels. Or I may do that right away. I do very little editing of the conversation between the hosts because I prefer to keep its natural sound and flow.
I do find natural breaks in which to insert promos and the like, and trim "off the air" talk from the beginning and end of the file. I mix so that my wife is slightly to the left in the mix, and KA a bit to the right, which brings out the conversational nature of the show when you listen, especially in headphones. But I don't pan their voices hard left and right, which would be disconcerting.
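For the curious, that gentle off-centre placement can be modelled with an equal-power pan law, which keeps a voice's overall loudness constant as it moves across the stereo field. The pan positions below are illustrative guesses, not GarageBand's actual numbers:

```python
# Equal-power panning sketch: pan = -1.0 is hard left, 0.0 centre,
# +1.0 hard right. The cosine/sine gains keep total power constant
# (left^2 + right^2 == 1) at every pan position.
import math

def pan_gains(pan):
    """Return (left_gain, right_gain) for pan in [-1, 1]."""
    angle = (pan + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

wife_l, wife_r = pan_gains(-0.25)  # a bit to the left  (illustrative)
ka_l, ka_r = pan_gains(+0.25)      # a bit to the right (illustrative)
```

Hard panning would be `pan_gains(-1.0)` and `pan_gains(+1.0)`, putting each voice entirely in one ear—exactly the disconcerting effect I avoid in the final mix.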
Next I go through a bit of post-production rigamarole that is more involved than strictly necessary, but keeps the content and audio quality high. I export the mix to iTunes, where I convert it to an MP3 that I upload to a temporary server location so that my wife and KA can approve it. If they've recorded separate audio for their own weekly promo, I'll do the same for that, or I'll assemble it from bits of the show itself.
Once they have approved the mix, I make any adjustments in GarageBand, export to iTunes, convert the AIFF file to WAV, open that in the free Audacity sound editor, and master the recording by boosting the volume and perhaps applying some more dynamic range compression and leveling. Then I save the file, import it into iTunes again, and convert it to an MP3 file at 80 kbps stereo, which for a typical podcast yields a file between 15 and 32 MB in size. I add ID3 tags and an album art image, as well as a copy of the shownotes in the Lyrics tag area.
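The "boosting the volume" step is essentially peak normalization: scale the whole file so its loudest sample lands just under clipping. A bare-bones sketch of the core idea (Audacity's real effect does more, such as removing DC offset):

```python
# Peak normalization sketch: find the loudest sample and scale the
# whole recording so that peak hits a target just below clipping
# (full scale is 1.0 for float samples). Illustration only.

def normalize(samples, target_peak=0.95):
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)          # pure silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]
```

Because the same gain multiplies every sample, the relative dynamics are untouched; the compression applied earlier is what actually narrows the loud-to-soft range.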
Posting the podcast
That whole process happens in fits and starts, since we don't actually post the show until Tuesday evening, so there are four days in which to make and listen to the mixes, then adjust as necessary. During that time my wife and KA create shownotes as a draft blog post for their site, and I adapt those for the ID3 tags in the MP3 file and the actual podcast episode page.
For the actual posting of the podcast, I started out using Apple's new iWeb, which is an intriguing if deeply flawed program. On the plus side, it provides great control over the podcast shownotes pages that appear, automatically adjusts images, creates archives and index pages, and generates the RSS feed for you. In other words, it makes the posting of podcast pages quite streamlined.
But the web addresses it creates are long and awful monstrosities; there's no easy way to create your own page templates; the upload process is awkward if you don't use the .Mac hosting service (and we don't, even though I have it, because I don't want the site URLs pointing somewhere Lip Gloss and Laptops doesn't control); there seems to be no way to choose another template once you've picked and published one for a podcast; and the HTML code it generates is a very odd beast, being structurally valid, but a semantic disaster that doesn't work well for search engines or disabled users. iWeb is a 1.0 version, and it shows.
So after a few weeks of pulling my hair out with iWeb, for Lip Gloss and Laptops I switched to WordPress, which does things much more cleanly and automatically, and I'm way happier. (I continue to use iWeb to post my own musical compositions as the Penmachine Podcast, more out of inertia than anything else.)
I do a bit of web geeky stuff before uploading the files to the web server, but in summary I just post them to a folder using Panic's Transmit FTP program, then point to them with WordPress. Then my wife and KA can publish their blog post, and I can use the Ping-o-Matic website to notify the various podcast directories and other sites out there that the new show is available. (For my own show, where I still use iWeb, I publish to a folder and then upload it, including the audio files, in one go.)
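Under the hood, Ping-o-Matic and the services it notifies speak the standard weblogUpdates.ping XML-RPC call. As a sketch, here's what that request body looks like, built with Python's standard library and not actually sent anywhere (the show name is real; the URL is a stand-in):

```python
# Behind the Ping-o-Matic web form is an XML-RPC call named
# weblogUpdates.ping, carrying the site's name and URL. This builds
# the XML request body a pinger would POST -- stdlib only, no network.
import xmlrpc.client

site_name = "Lip Gloss and Laptops"
site_url = "http://example.com/"     # stand-in, not the show's real URL

payload = xmlrpc.client.dumps(
    (site_name, site_url),
    methodname="weblogUpdates.ping",
)
# 'payload' is the XML body; a real ping would POST it to the
# service's RPC endpoint and get back a flerror/message response.
```

Using the web form just means Ping-o-Matic makes those calls on your behalf, fanning the ping out to the various directories.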
Lots of stuff and steps, I know, and it could be cut down, but this process keeps the quality of the show high, and we're learning lots as we go.