Could Nipper Tell?

National Public Radio recently ran a series on “Streaming at the Tipping Point,” all nine segments of which are worth a look or a listen. (Although I stumbled across this particular series on-line rather than over the airwaves, I will confess to being a regular NPR listener. If that outs me as one’a’them elitist, social-democrat kinda bleeding-hearts, well…)

But the particular story I want to point toward here is this one, authored by Tyler Fisher and Jacob Ganz.

It’s a self-administered “quiz” on file format and audio quality. Six music segments are presented, each with three clickable, on-screen “Play” buttons, corresponding to an uncompressed WAV file and MP3s at 320 kbps and 128 kbps; which button goes with which format is not revealed until you complete the quiz. This does not qualify as a double-blind survey, but it’s considerably more scientific than alternately playing and comparing two files whose origins and formats you already know.
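For the curious, here’s a minimal sketch, in Python, of the kind of logic such a quiz relies on: for each track, the format-to-button assignment is shuffled and hidden, and only after you’ve made all your picks are your guesses scored. The track names, button labels, and function names below are my own illustrative placeholders, not anything taken from NPR’s page.

```python
import random

# The three formats used in the NPR quiz.
FORMATS = ["uncompressed WAV", "320 kbps MP3", "128 kbps MP3"]

def make_quiz(tracks, seed=None):
    """For each track, randomly assign a format to each on-screen button (A/B/C)."""
    rng = random.Random(seed)
    quiz = {}
    for track in tracks:
        shuffled = FORMATS[:]
        rng.shuffle(shuffled)
        quiz[track] = dict(zip("ABC", shuffled))  # e.g. {"A": "128 kbps MP3", ...}
    return quiz

def score(quiz, guesses, target="uncompressed WAV"):
    """Count the tracks where the listener's chosen button hid the uncompressed file."""
    return sum(1 for track, button in guesses.items() if quiz[track][button] == target)

# Hypothetical usage with a made-up subset of tracks:
tracks = ["Mozart", "Coldplay", "Jay Z"]
quiz = make_quiz(tracks, seed=42)
guesses = {"Mozart": "B", "Coldplay": "A", "Jay Z": "C"}  # the listener's picks
print(f"Correct: {score(quiz, guesses)} of {len(tracks)}")
```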

Regular readers know I love this kind of thing, and that I encourage one and all to own the fact that such aural discrimination is a lot harder than most of us think, or than many in the press would have us believe.

But that’s not exactly news in S&V’s pages—er, screens. Nor is it my intention to brag about how many I got right. (All six, listening on $0.99 RadioShack earbuds plugged into a 2003-model ThinkBook with a dead battery, on the subway, with a head-cold.)

Rather, my point is to note that the conversation is gradually becoming more general. Even while the world’s Ponos and Tidals enjoy their 15 minutes of fame, at least a few journalists outside of the audiophile hothouse are holding the Emperor’s old clothes up to the light. (Yes, I noticed that this page does not include any comparisons with “high-resolution” files. No, I don’t think that invalidates it.)

Whether or not these garments prove to be fully transparent—pun very much intended—remains to be seen, or heard. But I am hopeful that such exercises will, at least occasionally, jolt us to remember that recording technique and production quality are more important variables than bit-rate or recording format in determining our ultimate listening satisfaction.

Not to mention performance quality—or that of the music itself.

COMMENTS
canman4pm

Interesting test. I've never done this type of direct comparison before, but I've always wanted to see if I could tell the difference, and what that difference was. To be fair, I did this test on my iPad with a pair of Apple earbuds (the latest generation) and would definitely like to do it again with a better pair of phones. But based on these headphones, I did learn a few interesting things:

- Differences (if any) with all samples were very subtle.
- Noticing any differences required many listenings, with the exception of the Mozart, which I started to notice within a half-dozen run-throughs.
- Jay Z was a WAG (wild-assed guess) based on a feeling of "this one's better": wrong.
- I couldn't hear any difference with the a cappella Suzanne Vega track, "Tom's Diner."
- The Coldplay track offered a super-subtle detail increase in the backing guitar picking that I really had to listen for.
- The improvements, when noticed, required super-active listening and multiple listens to pick out, mostly in the clarity of background stuff. Mostly it was "oh, that instrument sounds a little better, cleaner, or whatever," and I hadn't noticed it until I was concentrating on that instrument.

I guess I was expecting to hear muddy vs. clear, but I may have been hardware-limited. I was expecting something like stereo shopping with my technically ignorant parents, 20 years ago, for their last system: "here's the difference between the B&W 100-series speaker vs. the 200," which for me was a big-time "wow." OK, I knew it would be more subtle than that, but still...

Bottom line: my iPhone will be used for 95% of my listening, while walking around, driving, or hooked up to my home stereo or one of several speaker systems I own (a newish TDK boombox, or the sounder in the bedroom). The phone will be filled with MP3s from my original 200+ kbps (VBR) rips. If I'm streaming over my phone (I don't yet), I'm not paying for the "higher quality." I have been re-ripping my collection into a second iTunes library, but I won't use it on my phone (the files are too large). I'll save those for my decade-old 60GB iPod Video (4th Gen, supposedly the zenith of sound quality in iPods, with the best headphone amp and best DAC (Wolfson) they ever used). I can certainly confirm it's a far better iPod than my iPhone: when patched from the headphone jack to the input in my car, the iPhone clips the system when the unit's volume is over ¾, at any volume on the car's stereo. The iPod never clips. With a better set of 'phones, I'll re-evaluate and then we'll see. When I feel like lying on the floor in the dark for a listen to "Dark Side of the Moon," I'll pop the CD into my PS3 and run it through my living-room home theatre.

KINGTED

Like canman, I'm surprised how poorly I did; I too figured the difference would be more obvious, like improvements in equipment. Granted, I was going straight into headphones from my work computer. Maybe I don't need high-res...
I did a test like this back in 2001 at my then-local Magnolia Hifi, on their top system, when I outgrew my CD changers. I wanted to see how much worse MP3s or WMAs would sound than full WAV, but the staff and I all thought the WMA actually sounded better than the WAV. I have encoded everything as highest-setting VBR WMAs ever since.
