These "Buyer's Guides" …

Should be titled:

“Lists of Everything That Got Sent to Us”

“Everything We Have Ever Heard”

“Random Components Randomly Selected by Randomly Qualified People”

———————————

Oh yeah, Neli says I am supposed to be in a good mood when I post. Oops.

It is not as if the stuff we carry doesn’t make all the lists – but it is the principle of the thing…

OK. Here is something:

Why Good People Often Write Random Information-Free Reviews

The problem is with their associated components [and, to a much lesser extent, their room].

Let’s say a reviewer, let’s call him [yes, 99% are hims] X. X has dull-sounding speakers. To balance these speakers he has a bright-sounding amp. Together the sound kind of doesn’t scratch his ears out, nor does it put him to sleep. However, the amp is very lean, so his CD player is very warm and harmonically rich, if somewhat veiled, with exaggerated syrupy macro-dynamics – to help balance the leanness of the amp.

Now, whenever X reviews a speaker, amp, or CD player, he is going to prefer it to have the same handicaps his current components have – in order to maintain the balance of his system.

But perhaps X is smarter than your average bear, and perhaps he is not completely ignorant of the compromises of his system. Even so, his reviews will still not be accurate [because he cannot provide a clean signal to the component under review, nor a clean transducer to hear the result]. I propose that, unless X is a genius, the review will be almost random – that the best one could hope for is a comparison with the previous component in the system [more bright, less bright, more harmonic, less harmonic, etc.]. Of course, very, very few reviewers do this.

Now X gets a pair of cables to try….

Another example: Suppose Mike Fremer gets an amp to review [haven’t picked on Mike for a while]. Let’s also assume he still has the Musical Fidelity amps that, last I heard, he had.

OK. So I know A) what his previous amps sound like and B) that he must actually like that type of sound. I know that he can hear and that he can more or less describe what he hears [within the limits of his responsibilities and persona at Stereophile].

Let’s say that Mike says that the amp under review is dull-sounding. Now, the Musical Fidelity sound to my ears is akin to Bryston and last-generation Pass Labs and Krell, and might be described as bright and aggressive. So would Mike have meant that the amp under review is dull-sounding compared to the MF, or compared to the average amp, to other amps in the same class, or to the sound of real music?

I personally think Mike would, in my way of thinking, well… I am not sure. I think the correct answer would be to compare the amp to a weighted average of real music and the average amp at the given price point, as well as to other amps in general [because amps do some things pretty close to real – and others not well at all, compared to real music].

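To make that a little more concrete, here is a minimal sketch of what a “weighted average of benchmarks” might look like on a single axis [brightness/dullness]. The benchmarks, weights, and numbers are entirely hypothetical – the only point is that “dull” means nothing until you say dull relative to what.

```python
# A very rough sketch of the "weighted average of benchmarks" idea.
# All scales, ratings, and weights here are invented for illustration only.

# Perceived brightness on an arbitrary scale: -5 = very dull, +5 = very bright.
benchmarks = {
    "real music":                 {"brightness": 0.0, "weight": 0.5},
    "average amp at price point": {"brightness": 1.5, "weight": 0.3},
    "amps in general":            {"brightness": 1.0, "weight": 0.2},
}

amp_under_review_brightness = -1.0  # the reviewer hears it as somewhat dull


def weighted_difference(component_value, benchmarks):
    """Weighted average of (component - benchmark) across all benchmarks."""
    return sum(b["weight"] * (component_value - b["brightness"])
               for b in benchmarks.values())


print(weighted_difference(amp_under_review_brightness, benchmarks))
# A negative result means "duller than the weighted blend of benchmarks" --
# i.e. the verdict only carries information once the references are stated.
```
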
So, let’s say for the sake of argument that he takes the time to describe it with respect to all 3 benchmarks. The question still remains: assuming the rest of his system is neutral and revealing – which I think it is – how do I, as a reader of his review, adjust whatever his conclusion is for the fact that his preferred sound is not aligned with my preferred sound?

I had even more issues with Roy Gregory’s reviews. His favorite CD player was Wadia, back when they were deeply, deeply flawed [they are much better these days]. I read his reviews to try and get a glimmer of what certain things sounded like, but it was very difficult to pin down just what his associated equipment was and what his preferences were [though we DO like the same cables :-)].

So, I hope this helps describe why reviews are so random. The best you can hope for is someone who has a very good, mostly neutral system to place the component under review into; someone who has some experience with all ranges of components, but especially those of comparable quality to the component under review; someone who has the freedom and balls and integrity to print what they hear; someone who has the ability to hear and the ability to describe what they hear, and to describe exactly what they are describing [which hearkens back to our discussion a few weeks ago about how to describe sound].

Sorry for the lack of posts. Been busy around here… and lazy at the same time. Funny how that happens 🙂

Recommended Music

First off, this post is not going to list music we recommend. Sorry.

In fact, we don’t really recommend music, per se, and this post is going to talk about why.

First, although I think we both enjoy it when someone else plays their canned selection of tracks… THIS one shows awesome midbass, that one shows how wonderful female voices are, etc. – it is a terrible way to evaluate a system and, I think, a dishonest way to try and sell a system.

That midbass is nice and deep and rich, yes, but won’t it sound like that on almost all systems? Maybe it sounds better, in actual fact, on most other systems and this system actually sucks.

It is the track itself that is extraordinary, not the system.

And when friends play us these kinds of tracks, and they are not trying to sell us something :-), this is probably their main point – that it is the track itself that is great, yes… “but doesn’t it sound great on my system!” with a big smile on their faces.

There are two ways we select music to evaluate a system.

1. One is the way Peter Qvortrup of Audio Note recommends: using a wide selection of music you have never heard before [or at least do not listen to very often]. This is great for people stuck in a musical rut, usually only playing 3- or 4-piece jazz bands with female vocals because that is the only stuff that sounds good on the systems they are familiar with. By playing more complex music at random, they will actually be able to recognize when they come across a high-quality system that can play many, if not all, kinds of music well. For people NOT stuck in a rut 🙂 this method works [and is required for any real in-depth evaluation], but it takes a lot longer to evaluate a system this way than…

2. Playing a select group of tracks [songs] of varying ‘complexity’ that you are VERY familiar with to test various aspects of the system.

Not sure that complexity is the right word.

There is a continuum of music with varying degrees of… ‘difficulty’, say: stretching from music that sounds good on just about every system, and from which only a few clues about the quality of the system can be gleaned [so-called audiophile music – but it is really useless-for-audiophiles music], to music that sounds unpleasant on most systems below a certain quality.

[i.e. some music sounds about the same on most systems, and some sounds very different on different systems – to exaggerate a bit]

And this is why it is hard, if not impossible, to recommend music – we can talk about how track #16 shows wonderful decay and amazing separation between 16 different instruments being played simultaneously, for example. But it requires a certain amount of quality in the system to render the music like this – and people who listen to it on a lesser system will either think we are nuts or, instead, will imagine that they too hear such amazing things – that their system is up to the task when it is, in actual fact, not.

So what we do is, when it is our turn to pick music to demo our system to someone, and we are done with playing music they are really familiar with [approach #2 above. BTW, so many people are embarrassed to admit their favorite music! What a world we live in], we just pick music we like at the time [which serves as approach #1, above, for the listener].

When I play music to evaluate someone else’s system – I play any Radiohead track that I am VERY familiar with and then something natural, real-world stuff, that I am familiar with: classical or world music or whatever. After these two I can rate the system based on separation, depth, imaging, soundstage, tonal quality, detail, and stuff like harmonic detail, image sizing, etc. [all from Radiohead] and also whether the sound is grounded in reality. This only works because [besides Radiohead being deceptively complex] I have heard these few Radiohead tracks on a number of very high-quality systems – and others – that I listened to with full attention – and so know kind of just What Can Happen… what these tracks REALLY should sound like [kind of. Think of a graph, lesser systems’ sounds on the left, better systems’ sounds stretching to the right. The graph is trending up, so one can extrapolate that there might be better systems someday that will extend the graph farther to the right. This anticipation of what Radiohead etc. will sound like over in the uncharted areas on the right side of the graph is one of the things that keeps me playing with this stuff :-)].

Summarizing

OK, summarizing the last few posts and comments…

I am of the opinion that:

1. It is a disservice to audiophiles and to the equipment/systems to lump everything into more or less two categories: good and pretty good – but this is the state of affairs for 99% of the reporting by both laypeople and reviewers.

2. Simple numerical ranking, and Stereophile’s grading system, are very slightly better [much better than the Golden Ear-type approaches] but still fail because A) they take cost into account [a $3K class A component is not as good as a $100K class A component – even though there is a significant percentage in our hobby who INSIST this mythical component MUST exist, somewhere, somehow, and they keep looking and buying a heckuva lot of $3K components] and B) they do not describe why a component belongs in the class it has been assigned to [yeah, they refer us to the original review, but those reviews do not put the sound in context; see #3].

3. Comparative analysis – comparing components to each other – is the only approach that makes any sense: not on the basis of This is better than That [which would drive away the advertisers who are paying for the review], but at a more detailed level that is completely agnostic about what is ‘better’.

In the Audiophile’s Guide to the Galaxy, we do exactly this at a level that the layperson can grok [rereading Stranger in a Strange Land. Last read it when I was 12 (from this same paperback!)]. Emotion, Impressive, Natural/Organic… where Magical/Spiritual really just means that there is some depth to the component, that it will take time to understand, and that the listener will be given the opportunity to ‘greatly enhance’ their appreciation of music.

Beyond the layperson-accessible approach, we audiophiles can use terms like ‘detail’ – or, more precisely, micro-, midi- and macro-detail – with which we can compare and describe components using such references as the Wilson speakers, Levinson amps, etc., and beyond (it helps to use components most people have heard – but the beauty of it is, this is NOT a requirement! Humans are great at figuring out where things go in their special ‘buy list’ without having to live with each and every object in the list, if GIVEN ENOUGH COMPARATIVE DATA to work with). This approach would eventually sketch out the world of audio components: both the compared and the compared-to become better defined through this technique. There is no inherent better or worse here – though an audiophile who is at all familiar with their own preferences, and those of the average listener, will KNOW which is best for them (and the average guy and gal).

This exposes the way components actually sound to the light of day – let the chips fall where they may. I will post some real-world examples in the next few posts.

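In the meantime, here is a toy illustration of how purely comparative data can place a component for a reader. The components, axes, and numbers below are hypothetical – not measurements or descriptions of any real gear – but they show how a reader can locate something they have never heard from comparisons alone.

```python
# A minimal sketch of the comparative-data idea: describe a component only by
# how it differs from well-known references, and let the reader place it.
# The components, axes, and numbers below are hypothetical.

# Reference components on a couple of descriptive axes (arbitrary 0-10 scales).
references = {
    "Wilson speakers": {"micro-detail": 8, "warmth": 4},
    "Levinson amps":   {"micro-detail": 7, "warmth": 5},
}

# A "review" expressed purely as comparisons, with no better/worse verdict:
# a little less micro-detail than the Wilsons, noticeably warmer than Levinson.
comparisons = [
    ("Wilson speakers", "micro-detail", -1),
    ("Levinson amps",   "warmth",       +2),
]


def place(comparisons, references):
    """Locate the reviewed component on each axis from comparative statements."""
    position = {}
    for ref, axis, delta in comparisons:
        position[axis] = references[ref][axis] + delta
    return position


print(place(comparisons, references))
# {'micro-detail': 7, 'warmth': 7} -- just a position on the map; readers who
# know their own preferences decide for themselves whether that is 'better'.
```
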
[before I get to that… Yes, we do plan on reviving Spintricity at some point. Right now the High End Audio channel on Mattters is doing somewhat better than Spintricity did [except for show reports :-)] and about the same as Dagogo [on average]. There are also channels for laypeople, like Home Audio and even Home Theater, for those who like technology for the eyes as well as the ears].