
Why Wine Ratings Are Badly Flawed


Rovers2000


I thought this was an interesting read (being the data-focused/analytic type that I am). I like that they admit this "study" doesn't really prove much, and that you aren't going wrong if you either ignore the Parker ratings/medals or just decide to experiment and choose what YOU like. Here is the link.

In the event that the article gets locked behind the pay wall, shoot me a PM and I'm happy to send you the full text.

I also got a good laugh from this line: Or you could just shrug and embrace the attitude of Julia Child, who, when asked what was her favorite wine, replied "gin." :(

A snippet from the article:

In his first study, each year, for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges—some 70 judges each year—about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson's study, however, every wine was presented to each judge three different times, each time drawn from the same bottle.

The results astonished Mr. Hodgson. The judges' wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.
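Hodgson's repeatability test is simple enough to sketch in miniature: pour each judge the same wine three times, blind, and look at the spread of the three scores. A minimal sketch in Python, using invented scores (not Hodgson's actual data), where half the max-minus-min range stands in for the "±N points" figure the article quotes:

```python
from statistics import mean

# Hypothetical triplicate scores: each judge rated the same wine
# (poured from the same bottle) three times, blind, on the 80-100 scale.
triplicate_scores = {
    "judge_a": [91, 87, 95],   # spread of +/-4 -> typical in the study
    "judge_b": [88, 89, 90],   # spread of +/-1 -> the rare consistent judge
    "judge_c": [84, 92, 86],
}

for judge, scores in triplicate_scores.items():
    m = mean(scores)
    # Half the (max - min) range approximates the "+/- N points" figure.
    half_range = (max(scores) - min(scores)) / 2
    consistent = half_range <= 2  # only ~1 in 10 judges met this bar
    print(f"{judge}: mean {m:.1f}, +/-{half_range:.1f} points, "
          f"consistent: {consistent}")
```

The 91/87/95 triple mirrors the article's example of a wine scored 91 on one tasting and 87 or 95 on the next.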

"I'm happy we did the study," said Mr. Pucilowski, "though I'm not exactly happy with the results. We have the best judges, but maybe we humans are not as good as we say we are."


Good article.

Many moons ago, when I was actively involved with a well-known DC-area wine group, I did a blind tasting in which I served five California Cabs that Parker had rated from the mid-50s to around 90. The idea was to see whether the group's impressions followed the ratings, the underlying question being whether it was really worthwhile to chase, and pay more for, wines he had reviewed favorably. It wasn't terribly scientific, but in any case there was just about zero correlation. Whether this says more about the wines or the tasters can't be known, of course, but most everybody there was enthusiastic about wine. I did other blind tastings, for example pitting Bud against fancy European beers, with similar results (they were amazed it was Bud; I had purposely gone out and gotten long necks so the bottles sticking out of the bags wouldn't give anything away).
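The informal test described above amounts to a rank correlation: do the group's blind scores track the critic's published ones? A small self-contained sketch with invented numbers (not the actual tasting data), using a plain Spearman correlation with no tie handling:

```python
# Hypothetical data: five wines, a critic's published scores vs. the
# tasting group's averaged blind scores. All numbers are invented.
critic = [56, 68, 74, 85, 90]
group = [88, 84, 86, 83, 87]

def ranks(xs):
    # Rank each value (1 = lowest); ties not handled, fine for a sketch.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman's rho via the rank-difference formula: 1 - 6*sum(d^2)/(n(n^2-1)).
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman(critic, group)
print(f"Spearman rho = {rho:.2f}")  # weak/near-zero here -> ratings not predictive
```

A rho near +1 would mean the group's blind rankings reproduced the critic's; a value near zero is the "just about zero correlation" outcome.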

My view is that we learn much in blind tastings, of wine, beer, coffee, olive oil, fancy grades of rice, you name it. Generally what is learned is that these "expert" ratings, and general product reputations, are frequently more smoke than fire. I am always skeptical about claims of great superiority in foods and drinks, coming from those who consumed it knowing what it was. Do it blind, then tell me which one was the good one.

My interest in wine started when Parker was still a junior lawyer at BG&E, I think it was, and in those days most reviews one saw, if they had a scale at all, made do with 5 or maybe 20 points at most. I'm personally convinced that the main reason Parker caught on was that he started the 100-point scale, which gave an air of science and precision to the process. But the truth is it's all hogwash, a scam that mostly sells newsletters. Nobody can taste two wines, score one 87 and the other 89, and convince me the difference is meaningful. The wine merchants of course know this, but play along because it works. I don't blame them. We have met the enemy and they are us.


On a related note, sometime last year, I attended one of the Town Hall meetings conducted by Tim Westergren, founder of Pandora and the Music Genome Project. Just as his team of musicians logs attributes for the thousands of songs in the Pandora database, he mentioned that other groups were looking into doing something similar for a wine genome project.

(“Genomes” in this sense would be taste/texture/mouthfeel attributes, not grape DNA.)

I did a half-hearted web search on a wine genome project, but found little. I am fascinated by the potential of this methodology and hope someone is doing something with it somewhere. Granted, many of the musical attributes Pandora captures per song could be far more objective than wine sensations, but attribute cataloging seems a compelling frontier.

I am going to drop Tim a line to see if he's hooked into the latest thinking on this. In the meantime, I would welcome thoughts from anyone here, especially if you know the status of a wine-focused endeavor that mirrors the Music Genome Project's careful methodology.


FYI update on this topic...

Today's half-assed web search turned up at least one company claiming to be the next Pandora of wine:

WineTamer (and no, "I don't even know 'er!")

A couple of other hits came up, too, but mostly in interviews. The company websites did not call out Pandora, or a comparable "genome" methodology, as the basis for their data collection.

Anyone know of others out there taking this approach?

