Death To Star Reviews

The idea of reviewing things with some arbitrary number of somethings is not new. According to the Wall Street Journal, the first movie to be rated on the star system was in 1928. This system has since bled into pretty much every place in existence that has online reviews. Amazon, Newegg, Macworld, iTunes, and just about every other major retailer have adopted it in some form, whether through professional reviewers, registered users, or both. But at the end of the day the system is broken and stupid.

At a high level, star reviews are OK, and I use them on big sites, especially when there are a lot of options to choose from and I just want to separate the horrible from the best. For example, just recently I was looking for a cheap pair of desktop computer speakers. I don’t like putting my headphones on every time I want to watch a YouTube video, so I wanted something super cheap, and didn’t care if they were Bose-quality for music. I looked on Newegg and Amazon and found an $11 set of Logitech speakers that fit the bill. In this case, both sites had loads of reviews, and the star ratings were high enough to distinguish them from other similarly priced models. This is the scenario where star reviews work: something small that I won’t use that often, where I am just trying to find the least crappy version from a whole lot of options.

But when you are looking for more information, things break down quickly. Most star systems these days run 0-5, although some are 0-4. Users are free to add or remove stars from their rating based on whatever criteria they see fit. And this is where things get dicey: people have different needs, and almost everyone’s priorities for specific features will vary.

Let’s use the example of an iPhone case. Company Alpha makes an iPhone case that is super protective: when an iPhone is inside this case, it can fall 50 feet and not break. The downside is that the headphone jack is a little obstructed, and as a result not all headphones will work while the case is on. This would undoubtedly cause some people to take off one star, if not multiple. The problem is that it’s totally relative to the interested party. Perhaps the customer only uses Bluetooth headsets, or perhaps their lifestyle is such that dropping their phone 23 feet is a regular, or potentially regular, occurrence. In that case, all they care about is whether or not their phone will stay protected, and in all likelihood they would rather spend $50 on a special set of headphones than run the risk of having to spend $500 on a new phone. I regularly run into this issue when actually reading low-star reviews: often the reason someone removed stars doesn’t apply to me.

The second major problem with star reviews is that people take off stars for “missing features”. Let’s step away from the tech realm for a second and talk home appliances. Many refrigerators these days come with an alarm that sounds if the door is not fully closed. I assume this feature is especially important for people with children who tend not to push the door closed all the way. If someone is interested in a fridge, buys it, and later writes a review saying “Great fridge. Would have given it five stars (instead of four) if it had a door alarm”, is that fair? At no point did this particular fridge advertise that it had an alarm. It’s not as if it had an alarm that didn’t work well; it never had one at all.

This pops up in the iOS and Mac App Stores regularly. The number of stars given is a direct result of comparing one app to another, and often involves removing stars for features that other applications have. This makes absolutely no sense. Applications should be rated on the quality of the features they do have, not on what they “lack” compared to others. The app launcher Alfred has a much smaller feature set than that of Launchbar[1], but Alfred is an awesome piece of software and I love it. It wouldn’t be fair to downgrade its rating just because it does not have features that Launchbar does.

Of course, in written-out reviews all of these things are (and should be) fair game. If a reviewer is reviewing refrigerators and says “Model A is good, but Model B offers everything it does plus a door alarm at the same price”, that is super helpful. These kinds of written facts are tangible; there is no disputing that there is a difference between Model A and Model B. But maybe Model A is actually $20 cheaper. It’s possible that $20 is a dealbreaker for some people, while for others the alarm is worth the extra $20. Some people may remove a star for this, others may not. If someone sees a 5-star rating and later finds out the fridge doesn’t have an alarm, they might be mad that people were not taking stars off for that, while others would get it and not understand why anyone would take off a star.

Just about the only pro involving star reviews is easy aggregation of results. If I am interested in a product, especially if it’s something low cost, seeing hundreds of reviews with an average of 4 out of 5 stars is useful to help me make a quick decision. And in cases like that, a few bad apples usually don’t ruin it for people. And that is why star reviews aren’t going anywhere: they are too convenient and easy for people to both rate and decide on purchases. People are lazy, and this is a big time saver, whether it ends up helping them in the long run or not.

At the end of the day, though, star reviews are difficult for both product makers and consumers. When I read a professional review, the best evaluation I can get is whether the product is worth buying and how it compares to other similar products. Sometimes I read a review and it’s not clear whether I should buy the thing, yet it still gets some sort of number rating at the end. Does 3 out of 5 stars mean I should buy? What about 2 out of 5? If anything less than 3 is a no-go, then why even have those ratings? When people are reviewing things on the App Stores, what I am really interested in is whether or not they would buy the app again if they had to. How do I know how many stars equate to a “recommend” and how many do not?

The best fix would be exactly that: change the review systems around the web to be more “yay or nay” based. For professional reviews, I want the reviewer to tell me whether or not I should buy the item in question, and not hesitate to compare it to other products either. Most of the reviewers I read these days are people I trust, and I want to know whether, given the chance, they would go back and buy the same product or something else.

For non-professional reviewers, I would simply like to see a “recommend” or “do not recommend” and, in the “do not” case, possibly a suggested alternative. Seeing a total of recommend vs. do not recommend is a lot clearer than 2-3-4 stars, and it would keep people from playing the “removed one star because I don’t like the dock icon” game.
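To make the aggregation point concrete, here’s a tiny sketch with made-up numbers (not real review data, and the 4-stars-and-up cutoff for “recommend” is an arbitrary choice for illustration):

```python
# Hypothetical ratings for a polarized product -- illustrative only.
stars = [5, 5, 1, 4, 5, 1, 5, 4, 1, 5]

# The star average flattens a love-it-or-hate-it split into one muddy number.
average = sum(stars) / len(stars)
print(f"Average: {average:.1f} out of 5")  # 3.6 -- buy or don't buy?

# A yes/no tally of the same opinions is immediately legible.
# (Treating 4-5 stars as "recommend" -- an arbitrary cutoff for this sketch.)
recommend = sum(1 for s in stars if s >= 4)
print(f"{recommend} recommend, {len(stars) - recommend} do not")  # 7 vs. 3
```

The same ten opinions produce a vague 3.6 average but a fairly clear 7-to-3 recommendation, which is the whole argument in miniature.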

  1. At least that’s my understanding