I never quite 100% got why other movie critics and especially filmmakers hated Rotten Tomatoes. After all, all it is is a review aggregator. The site doesn’t rate or critique movies itself. The Tomato Meter isn’t some well-thought-out score that the website comes up with on its own. It simply tallies the reviews a movie receives and tells you what percentage of them are positive or negative.
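To be clear about how simple that math is, here is a rough sketch of the tally (my own illustration of the idea, not Rotten Tomatoes’ actual code; the function name and the yes/no review format are invented for the example):

```python
def tomato_meter(reviews):
    """Percentage of reviews counted as positive ("fresh").

    `reviews` is a list of booleans: True if the critic liked the
    movie, False if not. Hypothetical helper, for illustration only.
    """
    if not reviews:
        return None  # no reviews yet, no score
    fresh = sum(1 for liked in reviews if liked)
    return round(100 * fresh / len(reviews))

# 71 of 100 critics liked it -> a 71% "fresh" score
score = tomato_meter([True] * 71 + [False] * 29)
```

Note that the tally says nothing about *how much* any critic liked the movie; a lukewarm thumbs-up counts exactly the same as a rave.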
It’s merely one tool in a movie-goer’s toolbox, right up there with MetaCritic, CinemaScore and actual critics’ reviews. Each one tells you something a little different about the movie in question, and combined they help audiences not waste their money on movies they aren’t likely to enjoy.
Rotten Tomatoes tells you how broadly liked a movie is (i.e., did people generally like or dislike it). MetaCritic tells you how well liked a movie is (i.e., to what degree did people like it). CinemaScore tells you whether moviegoers’ expectations matched the reality of what they were shown (i.e., did the advertising match the movie that actually screened in theaters). And, of course, individual reviewers — by far the most useful (if most time-consuming) of these tools — tell you, in depth, the thoughts and impressions of an individual person who saw the movie (hopefully one whose opinions generally align with your own). With a few minutes of reading and research, it’s easier than ever to find a movie playing near you that you are likely to have a good time with.
Over the past few weeks, however, I’ve begun to come around to exactly why critics and filmmakers dislike these metrics so much. On the whole, it has nothing to do with the metrics themselves, all of which have their place and purpose and are designed to help moviegoers make informed decisions about the media they choose to support and consume. Rather, the problem is that nobody is using them correctly: in the way they were intended to be used.
When Solo came out, for instance, I saw one notable film essayist — a man for whom I have nothing but the highest respect — compare its 71% rating to Revenge of the Sith’s 79% rating and conclude that the latter movie was objectively and demonstrably superior to the former. It doesn’t matter that the score isn’t meant to be a point-by-point comparison between the two — just that, in broad strokes, most people who saw either movie tended to like it — it was being used like some kind of standardized test.
Now that Jurassic World: Fallen Kingdom’s reviews are starting to trickle in over on Rotten Tomatoes, I’m seeing the same thing start to happen for that franchise. People are comparing the new movie’s 63% rating (at least as of this writing) to the last movie’s 71% score. Rather than noting that both scores show that, in general, most people liked both movies, they are trying to calculate their enjoyment of the new movie based on the old one: “Fallen Kingdom is nearly a full letter grade lower than the first one, so I guess that I can skip it.” The Tomato Score isn’t a review in and of itself; it just shows how many people liked the movies in the broadest possible terms.
Then, of course, there’s CinemaScore: the metric that people only pay attention to on the rare occasion that it’s actually bad. It, in essence, merely measures how satisfied audience members are with the movie immediately after the fact: which, unless they were promised a very different movie, is generally very positive. But, again, this is not a review: it’s a measure of met expectations. Divisive or otherwise difficult-to-assess movies — like my absolute favorite from last year, Mother! — are the ones that tend to fail.
And if there’s anything that the last year has taught us, it’s that offering critic and audience scores side-by-side creates a false binary where both are presented as equally valid and equally informative. There is demonstrable evidence that disgruntled DCEU fans — convinced of a vast and overriding conspiracy in which Disney was somehow paying off critics the world over to give their movies good reviews (something which, I can say from experience, does not happen) — “review bombed” The Last Jedi in some misguided attempt at balancing the scales. The result was that movie getting a rather deceptively bad audience score (despite otherwise glowing reviews and strongly positive word-of-mouth from people who actually paid money to see it) and Rotten Tomatoes having to take pre-emptive action when a similar plot was discovered for Black Panther at the beginning of the year.
Again, all of these metrics are useful — each in its own way — but they cease to mean anything at all if they aren’t used in the context for which they were intended. Audience scores are susceptible to orchestrated sabotage. Tomato Scores are really only useful in the broad “good/bad” binary indicated by whether the icon beside the movie’s title is a red tomato or a green splat. CinemaScore simply tells you whether audiences got the movie they were promised.
The best way to tell if you will or won’t like a movie ahead of time is to find a reviewer whose opinion you trust and put your faith in their ability to evaluate it accordingly. I have a whole cadre whose insights I value and reviews I will read regardless of whether or not I see (or even want to see) the movie in question. And sometimes they’ll turn me on to a movie that was never on my radar to begin with, like Upgrade or Revenge or Colossal or Dave Made a Maze (to name a few recent examples). If you have one of these guys in your corner, you can honestly ignore all of this other noise.