3.5 doesn’t necessarily round to 4…

It’s Basic Math, Isn’t It?
Somewhere in late elementary school we all learned about rounding numbers. If the numbers after the decimal are .5 or higher, you round to the next whole number, right? 3.5 becomes 4. 4.5 becomes 5. Etc.

So why is it that some reviewers on Goodreads and Amazon include a half-star rating in the body of the review but don’t round up to the next whole star in the official review score? Here’s a hypothetical example. The body of the review says 3.5 stars, but the reviewer assigned a numeric rating of 3 — and it’s the 3 that goes into Goodreads’ calculations. Did the reviewer miss that day in elementary math? Did the book lose out on a full star in that review? No. The system is working exactly the way Goodreads intended.

What’s Going On Here?
Before I launch into an explanation, let me provide a little background about my perspective. I have a doctorate in an applied area of social sciences, specifically health economics and outcomes research. Part of the quantitative training in this field includes study in survey analysis, psychometrics, etc. In my non-book-writing career, I’ve spent considerable time designing and using instruments that collect subjective ratings from individuals.

I provide this background not to convince you to believe me. I do have expertise in this area, but more importantly, I want you to know my perspective when I dig into this issue. I’m looking at it from the cold, objective perspective of a social science researcher and not as an artist interpreting how others are judging my work.

With that out of the way, let’s get to the numbers. In fields like engineering, physics, and chemistry, rounding works exactly the way we learned when we were young. That’s because we’re taking objective measurements of a property or attribute, like weight or length. If you and I both use the same measurement device, we should always get the same answer.

A book star rating is an entirely different animal. It’s a subjective summary score, where each reviewer uses different criteria. The same reviewer might give a different score on a different day due to mood, recollection, etc. It’s messy stuff. So 4.5 stars is not a robust measurement that is meaningfully different from 4.4 stars or 4.6 stars. It’s a hand-wavy way of saying that the reviewer thought the book was better than 4 stars but not worthy of 5.

So why don’t Goodreads and Amazon allow reviewers to use half stars? Aren’t they losing information by forcing the reviewer to round? Nope. They force you to round so that they gain information. Wait… How can they gain information by reducing the precision of the score? Ahh… That is the money question. It’s the reason they don’t have half stars. They are asking you to make a distinction that only you can make. That book you’d like to rate 4.5 stars… Is it more accurate in your mind to rate it a 4 or to rate it a 5? Now the reviewer has to make a call. In the hypothetical review of the book above, where the body of the review says 3.5, the reviewer made that call and said, in their mind, it’s more accurate to go with 3. (Social scientist me says well done. Artist me says dang it!)

A single score doesn’t tell you much, but if a lot of reviewers make the same call (to round up or down), the resulting average score is pushed the direction that matches that sentiment. If the reviewers are all over the place, the up-rounders and down-rounders should offset each other.
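The averaging argument is easy to see with a toy example (the numbers here are made up for illustration). If every reviewer could enter 4.5, the average would sit at 4.5 regardless of how the group actually leans. Forcing each reviewer to pick 4 or 5 lets the average drift toward the group’s real sentiment:

```python
# Ten hypothetical reviewers all feel a book is "about 4.5 stars."
# With half stars allowed, the average hides how the group leans.
half_star = [4.5] * 10
print(sum(half_star) / len(half_star))  # 4.5

# Forced to choose, 7 reviewers decide 5 is closer and 3 decide 4 is.
# The average now reflects that majority sentiment.
forced = [5] * 7 + [4] * 3
print(sum(forced) / len(forced))  # 4.7

# If the group is evenly split, up-rounders and down-rounders offset
# each other and the average lands right back at 4.5.
split = [5] * 5 + [4] * 5
print(sum(split) / len(split))  # 4.5
```

Same underlying opinion in all three cases, but the forced choice extracts an extra bit of information from each reviewer.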

A Question of Fairness
I’ve seen a number of authors complain about improper rounding on Twitter. As you can see above, I don’t think they are right — at least not in terms of math. But I have seen a different complaint that needs acknowledgement: that authors of color receive ratings that are disproportionately rounded down. Since star ratings are subjective, biases definitely play in, so that is certainly plausible. (If anyone can point me to a paper or some analysis on this, I’d much appreciate it.) But if that is happening, it’s not due to a lack of precision (or what I would call false precision) in the rating scale; it’s because some people are jerks. I don’t think the jerkiness of some rating randos changes any of what I have said above. Bias is a separate problem that plagues reviews.

Amazon tries to deal with bias and rating reliability through some black box weighting of reviews in the calculation of average ratings. So two people’s 4 star reviews don’t necessarily count the same. (I’m not sure if Goodreads does this. They’re owned by Amazon, but Goodreads seems to be more like the Wild West with reviews.)
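To make the idea of weighted averaging concrete, here’s a minimal sketch. Amazon’s actual weights are a black box, so the weights below are entirely hypothetical — the point is only that identical star ratings can contribute differently to the displayed average:

```python
# Toy weighted average. Three reviews: two 4-star, one 2-star.
# The weights (hypothetical "reliability" scores) are invented here;
# Amazon does not publish how it weights reviews.
ratings = [4, 4, 2]
weights = [1.0, 0.4, 0.8]

plain_avg = sum(ratings) / len(ratings)
weighted_avg = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

print(round(plain_avg, 2))     # 3.33
print(round(weighted_avg, 2))  # 3.27
```

Notice the two 4-star reviews don’t pull equally: the one with the lower weight moves the average less, which is exactly the behavior described above.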

This post is long enough already. I’ll write about bias in reviews in a future post. Stay tuned.

5/4 – BOOK LAUNCH – The Between

The official launch event is happening with the world’s greatest independent bookstore, BookPeople, in Austin, Texas, on May 4th. It’s a virtual event, so you can participate from the comfort of wherever the hell you happen to be. Click here to register and pre-order.

The release date is April 27th. If you don’t go through BookPeople, pick it up from your favorite retailer. If you’re an audiobook fan, I think you’ll love Geoff Sturdevant’s narration. You can get the audio from Audible, Apple, etc.

Here’s the Goodreads link.

The Between — coming April 27, 2021


That cover is brilliant, isn’t it? It’s the work of Shayne Leighton. I was hoping for something that would stand out and grab prospective readers’ attention. If it caught yours, check out the summary below:

While landscaping his backyard, ever-conscientious Paul Prentice discovers an iron door buried in the soil. His childhood friend and perpetual source of mischief, Jay Lightsey, pushes them to explore what’s beneath.

When the door slams shut above them, Paul and Jay are trapped in a between-worlds place of Escher-like rooms and horror story monsters, all with a mysterious connection to a command-line, dungeon explorer computer game from the early ’80s called The Between.

Paul and Jay find themselves filling roles in a story that seems to play out over and over again. But in this world, where their roles warp their minds, the biggest threat to survival may not be the Koŝmaro, risen from the Between’s depths to hunt them; the biggest danger may be each other.

Here it is on Goodreads and Amazon.

Beta Testing Your Novel

A thinly-cited Wikipedia entry credits IBM with originating the “alpha/beta testing” terminology for software development. Even if you’re not a software developer, you’ve probably seen software with labels like beta and early-access. The objective of beta testing is to get real-life users to spend time with the software and find the show-stopping bugs before the product is made generally available.

In many ways, novels are like software: they are drafted, edited, optimized, tested, and (hopefully) eventually published. There are plenty of resources on the Web about how to alpha and beta test your manuscript (e.g., here and here), so I won’t duplicate their contents. Instead, I’ll provide some thoughts, based on my experience, about what makes the process work and what breaks it.

  1. Don’t confuse alpha and beta testing. Alpha testing happens early in the process and generally calls for a tester who is a good big-picture thinker and can tell you if the major concepts work. Beta readers, on the other hand, are readers who resemble the customers who would actually buy your book. Your betas help you polish by identifying bugs. If your betas are finding major plot and character issues, you might want to rethink where you are on your manuscript development path.
  2. Dread, impostor syndrome, anxiety, etc. are all perfectly normal feelings once you hit send and your manuscript goes to betas. It’s rare in life that we put our flaws on display with the express purpose of having them called out. If you don’t feel uncomfortable, you’re doing it wrong.
  3. Diversify your readers. You want readers who closely resemble your target audience, but if they are all too similar in their likes and preferences, they’ll share blind spots that can be large.
  4. Beta readers are probably right when several identify the same problems. Remember, your story doesn’t take place on the paper; it takes place in the reader’s head. If you get consistent feedback that something is broken, assume that it is, or at the very least that it can be improved.
  5. Avoid biased readers, like close friends and family. So much can go wrong here. Overly positive feedback can be damaging — it can blind you to real problems. Also, assume that some beta readers won’t end up finishing your manuscript. If that’s going to create awkwardness each time you encounter this person going forward, maybe you don’t want them as a beta.
  6. Cherish good beta readers and respect them by only sending work that is truly ready. You’re asking someone to spend several hours attentively focused on your work. That’s a lot to ask. If you’re unsure whether your manuscript is ready, use writing workshops, critique partners, and other resources first.