One of my former insurance colleagues once said, “If a company makes a change, it probably won’t benefit you.”
Assuming the photo above is legit, and who knows, McDonald’s these days says it will round cash totals to the nearest five cents. McD is probably not the only one doing it. So if your change ends in {1, 2, 6, 7}, McD rounds down and you lose 1 or 2 cents; if your change ends in {3, 4, 8, 9}, McD rounds up and you gain 1 or 2 cents; and if your change ends in {0, 5}, rounding has no effect. So for 40% of the possible last digits you lose, for 40% you win, and for 20% there is no effect. The question is: are the ten last digits 0 through 9 equally likely?
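A quick sanity check: if every last digit 0 through 9 were equally likely, the gains and losses would cancel exactly. A minimal R sketch (using R’s built-in round(), which here agrees with the digit rule above, since no integer number of cents lands exactly halfway between two multiples of 5):

```r
# Effect of nearest-5-cents rounding on each possible last digit of the change
last_digits <- 0:9
rounded <- round(last_digits / 5) * 5   # nearest multiple of 5
effect <- rounded - last_digits         # positive = customer gains

print(rbind(digit = last_digits, effect = effect))
mean(effect)   # 0: under a uniform last-digit distribution, rounding is fair
```

The per-digit effects are 0, −1, −2, +2, +1, 0, −1, −2, +2, +1, so a uniform distribution of last digits is exactly fair; any systematic gain or loss has to come from the last digits being non-uniform.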
All I would need is a nice sample of McD receipts from cash transactions. Well, that’s not going to happen. Let’s try a different approach.
As with other retail prices, I assume that many meal prices end in 99 cents, such as $5.99, to seem less expensive. (As an actuary, I’ve never been asked to do this.) Marketers know there’s a left-digit effect, where customers focus on the leftmost digit, so a price like $5.99 feels significantly cheaper than $6.00. See psychological pricing for more. But even if the majority of meal prices end in 99 cents, I believe the effect of that majority is neutralized by the factors described below.
I haven’t been to a McDonald’s in years. But I do know there are many different menu items; I have no idea how often, say, a Big Mac is purchased versus Chicken McNuggets, or how frequent each possible combination of menu items is; prices vary by location because franchises set their own prices; and sales tax varies by state, and sometimes by city within a state. No doubt a McD data analyst has access to all this data, but I don’t. What follows is therefore not an exact analysis, but rather an approach that should be unbiased.
At first I thought Benford’s law might be useful. This law applies to the distribution of first digits, not last digits, and says that under certain circumstances (such as when values span several orders of magnitude, which is probably not true for a place like McDonald’s), P(1) = 30.1%, P(2) = 17.6%, P(3) = 12.5%, and so on. This is interesting, but not useful here. See Benford’s law for more.
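For reference, those percentages come from the Benford formula P(d) = log10(1 + 1/d), which a one-liner reproduces:

```r
# Benford's law: probability that the first significant digit is d
d <- 1:9
benford <- log10(1 + 1 / d)
round(100 * benford, 1)   # 30.1 17.6 12.5  9.7  7.9  6.7  5.8  5.1  4.6
```

The nine probabilities sum to log10(10) = 1, as they must.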
In How to Solve It, George Polya says, “If you can’t solve the proposed problem, try solving a related problem first.”
I found that if I narrowed the problem down to a single meal item, the McDonald’s Big Mac Meal (MBMM), I could get a representative average price per state. I could also apply a representative average sales tax rate per state.
For each state, I took the MBMM price, added tax, and applied the McD rounding rule to the last digit. For each state, I define Cents as the after-tax cents portion of the MBMM price, CentsRounded as the cents portion after the McD rounding rule, and RoundingDiff = CentsRounded – Cents. The unweighted average rounding difference across states was 0.04 cents. Not 4 cents, but 4% of a cent. This is positive and indicates a slight gain for McDonald’s, but it is barely greater than zero. That 0.04 cents is per transaction; of course, McDonald’s sells a lot of hamburgers.
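To make the mechanics concrete, here is the calculation worked out for a single state, using the Alabama figures from the tables in the script below ($9.49 at a 9.44% combined rate). R’s round() stands in for the explicit digit rule, which is equivalent for an integer number of cents, since none lands exactly halfway between two multiples of 5:

```r
# One state worked by hand: price plus tax, then the 5-cent rounding rule
price <- 9.49
tax <- 0.0944
total_cents <- round(price * (1 + tax) * 100)  # 1039, i.e. $10.39
cents <- total_cents %% 100                    # 39; last digit 9, so round up
cents_rounded <- round(cents / 5) * 5          # 40
rounding_diff <- cents_rounded - cents         # +1 cent to McDonald's
```

The script below repeats this arithmetic for all fifty states and averages the differences.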
This study had many limitations, some of which I noted above, so it is by no means exhaustive. But to the extent that I did what I could, my former colleague from the opening paragraph was right: “If a company makes a change, it probably won’t benefit you.”
The R code is as follows:
library(dplyr)
# --- 1. Big Mac Meal Prices by State (approx, USD) ---
meal_prices <- tibble::tribble(
~State, ~Price,
"Alabama", 9.49, "Alaska", 11.59, "Arizona", 12.99, "Arkansas", 8.79,
"California", 10.69, "Colorado", 9.89, "Connecticut", 9.79, "Delaware", 8.99,
"Florida", 9.39, "Georgia", 9.49, "Hawaii", 10.99, "Idaho", 9.29,
"Illinois", 9.59, "Indiana", 8.99, "Iowa", 8.89, "Kansas", 9.09,
"Kentucky", 7.79, "Louisiana", 9.69, "Maine", 9.19, "Maryland", 9.49,
"Massachusetts", 9.99, "Michigan", 8.59, "Minnesota", 9.19, "Mississippi", 9.29,
"Missouri", 8.99, "Montana", 9.09, "Nebraska", 8.59, "Nevada", 9.69,
"New Hampshire", 8.99, "New Jersey", 9.49, "New Mexico", 9.09, "New York", 9.89,
"North Carolina", 9.29, "North Dakota", 10.59, "Ohio", 8.89, "Oklahoma", 8.99,
"Oregon", 10.69, "Pennsylvania", 9.19, "Rhode Island", 9.49, "South Carolina", 9.29,
"South Dakota", 9.09, "Tennessee", 9.79, "Texas", 9.19, "Utah", 9.39,
"Vermont", 9.19, "Virginia", 8.99, "Washington", 9.69, "West Virginia", 8.99,
"Wisconsin", 9.19, "Wyoming", 8.99
)
# --- 2. Combined State + Local Sales Tax Rates (approx, fraction) ---
tax_rates <- tibble::tribble(
~State, ~TaxRate,
"Alabama",0.0944,"Alaska",0.0182,"Arizona",0.0837,"Arkansas",0.0948,
"California",0.0885,"Colorado",0.0780,"Connecticut",0.0635,"Delaware",0.0000,
"Florida",0.0700,"Georgia",0.0739,"Hawaii",0.0450,"Idaho",0.0602,
"Illinois",0.0874,"Indiana",0.0700,"Iowa",0.0689,"Kansas",0.0874,
"Kentucky",0.0600,"Louisiana",0.1011,"Maine",0.0550,"Maryland",0.0600,
"Massachusetts",0.0625,"Michigan",0.0600,"Minnesota",0.0749,"Mississippi",0.0707,
"Missouri",0.0813,"Montana",0.0000,"Nebraska",0.0696,"Nevada",0.0849,
"New Hampshire",0.0000,"New Jersey",0.0660,"New Mexico",0.0777,"New York",0.0852,
"North Carolina",0.0698,"North Dakota",0.0696,"Ohio",0.0724,"Oklahoma",0.0908,
"Oregon",0.0000,"Pennsylvania",0.0634,"Rhode Island",0.0700,"South Carolina",0.0744,
"South Dakota",0.0640,"Tennessee",0.0961,"Texas",0.0819,"Utah",0.0702,
"Vermont",0.0636,"Virginia",0.0567,"Washington",0.0947,"West Virginia",0.0648,
"Wisconsin",0.0572,"Wyoming",0.0556
)
# --- 3. Merge datasets ---
df <- inner_join(meal_prices, tax_rates, by = "State")
# --- 4. Compute totals and apply rounding rule ---
options(dplyr.width = Inf)
df <- df %>%
  mutate(
    # Total in cents, rounded to the nearest cent
    Total_cents = round(Price * (1 + TaxRate) * 100),
    Dollars = Total_cents %/% 100,  # whole dollars
    Cents = Total_cents %% 100,     # cents 0-99
    # Apply the 5-cent rounding rule
    CentsRounded = sapply(Cents, function(x) {
      last_digit <- x %% 10
      if (last_digit %in% c(1, 2, 6, 7)) {
        floor(x / 5) * 5            # round down to a multiple of 5
      } else if (last_digit %in% c(3, 4, 8, 9)) {
        ceiling(x / 5) * 5          # round up to a multiple of 5
      } else {
        x                           # last digit 0 or 5: no change
      }
    }),
    # Final total in dollars
    TotalRounded = Dollars + CentsRounded / 100,
    # Rounding difference relative to the nearest-cent total (in cents)
    RoundingDiff = CentsRounded - Cents
  )
head(df)
# --- 5. Summaries ---
mean_diff <- mean(df$RoundingDiff) # positive is benefit to company
sd_diff <- sd(df$RoundingDiff)
avg_abs <- mean(abs(df$RoundingDiff))
cat("Average rounding difference (¢):", round(mean_diff,3), "\n")
cat("SD of rounding difference (¢):", round(sd_diff,3), "\n")
cat("Average absolute rounding (¢):", round(avg_abs,3), "\n\n")
# Distribution of rounded cents
print(table(df$CentsRounded))
# --- 6. Histogram of rounding differences ---
bin_colors <- c("red", "green", "blue", "yellow", "purple")
hist(df$RoundingDiff,
     breaks = seq(-2.5, 2.5, 0.5),
     col = bin_colors,
     main = "Distribution of Rounding Differences\nPositive is benefit to company",
     xlab = "Rounding Difference (¢)",
     ylab = "Number of States",
     font.lab = 2)
# --- 7. State-by-state table of rounding effects ---
state_table <- as.data.frame(df) %>%
  select(State, Price, TaxRate, Total_cents, TotalRounded, RoundingDiff) %>%
  arrange(desc(RoundingDiff))
print(state_table)


