Your utility ratio
How many units of harm would you allocate to others to prevent 1 unit of harm to yourself?
A thought experiment: You must press one of two buttons, red or blue. The red button will give you a small electric shock. The blue one will give a similar shock to one hundred random strangers.1 You’ll never meet them and nobody will ever know it was you.
What are our obligations here? Feel free to answer the poll about what you would do before reading on.
Clearly most people would take the shock for themselves to prevent a much larger harm to others—100 times larger, in this case.
What if it were instead a choice between you and 10 strangers? It would be harder to make that sacrifice, but probably still the right thing to do.
You versus one stranger? In a lab setting I might do the virtuous thing and give the shock to myself, but in an analogous real-world situation I’d be shocking the other guy all day long.2
Utilitarianism is an ethical standard that judges right and wrong based on the total utility that our actions cause. There are different ways to define utility, but basically you’re supposed to try to maximize the world’s total pleasure, or minimize the total pain, or maximize pleasure minus pain.
The utilitarian response to the thought experiment above would be to take the shock yourself as long as the alternative harms more than one other person. If it’s you versus exactly one other person, the choice doesn’t matter.
If there is such a thing as objective morality, it probably looks something like utilitarianism. The alternatives for making moral judgments are things like religion, social customs, taboos, and our gut feelings. But these attitudes change all the time, and are therefore at high risk of being wrong. Maximizing the good of everyone is timeless.3
Nobody is a perfect utilitarian, because that would be too demanding. As long as there’s someone in the world who would gain more from your help than it would cost you to give it, you’d have an obligation to help. You’d have to keep helping up to the point where helping any further would do more harm than good, because of the increasing cost to yourself and the diminishing returns to them.
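A toy model can make the diminishing-returns point concrete. This sketch is my own illustration, not the author’s: I assume both people get logarithmic utility from income, so a marginal dollar is worth more to whoever has less, and the numbers are hypothetical.

```python
import math

def utilitarian_transfer(donor: float, recipient: float) -> float:
    """Dollars a perfect utilitarian would give away, assuming both
    people get log(income) utility from their income.

    Giving continues while the recipient's marginal gain from a dollar
    exceeds the donor's marginal loss; with log utility, that stops
    only when the two incomes are (nearly) equal.
    """
    given = 0.0
    while donor > 1 and (
        math.log(recipient + 1) - math.log(recipient)   # recipient's gain from $1
        > math.log(donor) - math.log(donor - 1)         # donor's loss from $1
    ):
        donor -= 1
        recipient += 1
        given += 1
    return given
```

With a $50,000 earner and a $1,000 earner, the loop keeps transferring until the two incomes meet in the middle, which is the “living in a van and donating most of your income” conclusion in numeric form.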
If you live and work in a wealthy country like the US, for example, you’re not a perfect utilitarian unless you’re living in a van and donating most of your income to people in developing countries.
The problem is the definition of the objective function. The function a utilitarian is supposed to maximize is

U = u_you + Σ u_others

which would mean you’re indifferent between allocating utility to yourself and to others. Of course we prefer ourselves. It’s more like:

U = R × u_you + Σ u_others

if we value our own preferences R times more than others’. You might call R your utility ratio. Returning to the two-button thought experiment, R is the threshold number of strangers at which you’d switch from pressing one button to the other.
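The ratio turns the thought experiment into a mechanical decision rule. A minimal sketch, under my own assumption that every shock counts as one unit of harm:

```python
def button_to_press(r: float, n_strangers: int) -> str:
    """Pick the button that minimizes R-weighted total harm.

    Red shocks you once (weighted cost r * 1); blue shocks
    n_strangers strangers once each (cost n_strangers * 1).
    """
    my_cost = r * 1.0             # your shock, weighted by your utility ratio
    their_cost = float(n_strangers)
    if my_cost < their_cost:
        return "red"              # take the shock yourself
    if my_cost > their_cost:
        return "blue"             # pass it to the strangers
    return "either"               # at the threshold, the choice doesn't matter

# A pure utilitarian (r = 1) presses red against 100 strangers and is
# indifferent against one; with r = 5, the switch happens at 5 strangers.
```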
The questions I’m interested in are:
What are the morally acceptable bounds for R?
What values of R are we implying in different ways in our lives?
I’ll go out on a limb and say I think the highest acceptable ratio is about 5. If I take a benefit for myself that would be five times more valuable to someone else, then I’ll feel a little guilty, but it’s a tough world out there and you’ve got to look out for yourself, et cetera. But if I’m doing something more lopsided than that, there’s an obligation for me to notice and change my behavior.
You might think five is already shockingly selfish—but I think it’s roughly in line with things we do casually without thinking about it. The two big issues that come to mind are global poverty and animal suffering.4 But this post is getting long already.
Some of this feels clichéd and obvious—yes, obviously we should help others; yes, obviously we favor ourselves to some extent. But the value of thinking in terms of a ratio is that it makes these trade-offs explicit and can help us find areas of our lives where our actions are out of whack with our intentions.
1. This post is going to be a big rip-off of Peter Singer’s essay “Famine, Affluence, and Morality,” just FYI.

2. There might be experiments like this, but I didn’t find a great example. Please let me know if you know of one.

3. And the principle adapts to whatever happens to be good or bad for the person, time, and place.

4. Oops, sorry, now this post is turning into a rip-off of Peter Singer’s classic book “Animal Liberation.” Can you tell I’m a fan?

