Instrumental Judgment and Expectational Consequentialism
by Brian Tomasik
First written: 2005; last edit: 5 May 2013
[A]ctions are evaluated in terms of the range of likely consequences. [...] The actual consequences of an action may be highly significant, but they do not bear on the moral evaluation of the action.
--Noam Chomsky, Hegemony or Survival: America's Quest for Global Dominance (2003), p. 187
The preceding quote might sound odd in an essay on utilitarianism. If the consequentialist goal is to maximize good outcomes, why are we judging actions on the basis of expectations? That sounds more like an appeal to intention-based morality....
The general object which all laws have, or ought to have, in common, is to augment the total happiness of the community; and therefore, in the first place, to exclude, as far as may be, every thing that tends to subtract from that happiness: in other words, to exclude mischief.
[...] But all punishment is mischief: all punishment in itself is evil. Upon the principle of utility, if it ought at all to be admitted, it ought only to be admitted in as far as it promises to exclude some greater evil.
--Jeremy Bentham, An Introduction to the Principles of Morals and Legislation, Chapter 13 (1789)
What, then, is the greater good that punishment accomplishes?
General prevention ought to be the chief end of punishment, as it is its real justification. If we could consider an offence which has been committed as an isolated fact, the like of which would never recur, punishment would be useless. It would be only adding one evil to another. But when we consider that an unpunished crime leaves the path of crime open not only to the same delinquent, but also to all those who may have the same motives and opportunities for entering upon it, we perceive that the punishment inflicted on the individual becomes a source of security to all. That punishment, which, considered in itself, appeared base and repugnant to all generous sentiments, is elevated to the first rank of benefits, when it is regarded not as an act of wrath or of vengeance against a guilty or unfortunate individual who has given way to mischievous inclinations, but as an indispensable sacrifice to the common safety.
--Jeremy Bentham, The Rationale of Punishment, Book 1, Chapter 3 (1830)
Of course, there are many cases in which punishment does not accomplish the aim of prevention, and in those cases, it is not justified. The point is merely that it can be justified when it works.
Moral judgment serves the same purpose as punishment: changing future behavior. Like punishment, saying that an action is "moral" or "immoral" serves the instrumental goal of causing good future outcomes. It does so by changing the immediate individual utility that people feel toward different options.
Example. Person A is giving medicine to her ill cat. She neglects to read the dosage label and gives her cat far too many pills. As a result, the cat becomes even sicker. While A was only trying to do the right thing, she ended up doing more harm than good. Ought we to express disapproval?
The answer depends, of course, on what disapproval would accomplish. If it would make A significantly more conscientious in the future, then we ought to scold her for making a bad decision. If it would make her more depressed and less able to care for her cat, then we ought to console her instead.
Example. Person B is a burglar climbing up the balconies that stick out from the side of a large apartment building. Person C, unaware that B is hanging off the bottom of his balcony, walks out, steps on B's fingers, and causes B to lose her grip; B falls to her death. Should we say that C has made a wrong decision?
As before, the answer depends on the expected benefit of deterrence. In this case, deterrence is probably not worthwhile. Unless the number of burglars climbing up apartment buildings rises dramatically, the probability that any given person in the future will kill someone by stepping out onto his balcony is tiny--sufficiently so that it's not worth the cost of scolding people.
Objection. How can you not say that C's action was wrong? After all, that action killed B!
Response. Certainly C's action turned out to be unfortunate; in retrospect, we can wish that it hadn't happened. But I am not using the word wrong synonymously with unfortunate. I reserve the former for the instrumental purpose of changing future outcomes. Inasmuch as it's not worthwhile to change future outcomes in this case, I do not use the word.
There seems often to be a notion that people deserve, in some ultimate sense, punishment for their bad actions. I have only a very weak ability to feel such an intuition, and it often puzzles me: What good would it accomplish to increase the amount of pain in the universe by inflicting punishment, other than to deter future behavior? In light of the logical incoherence of ultimate libertarian free will, the notion of just deserts seems even more odd.
Evolutionary psychology gave us these feelings of vengeance to serve as a credible threat of retaliation, even when exacting revenge would provide no reparation for the harms committed. In this sense, irrationality can be rational. That said, now that we have governmental punishment, feelings of revenge no longer serve this purpose.
Rule. At time t_0, person D must make a decision among several options. At later time t_1, we must decide whether to judge his decision at t_0 right or wrong. Labeling a decision "right" will reinforce D's behavior; labeling it "wrong" will motivate D to choose a different and better option next time. If the conditions that prevailed at t_0 remain true at t_1 and into the indefinite future, then we ought to say that D acted correctly when he chose the option of maximum expected aggregated utility and incorrectly otherwise. This is true irrespective of whether D's decision actually did maximize aggregated utility.
Example. D is presented with the following decision. A wicked fox will perform either action E or action F in ten seconds, and D may choose which one the fox carries out.
- Option E: The fox tortures 10 squirrels for one minute each.
- Option F: With probability 0.9, the fox tortures 0 squirrels; with probability 0.1, he tortures 20 squirrels for a minute each.

Clearly, D ought to choose F, because the expected number of squirrels tortured is 2 rather than 10.
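The expected-value comparison behind this judgment is simple arithmetic, which the following sketch makes explicit (the function and variable names are illustrative, not from the original text):

```python
def expected_disutility(outcomes):
    """Expected disutility of an option, given (probability, disutility) pairs.

    Here "disutility" is measured as the number of squirrels tortured
    (each for one minute).
    """
    return sum(p * d for p, d in outcomes)

# Option E: 10 squirrels tortured, with certainty.
option_e = [(1.0, 10)]

# Option F: 0 squirrels with probability 0.9; 20 squirrels with probability 0.1.
option_f = [(0.9, 0), (0.1, 20)]

print(expected_disutility(option_e))  # 10.0
print(expected_disutility(option_f))  # 2.0 -- lower in expectation, so D picks F
```

Even though F occasionally yields the worse outcome (20 squirrels tortured), its expected disutility of 2 is well below E's certain 10, which is why the Rule endorses F.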
D chooses F, but unfortunately, it so happens that this is one of the few times when the fox does torture 20 squirrels. The actual outcome is worse than if D had chosen E. Yet, we still ought to say that D's decision was a good one. Why? Because the purpose of evaluating the wisdom of a decision at all is not to change the past (which is impossible) but to affect the future. If D were to make the decision over again at t_1, he still ought to choose F because F still has the better expected value. Since we want to encourage D to choose F in the future, we say that he was right to choose F at t_0.
The principle in the Rule always applies when conditions remain constant from t_0 to t_1 and into the indefinite future. But if conditions change, the rule may not apply. Suppose that, after D made his decision at t_0, the fox changed option F to consist instead of torturing 20 squirrels with probability 1. Now we ought to condemn D's choice of F--even though it was the best option he could have chosen at t_0--because we don't want him to choose F at t_1.