Arguments that arguments prove too much often prove too much.

It's common for approximately deductive arguments to receive responses of the following forms: "If this were true, something else, which clearly isn't, would also be true; therefore it's false." Or: "This argument proves too much." Or: "This argument can be modified in this way, but notice that its conclusion then becomes contrary to what it was before modification! This suggests that it shouldn't be assigned much weight." Or: "This argument is similar to another argument, and that other argument is susceptible to attacks of a particular kind; therefore something similar is presumably true of the original argument, and it can be assumed to crumble in response to them in an analogous way!" Although these general counterarguments can be extremely powerful when used correctly, they are not necessarily appropriate here. Applying a reductio-ad-absurdum counterargument to a deductive argument is directly analogous to attempting to disprove a claimed theorem by contradiction; it can only succeed insofar as the purported theorem is not, in fact, a theorem. The appropriate approach, therefore, would seem to be to identify the logical gap in the apparent proof of the theorem. The same is true of slightly less formal arguments; this doesn't, however, prevent people from employing these kinds of abstract, non-constructive counterarguments. For example, the LessWrong user Bentham's Bulldog posted a collection of deductive arguments intended to show that shrimp suffering is sufficiently likely to exist, and likely to be sufficiently immoral if it exists, that it would dominate a rational assessment of the moral value of eating infinite numbers of shrimp and what that entails.

The argument involved premises stating that the probability that shrimp are conscious and can suffer, although possibly minute, almost certainly exceeds 0.

Combined with the seemingly reasonable claim (although arguably objectionable within timeless decision theory) that the suffering of multiple beings is equal in magnitude and moral significance to the sum of the corresponding quantities for each of those beings individually[1], this seems to imply that the expected disutility of torturing an infinite number of shrimp would itself be infinite.
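To make the expected-value step explicit, here is a rough formalization under the stated premises. The symbols are mine, not Bentham's Bulldog's: p is the (possibly minute but positive) probability that a shrimp can suffer, s > 0 is the disutility of torturing one suffering shrimp, and n is the number of shrimp, with the additivity premise assumed.

```latex
% Expected disutility of torturing n shrimp, assuming suffering is additive:
\mathbb{E}[\text{disutility}] = n \cdot p \cdot s
% For any fixed p > 0 and s > 0, this diverges as n grows:
\lim_{n \to \infty} n \cdot p \cdot s = \infty
```

Note that the conclusion needs only p > 0, not any particular estimate of p, which is what gives the argument its apparent force.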

Since applying utilitarianism in a way that yields preferences requires (or is at least aided by) the assumption that the value of a human life is finite, Bentham's Bulldog concludes that It's Better To Save Infinite Shrimp From Torture Than To Save One Person.

While this is a simplification of the arguments presented by Bentham's Bulldog, I believe it captures the ways in which people consider them to fail to be compelling.

Jiro commented: 

 It's better to save infinite electrons from torture than to save one person, by this reasoning. There's a certain non-zero probability that electrons can suffer. It's pretty tiny, of course, but if you have an infinite number of electrons, the expected reduction in suffering from saving them, even given this very tiny probability, would exceed the suffering of one person.

 Either the infinite shrimp or infinite electron version is just another example of utilitarianism leading to crazy town.

This is an attempt to show that the original argument, or rather the logic underpinning it, can be applied to different premises to prove too much. I assume this approach was taken precisely because the original argument was logically valid; but that suggests the problem was not with the logic at all, but with the premises. It would therefore probably have been more helpful if comments had focused on why the argument's premises were (if in fact they were) false. This kind of objection could never defeat a mathematical theorem, as it would amount to rejecting the axioms from which the theorem was proven, but this is where the analogy to pure mathematics breaks down: while a theorem can be made an unconditional, yet interesting, statement by incorporating its axioms into it, the value of an informal deductive argument applied to the real world lies in the truth of its conclusion (not merely in the fact that the conclusion follows from the premises), so its premises ought to be questioned in this situation. For the argument to be sound, plausible justification for its premises must exist, and it seems plausible, as mentioned above, that the premise concerning the additivity of suffering is false, making it an appropriate point of contention.

However, Jiro's comment does not do this. Instead it provides an argument which is susceptible to the same kind of criticism (directed at implicit premises): unlike in the case of a living being with a nervous system, it appears no more likely that any particular event which happened to an electron would cause it to experience pleasure than pain. This symmetry removes the reason for taking any particular action, i.e. attempting to avert electron torture, which was the contested implication of the original argument as applied to shrimp.
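This implicit-premise objection can be put in the same expected-value terms as before. Again the symbols are my own illustration: p is the probability that electrons are conscious, and the valence of a hypothetical electron experience is assumed symmetric between pleasure (+s) and pain (-s).

```latex
% With probability p of electron consciousness, and symmetric valence:
\mathbb{E}[\text{utility per electron}]
  = p\left(\tfrac{1}{2}(+s) + \tfrac{1}{2}(-s)\right) = 0
% so for any number n of electrons, the expected utility at stake
% is n \cdot 0 = 0, and no intervention is favored.
```

The shrimp argument escapes this cancellation because its premises concern suffering specifically, not a valence-symmetric experience.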

This comment was, itself, upvoted quite a lot, while the original post was heavily downvoted, in spite of the fact that, as far as I can tell, there is an approximate symmetry between the ways in which the original argument, and the meta-argument that it proves too much, fail to be compelling[2].

This suggests that voters are reasoning in reverse as follows:

Since they agree with one of the arguments' conclusions and disagree with the other, and as the premises of both arguments seem reasonable superficially, the logical structure of one but not the other of these arguments must be valid, even if they cannot find an illogical step in either.

The above leads to a meta-principle concerning the evaluation of non-mathematical, approximately deductive arguments: wherever a deductive argument with valid inference appears to prove too much, question its premises. If there are multiple arguments of the same form with different conclusions, attempt to identify the additional premises which need to be stated in order for each of these arguments to be valid, and then accept whichever argument rests on premises you actually agree with.

In addition, it demonstrates that arguments intended to show that other arguments prove too much are likely to prove too much themselves: they properly target only the deductive part of another argument, but where that part is valid and the truth of the conclusion depends on the premises, there will certainly exist many true[3] and false arguments of the same form with different premises, all of which will be 'proven' false by the reductio-ad-absurdum meta-argument.

 

Note that this post uses the debate concerning shrimp welfare purely as an example of the general phenomenon and is not intended to contribute to it directly. No antagonism is intended towards either Jiro or Bentham's Bulldog.

  1. ^

    I believe that this principle is implicit in most of Bentham's Bulldog's arguments, and in particular in this quote: "No matter how many other buttons you’ve pressed of each kind, it’s better to press the button that spares Graham’s number shrimp than the button that adds an extra millisecond to life!" Bentham's Bulldog goes on to admit that this principle, or at least its implications, is counterintuitive, but reaffirms it.

  2. ^

    What I mean by this is that, just as the likely reason why many disagree with the conclusion of the original argument has nothing to do with its logical structure and something to do with its premises, the main reason why I myself, for example, do not find Jiro's counter meta-argument compelling is that one of the (implicit) premises of the parody argument (the symmetry in valence of the hypothetical conscious experiences of electrons) is also a premise of the meta-argument; since it is objectionable, so is the meta-argument. This was entirely predictable, since Jiro did not attempt to question the premises directly, even though they are the 'high-level generators of disagreement' in Scott Alexander's hierarchy of arguments.

  3. ^

    Consider an argument that It's Better To Save Infinite Humans from torture than To Save One Person.


