How do we deny someone a good?

David Chalmers coined the term “hard problem” to describe the problem of consciousness: science has had great success solving the “easy” problems concerning the functional roles of mental states, but it has had little (and perhaps no) success with the deeper questions about consciousness itself.  Although the existence of a consciousness problem beyond the functional (and scientifically testable) issues of mind is extremely controversial, I think similar language might be useful when talking about ethics.

We can imagine a case in which someone desires some good (something they want to do, something that gives them pleasure); perhaps they want to watch animals fight each other to the death.  Society, however, might prohibit this: anyone who stages such a fight will be put in jail.  That would certainly leave the person worse off as far as experiencing goods goes.  Simply put, society has decided to deprive this person of some good.

There could be an “easy” solution to this problem.  If the person reasons morally within much the same moral framework as the rest of society, then we can show them that setting up animal fights is inconsistent with other moral views they hold.  Notice that if this works, we are guaranteed a solution satisfactory to both society and the thwarted animal killer: it is a matter of logic whether a given moral belief follows from others.  Of course, making our moral outlook internally consistent is a lengthy process, but at least we know the process by which it can be achieved.

However, there is another possibility that seems to be “hard”.  Imagine that the person who wants the spectacle of animal slaughter simply says that they consider animals to have no moral rights whatsoever.  They may hold metaphysical (or religious) views under which they may do anything they want to animals, just as we do with machines.  Like Descartes, this person might think that animals are simply automatons; under their view, protecting animals from needless pain and death is as pointless as protecting robots from the same fate (and we do watch robots destroy each other without feeling guilty).

This is a “hard” problem because we cannot reason this person out of the wish to watch animals suffer.  Their view is, at some basic level, incompatible with our own, and there is no reason for anyone to choose one view over the other except by appeal to their pre-existing moral views.  We can try to find some moral common ground with the person so that they will come around to our point of view, but this attempt may very well fail.  So, we must simply deny the person the pleasure they seek.  What gives us the right to do this?  At this point the process completely falls apart; we must reference yet more moral beliefs of ours (perhaps about our right to protect creatures from harm) in order to justify any action we take.  In these cases there can be no further moral argumentation, only conflict.  That makes the problem quite “hard” indeed.
