On this forum the majority of people practice some form of utilitarianism, myself included. The grounds of utilitarianism have changed many times over its relatively short history, often in adjustment to some criticism. This topic is concerned with the schools of "preference" and "welfare" (Harris) utilitarianism.
For those who don't know:
Preference utilitarianism (PU) weighs utility by how many interests are abided by (for positive values) or gone against (for negative values). For this post, the important point is that this form of utilitarianism is based upon what the moral patient thinks is good for them.
Welfare utilitarianism (WU) weighs utility by the effect an action has on the welfare of an individual, and as Harris gracefully points out, "you can have an opinion on wellbeing, and you can be wrong about it." What he is saying is that utility is not weighed by the opinions of the moral patients.
A situation was posed to me some time ago when I affirmed PU: "There are wives who, often under religious mentalities, think it is best for them to be subservient to their husbands."
This situation highlights the difference between the two schools of thought. By PU it is clear that this is not morally bad, because the moral patients (the wives) do not have an interest in being non-subservient to their husbands. By WU, however, it is clear that there is still a moral objection: the wives are simply wrong about how to maximise their wellbeing.
This is also interesting when applied to other situations, such as "Is it morally wrong to consume food that is known to be bad for your health?"
There are objections to consider regarding how your ill health will affect those close to you, as well as how much good you would be able to do if you were not ill, but I am ignoring these, as is suggested by the title of this topic. Does the moral agent have a moral obligation in which they are the moral patient?
By PU it would appear not, because there is consent and no interest is gone against; or at least the moral agent thinks that other interests in favour of eating the unhealthy food outweigh the interests not abided by. By WU this would not matter: the food ultimately has a greater ill effect on wellbeing than not consuming it, and the moral agent is therefore wronged.
Tell me what you think.
Do we have moral obligations to ourselves?
- bobo0100
- Senior Member
- Posts: 314
- Joined: Thu Jun 12, 2014 10:41 pm
- Diet: Vegan
- Location: Australia, NT
Do we have moral obligations to ourselves?
vegan: to exclude—as far as is practicable—all forms of exploitation of, and cruelty to, animals for any purpose; and by extension, promotes the development and use of animal-free alternatives for the benefit of humans, animals and the environment.
- brimstoneSalad
- neither stone nor salad
- Posts: 10370
- Joined: Wed May 28, 2014 9:20 am
- Diet: Vegan
Re: Do we have moral obligations to ourselves?
I'm not a Utilitarian, but:
Informed consent, not simply consent in the context of ignorance, is what matters.
"Idealized" interests resolve the problems with welfare.
In terms of the more altruistic hybrid form of consequentialism I subscribe to: yes, you do have to consider yourself to an extent.
You need to take care of yourself to do good, and to serve as an example for others; a system which is completely self-sacrificing is not the optimal system, due to poor adoption and retention.
- bobo0100
- Senior Member
- Posts: 314
- Joined: Thu Jun 12, 2014 10:41 pm
- Diet: Vegan
- Location: Australia, NT
Re: Do we have moral obligations to ourselves?
brimstoneSalad wrote: I'm not a Utilitarian, but:
That surprises me. May I ask what the primary difference between your "altruistic hybrid form of consequentialism" and Utilitarianism would be?
brimstoneSalad wrote: "Idealized" interests resolve the problems with welfare.
Not sure if you intended it, but "idealised" interests, based on what is good for the moral patient rather than what the moral patient thinks is good for them, are synonymous with the latter "welfare" utilitarianism of Sam Harris.
brimstoneSalad wrote: In terms of the more altruistic hybrid form of consequentialism I subscribe to: Yes, you do have to consider yourself to an extent. You need to take care of yourself to do good, and serve as an example for others.
Not sure how familiar you are with Immanuel Kant's works. He wrote about humans having indirect duties to animals, in so far as not abiding by such duties may lead to you being violent to fellow humans. In this case, although the beings who are directly affected by your actions are the animals concerned, they are not the moral patients. The moral patients are of course humans. Likewise, in the examples you give, the moral agent (yourself) is not also the moral patient; your moral consideration for yourself is indirect.
These sorts of objections were brought up, and dismissed, as I am looking only for objections wherein the moral agent is also the moral patient:
"There are objections to consider regarding how your ill health will affect those close to you, as well as how much good you would be able to do if you were not ill."
brimstoneSalad wrote: a system which is completely self sacrificing is not the optimal system due to poor adoption and retention.
I find this a weak argument. The nature of morality is no more concerned with "adoption and retention" than the nature of human health is. This is a question that effective communication is better suited to deal with.
- brimstoneSalad
- neither stone nor salad
- Posts: 10370
- Joined: Wed May 28, 2014 9:20 am
- Diet: Vegan
Re: Do we have moral obligations to ourselves?
bobo0100 wrote: That surprises me. May I ask what the primary difference between your "altruistic hybrid form of consequentialism" and Utilitarianism would be?
I think you got it in the next section. In Utilitarianism, you inherently consider your own pleasure, and weigh it against another's.
If I expect to gain 10 units of pleasure for causing you 9 units of pain, I should do that -- indeed, I am morally obligated to do so.
I don't consider that an accurate depiction of morality. Morality is what we do when we are concerned for others -- even at the expense (or especially at the expense) of our own self interests.
Utilitarianism is more of a system of social ethics than applicable to personal morality.
bobo0100 wrote: Not sure if you intended it, but "idealised" interests, based on what is good for the moral patient rather than what the moral patient thinks is good for them, are synonymous with the latter "welfare" utilitarianism of Sam Harris.
It can be in some interpretations, but not quite.
Welfare is poorly defined; a nebulous concept. Sam intentionally leaves it so, because he says what welfare is is a question for science to answer, as medical science answers questions about human health. I do not quite agree.
It is not welfare, but will that I am interested in. Most people would will to be better off in terms that Sam would recognize, but not all of them would. Some people, even with full information, will choose self destructive behavior.
bobo0100 wrote: Not sure how familiar you are with Immanuel Kant's works. [...] Likewise in the examples you give the moral agent (yourself) is not also the moral patient; your moral consideration for yourself is indirect.
Quite. And you are correct; the consideration is as a consequence. I don't agree with Kant, of course, about non-human animals.
Look into Virtue Ethics, if you haven't; consequentialist virtue ethics make similar arguments.
bobo0100 wrote: These sorts of objections were brought up, and dismissed, as I am looking only for objections wherein the moral agent is also the moral patient.
Which is why I had to preface things by saying I'm not a utilitarian.

brimstoneSalad wrote: a system which is completely self sacrificing is not the optimal system due to poor adoption and retention.
bobo0100 wrote: I find this a weak argument. The nature of morality is no more concerned with "adoption and retention" than the nature of human health.
This is a good comparison, and compliance is very relevant to human health. We aren't perfect.
A diet of pure vegetables and some legumes, with no oil or grain, may be the healthiest possible, but if almost nobody follows it, it's not the best recommendation to make.
It's not just about communication; it's about application. We should serve as good practical examples of a living morality that other people can follow. And to the extent that we do, morality is socially contextual, to the degree that its application is necessarily imperfect.
- bobo0100
- Senior Member
- Posts: 314
- Joined: Thu Jun 12, 2014 10:41 pm
- Diet: Vegan
- Location: Australia, NT
Re: Do we have moral obligations to ourselves?
brimstoneSalad wrote: I think you got it in the next section. In Utilitarianism, you inherently consider your own pleasure, and weigh it against another's.
I believe Jeremy Bentham wrote something along the lines of "utilitarianism, when followed, leads either to altruism or egoism."
brimstoneSalad wrote: If I expect to gain 10 units of pleasure for causing you 9 units of pain, I should do that -- indeed, I am morally obligated to do so.
This is a very basic understanding, and does not take into consideration negative utilitarianism. Pleasure is not as important to us as pain is. By negative utilitarianism, the 10 units of pleasure are given less weight than the 9 units of pain, and the act would not be considered good. I largely agree with this. But even then, you could imagine a situation wherein the difference between the pleasure and the suffering is so significant that it would be considered morally permissible. Are you hinting at the "utility monster" objection?
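The weighting difference can be made concrete with a toy calculation (a sketch only; the 2x weight on suffering is an arbitrary illustrative choice, not a figure any negative utilitarian specifies):

```python
# Toy comparison of classical vs. negative utilitarianism.
# Positive numbers are units of pleasure, negative numbers units of pain.
# PAIN_WEIGHT is a hypothetical constant: negative utilitarianism gives
# suffering extra weight, here illustrated as 2x.

PAIN_WEIGHT = 2.0

def classical_utility(outcomes):
    """Sum pleasure and pain symmetrically."""
    return sum(outcomes)

def negative_utility(outcomes):
    """Weight pain more heavily than pleasure."""
    return sum(x if x > 0 else PAIN_WEIGHT * x for x in outcomes)

# 10 units of pleasure to me, 9 units of pain to you:
case = [10, -9]
print(classical_utility(case))  # 1 -> net positive, so permissible
print(negative_utility(case))   # -8.0 -> net negative, so impermissible
```

On the classical tally the act comes out marginally good; once pain is weighted more heavily, the same act comes out bad, which is the disagreement in the quoted exchange.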
brimstoneSalad wrote: ...even at the expense (or especially at the expense) of our own self interests.
Sacrifice has been romanticized historically, but I don't see any reason why it should be. You're doing just as much good if it does not affect you negatively.
bobo0100 wrote: Not sure if you intended it, but "idealised" interests, based on what is good for the moral patient rather than what the moral patient thinks is good for them, are synonymous with the latter "welfare" utilitarianism of Sam Harris.
brimstoneSalad wrote: Welfare is poorly defined; a nebulous concept. Sam intentionally leaves it so, because he says what welfare is is a question for science to answer, as medical science answers questions about human health. I do not quite agree.
You're right about this MAJOR flaw in Harris's theory, but I'm not trying to defend the theory as a whole, rather just how his theory differs from preference utilitarianism. In this, it is perhaps right that "idealised interests" is a better descriptor.
brimstoneSalad wrote: It is not welfare, but will that I am interested in. Most people would will to be better off in terms that Sam would recognize, but not all of them would. Some people, even with full information, will choose self destructive behavior.
This is right on the money, but I don't think the question has been answered: do they wrong themselves when they choose a self-destructive path?
bobo0100 wrote: Not sure how familiar you are with Immanuel Kant's works. [...] Likewise in the examples you give the moral agent (yourself) is not also the moral patient; your moral consideration for yourself is indirect.
brimstoneSalad wrote: Quite. And you are correct; the consideration is as a consequence. I don't agree with Kant, of course, about non-human animals.
So you disagree with utilitarianism because it allows for moral patients and moral agents to be one and the same?
brimstoneSalad wrote: Look into Virtue Ethics, if you haven't; consequentialist virtue ethics make similar arguments.
I see Virtue Ethics as more of a principle of social interactions than a comprehensive moral code.
bobo0100 wrote: I find this a weak argument. The nature of morality is no more concerned with "adoption and retention" than the nature of human health.
brimstoneSalad wrote: This is a good comparison, and compliance is very relevant to human health. We aren't perfect. A diet of pure vegetables and some legumes with no oil or grain may be the healthiest possible, but if almost nobody follows it, it's not the best recommendation to make.
But this misses the point. It may be better to suggest that a lower point on the moral landscape is optimal, as it would increase compliance, but that would not change the fact that there are higher points in the landscape. I'm not interested in compliance; I'm interested in what is actually optimal.
- brimstoneSalad
- neither stone nor salad
- Posts: 10370
- Joined: Wed May 28, 2014 9:20 am
- Diet: Vegan
Re: Do we have moral obligations to ourselves?
bobo0100 wrote: I believe Jeremy Bentham wrote something along the lines of "utilitarianism, when followed, leads either to altruism or egoism."
That's very interesting, and I haven't heard that.
Do you remember the context?
I think that's pretty accurate; Classical Utilitarianism is like balancing personal and others' interests on a razor's edge. Both altruism and egoism are more stable formulations that are easier to follow.
bobo0100 wrote: This is a very basic understanding, and does not take into consideration negative utilitarianism. Pleasure is not as important to us as pain is.
Negative utilitarianism doesn't really make sense, because that's not inherently true. There are clearly amounts of pain we will endure to experience pleasure.
bobo0100 wrote: By negative utilitarianism the 10 units of pleasure are given less weight than the 9 units of pain, and the act would not be considered good.
First, I should clarify that I only regard pleasure and pain as meaningful to the extent sentient organisms want or do not want them.
But here you're missing the important notion of implicit normalization. It makes no sense to compare these without it.
What is a unit of pain? What is a unit of pleasure?
A unit of pain and a unit of pleasure are comparable when they are considered equal by those experiencing them.
I like spicy food. At a certain point of spiciness, the qualia of pain and pleasure balance out such that I could "take it or leave it".
In this case, the pleasure and pain are exactly equal.
You can't say people avoid pain more than seek pleasure, because there's no sense of scale there. You can't compare pleasure and pain except within the context of those experiencing them, and in that context they have a clear exchange rate.
(Assume I'm a moral patient of myself) If I'm eating something spicy that I don't on the whole dislike, you can't come around and say I'm being unethical because the pain is more important than the pleasure, so I shouldn't eat it because you have a better idea of how pain and pleasure compare than I do.
Do you get my point?
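The implicit normalization argument above can be sketched numerically (the function and all its numbers are hypothetical illustrations; the "exchange rate" is whatever the experiencer themselves reports):

```python
# Sketch of implicit normalization: pleasure and pain are only
# comparable within one subject, at that subject's own exchange rate.

def net_experience(pleasure, pain, exchange_rate):
    """Convert pain into the subject's own pleasure-units.

    exchange_rate = how many units of pleasure the subject treats
    as equivalent to one unit of pain (their "take it or leave it"
    point). All values here are hypothetical.
    """
    return pleasure - exchange_rate * pain

# The spicy-food example: at the point where the eater could
# "take it or leave it", pleasure and normalized pain balance out.
print(net_experience(pleasure=6.0, pain=3.0, exchange_rate=2.0))  # 0.0

# An outside observer cannot substitute their own, higher exchange
# rate and declare the same meal a net harm:
print(net_experience(pleasure=6.0, pain=3.0, exchange_rate=3.0))  # -3.0
```

The same pleasure/pain pair comes out neutral or negative depending only on whose exchange rate is used, which is why the comparison makes no sense outside the context of the person experiencing it.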
bobo0100 wrote: But even then, you could imagine a situation wherein the difference between the pleasure and the suffering is so significant that it would be considered morally permissible. Are you hinting at the "utility monster" objection?
Right, but rationally, it doesn't have to be very significant at all. See above.
And yes, this is the Utility Monster objection. We're surrounded by Utility Monsters, or beings that fancy themselves as such.
bobo0100 wrote: Sacrifice has been romanticized historically, but I don't see any reason why it should be. You're doing just as much good if it does not affect you negatively.
When doing good affects you positively, you may still be doing a beneficial thing (even equally so), but there's a good chance you are now operating less as a moral agent and more as an amoral one.
When a tornado destroys a house and causes suffering, is it being an evil moral agent? No, it just does.
Amorality is acting out the default of 'wild' behavior. For a tornado, that's spinning around and breaking stuff randomly. For a wolf, that may be eating grandmothers. For a being without will, it just means acting in any way it acts. For a being of will, it means acting purely in self-interest to the extent it understands it. For a rational being of will, it means acting purely in terms of rational self-interest. These are states of default behavior that don't really care about others.
When you move beyond self interest to act either helpfully or destructively, then you start getting into moral and immoral action.
A very large part of what we often consider morally good action (for example, veganism), isn't a good action at all, but just abstaining from immoral/evil action, and moving closer to a rational ideal of neutrality given our ability to reason.
Vegans aren't necessarily good, just less evil.
It's when you put yourself out and go beyond what strictly materially benefits you that you start dealing in morally good deeds.
bobo0100 wrote: This is right on the money, but I don't think the question has been answered: do they wrong themselves when they choose a self-destructive path?
If you are a true island, and your actions affect no others: no, you cannot wrong yourself, unless you do it by accident due to incomplete information.
You can't harm yourself by following your own will, because that defines your interests.
Sam Harris can't define your interests for you, although you could be mistaken about your actual interests (that is why we have to talk about idealized interests, or informed consent).
However, it's also not always that simple. This is all given the assumption that you're really of one mind, which isn't necessarily true.
bobo0100 wrote: So you disagree with utilitarianism because it allows for moral patients and moral agents to be one and the same?
I think so, although I think there is probably more to it than that. An investigation of the Utility Monster would help shine light on that.
bobo0100 wrote: I see Virtue Ethics as more of a principle of social interactions than a comprehensive moral code.
Right. It's a useful heuristic, though.
bobo0100 wrote: But this misses the point. It may be better to suggest that a lower point on the moral landscape is optimal, as it would increase compliance, but that would not change the fact that there are higher points in the landscape. I'm not interested in compliance; I'm interested in what is actually optimal.
I think you missed the idea of the moral landscape. If compliance were increased, that would be a higher point. The moral landscape is consequential; it looks at the end result of systems.
Harris uses weak metrics to define the goodness he's after there, but the visualization as a landscape with peaks and valleys is very good.