Any books defending utilitarianism on metaphysical grounds?

I'm often told that, unlike other systems such as virtue ethics, utilitarianism doesn't live up to the demands of metaphysics. And I've seen utilitarians who completely disregard metaphysics, which doesn't help matters much. Have any philosophers recently attempted to defend utilitarianism on metaphysical grounds?

👍 7
📅 Aug 10 2021
Why is utilitarianism/consequentialism so common among rationalists?

Is it true that there is a certain kind of person who is attracted to the quasi-scientific, systematizing theory of morality known as (rule) utilitarianism? If so, what kind of person is that, and why do you find it attractive? Or is it more likely that the effective altruism movement and people like Peter Singer have driven its popularity? I am going to start by identifying what I think are the strongest motivations for utilitarian theorizing, and then I am going to explain a series of problems that I don't think have good answers.

Most rationalists I have asked about the subject tell me their interest in utilitarianism largely comes down to a theoretical preference for parsimony--"it boils everything down to one clear principle." Which is strange, seeing as consequentialism is a pluralistic theory that encompasses more than one starting variable. Pleasure and pain are morally relevant--and, for utilitarians, relative impartiality in the distribution of utilities is also thought to matter, which is yet another principle.

Since a utilitarian already acknowledges the intrinsic significance of more than one moral factor, it should not be hard for them to appreciate the appeal of counting further factors as morally fundamental (i.e. by saying that, even when consequences are the same or worse, considerations of honesty, bodily autonomy rights, promises, special relationships, reciprocity after acceptance of benefits, etc. can tip the moral scales in favor of some action). If you doubt that pleasure and pain are distinct experiences and moral granules, consider whether a state of consciousness with zero experience of pleasure is one of great pain, rather than simply one of no pleasure. It seems implausible to think that such a state is impossible, or that it would be agonizing.

The misgiving I have about this is that parsimony (even in science) is only an explanatory virtue if it actually is explanatory; no scientist would prefer a more parsimonious theory that explains away the evidence to a theory that acknowledges it. A really parsimonious theory of everything investigated by science would be to deny that the phenomena even exist in the first place and to say they are just illusions created by a mad scientist stimulating our brains: the earth was created 9 minutes ago with a false appearance of age, and the objects in your everyday life aren't real. This theory posits far fewer entities in order to generate an explanation when compared to the "re

... keep reading on reddit ➑

👍 100
👤 u/SoccerSkilz
📅 Jan 28 2022
All commies have is some books, some strawmen, social constructs like utilitarianism and egalitarianism, and some suicidal twitter girls. They can’t win!
👍 227
👤 u/AncapElijah
📅 Feb 15 2021
Utilitarianism gang
👍 82
👤 u/MarieJoeHanna
📅 Jan 15 2022
I like utilitarianism, but without common sense reasonable ideas can be taken to absurd extremes thus making it ideal for humor
👍 1k
👤 u/air-bonsai
📅 Jan 23 2022
All commies have is a few books, cheap social constructs like utilitarianism and egalitarianism, and hordes of white girl twitter users 😀
👍 11
👤 u/AncapElijah
📅 Feb 15 2021
Debunking misconceptions of utilitarianism

Hi! I recently got into a debate around utilitarianism. One person said that utilitarianism can be used to support racism, using the classic example that if there are 99 racists and one person of color, utilitarianism would argue in favor of oppressing the person of color. My understanding is that this is a misconception. I was wondering if anyone has any articles/resources debunking this? Thanks in advance!

👍 17
👤 u/Brilliant_Cat_803
📅 Jan 26 2022
Looking for feedback on the best books on utilitarianism.

I've recently been trying to help people learn about philosophy by putting together a collection of reading lists on a variety of different topics, including utilitarianism. The list is primarily based on books found in university course syllabi, encyclopedia bibliographies, and recommendations from the r/askphilosophy subreddit, such as the following:

University Course Syllabi:

Bibliographies:

Forum Recommendations:

It's not a foolproof method but I think it works relatively well. I used those links to create the following list of the best books on utilitarianism:

  • Utilitarianism: A Very Short Introduction – Katarzyna de Lazari-Radek & Peter Singer
  • Understanding Utilitarianism – Tim Mulgan
  • The Cambridge Companion to Utilitarianism – Ben Eggleston & Dale E. Miller
  • The Classical Utilitarians: Bentham and Mill – John Troyer
  • An Introduction to the Principles of Morals and Legislation – Jeremy Bentham
  • Utilitarianism – John Stuart Mill
  • The Methods of Ethics – Henry Sidgwick
  • The Point of View of the Universe – Katarzyna de Lazari-Radek & Peter Singer

It's intended to suit a variety of audiences and learning styles so it has a mix of beginner-friendly introductions, more academic overviews, and a few classic readings.

The full post can be found here. Please let me know if you have any feedback on these books/resources.

👍 7
👤 u/noplusnoequalsno
📅 Oct 01 2020
Hear me out. Reality is universally shit. Therefore, denial of reality benefits us all. Therefore, we should deny reality and leap into the sphere of lunatic utilitarianism. Thank you for coming to my ted talk which now you cannot claim to have attended, because you should deny reality.
👍 45
👤 u/Matli_Bussibaer
📅 Jan 24 2022
The organ transplant counterexample shouldn't be considered a defeater of Utilitarianism. youtube.com/watch?v=iY95R…
👍 18
👤 u/Mon0o0
📅 Jan 13 2022
What are some good books about utilitarianism that you know of?

Greetings, fellow furless bipedal big astute monkeys.

I'm looking for books about utilitarianism (at an easy-to-grasp level) that help one adopt this philosophy and worldview into one's lifestyle. Can you guys suggest some?

Thanks in advance!

👍 2
👤 u/-Yandjin-
📅 Sep 16 2020
Kaiba Signed My Utilitarianism Book
👍 71
👤 u/HopOnTheHype
📅 Sep 07 2019
The end result of utilitarianism
👍 47
📅 Jan 04 2022
Utilitarianism Man - Existential Comics existentialcomics.com/com…
👍 60
👤 u/-lousyd
📅 Dec 28 2021
Seven Problems with Utilitarianism: The Morality of the Machine expressiveegg.org/2021/11…
👍 220
👤 u/Jakeoid
📅 Dec 13 2021
There is an actual philosophical problem with hypotheticals (which lets utilitarianism seem plausible)

*Edit for clarification: My arguments do not concern the debate on utilitarianism v. deontology. They concern the function of hypotheticals (more precisely: moral dilemmas) in assessing the moral framework of a person and the biased nature of hypotheticals towards utilitarianism.*

Vaush's debate with NonCompete included a discussion on the (philosophical) status and worth of hypotheticals. NonCompete seemed to be out of his depth, claiming that hypotheticals 'are Idealism' (?) and therefore (?) an illegitimate tool for debating or something like that. Obviously this kind of reasoning does not even count as an argument.

I do not want to defend NonCompete's confused stance. Although Vaush is more knowledgeable on the topic and correctly critiques NonCompete, he nonetheless takes a stance on the issue of hypotheticals that in turn is worthy of a critique.

In the debate, Vaush presented NonCompete with a hypothetical involving an alien invasion. I.e. he constructed a classical moral dilemma (which I will equate with hypotheticals from now on): The invading aliens give one person a morally charged choice: either do nothing and doom humanity as a whole, or murder one child to save the entire human species. (Or something similar.) The extreme nature of the choice at hand lets one immediately lean towards the less disastrous option of murdering the child in order to save humanity from extinction.

The philosophical problem with these kinds of moral dilemmas lies in the fact that they construct a situation from which there is no escape: one has exactly two options (this is analytically true and makes it a dilemma), and the criteria one applies to determine which of the two options is preferable are supposed to give insight into the moral framework of the person who is (imagining) making the given choice (which makes it a moral dilemma). The most famous example of this kind of dilemma is the 'trolley problem'. My contention is that the dilemmatic nature of the choice actually makes a real judgment about what kind of moral framework the choosing person applies impossible. This is the case because a dilemma necessarily excludes one option: withdrawing from the situation, thereby evading the dilemma and reconstituting one's freedom of action, which was denied by the dilemma. This does not mean that dilemmas are morally neutral: their set-up itself can be immoral. In Vaush's dilemma, the aliens are the immoral ones, not the decision one makes based on the options. T

... keep reading on reddit ➑

👍 4
👤 u/Allerernsteste
📅 Jan 11 2022
Regarding meta-ethics, shouldn't utilitarianism be subjective instead?

As I understand it, in their thesis they equate human

well-being = good

suffering = bad

But aren't these two things mind-dependent?

So how is utilitarianism an objective thesis in meta-ethics?

👍 4
📅 Jan 27 2022
Same-sex relationships and Utilitarianism

Hi all, I am writing a research paper in support of homosexuality or same-sex relationships through a Utilitarian perspective. My argument is that same-sex relationships can provide the same utility as opposite-sex relationships. Also, trying to force one to conform to one sexuality or identity increases pain instead of utility. I have been researching to see what a common objection to this view would be and how I would respond to it as a Utilitarian, but I've had no luck finding any. If anyone has some suggestions, please feel free to share :)

Thank you!

👍 52
👤 u/matfalw
📅 Dec 14 2021
Utilitarianism as a justification for belief? (Personal Question)

Recently I've been interrogating some of my own beliefs through some of the techniques of SE, and I've come across some questions that I would love some input from the community on.

Primarily, I'd like to explore my god-belief. To give some background context, I was raised Christian and for a short time early in my life believed in the Christian god. It was fairly early on, if I remember correctly, that I began to question the religion and was unsatisfied with the answers. After leaving the Christian school I was made to attend, I came out of the atheism closet and probably would have been considered an anti-theist for a while after that (this is all pretty standard ex-Christian stuff, I think).

A few years ago, though, I began to revisit the idea of a higher power. I'm still very skeptical of religion, so this was more of a personal exploration of what it might feel like to believe in a "responsive universe", to put it one way. Throughout that process, I experimented with the outcomes of different beliefs--i.e. if I believe in a kind, loving higher power, what kinds of decisions do I make and what are the outcomes of those decisions? How does this belief affect my self-talk and other aspects of my psychology?

I'm a pretty strict utilitarian, so when these experiments yielded positive results (greater general happiness, positive decisions, more fluent creativity, etc.), that seemed like a good enough reason to choose to "believe" in said higher power.

The conflict I'm having, I think, is with my desire to believe things that are true. I have a high degree of personal confidence in this higher power, but only because I've chosen to. I don't think I could ever have any external confidence in this belief, as in thinking that someone else should think it's true, or that it's true in any demonstrable way. But if this belief is helpful and positive in my own life, and there isn't any evidence to directly contradict it, is that a good enough reason to believe it?

So, I guess I'm curious if anyone else uses utilitarianism as a reason justifying belief? What is the ultimate purpose in striving to believe things that are true, and does choosing certain beliefs that would otherwise be uncertain inherently contradict that?

👍 9
📅 Jan 13 2022
Diminishing marginal utility and Utilitarianism

I thought of a funny thought experiment based on how utility is modeled by many economists.

Sorry if this is not clear, but English is not my native language and this is a bit technical, so I'll provide a link detailing the utility concept: https://courses.lumenlearning.com/boundless-economics/chapter/the-demand-curve-and-utility/

Utility is a 'score' that an economic actor gives a commodity; the actor then chooses to trade commodities so as to maximise his utility gain (the agents are considered to be rational).

The goal of a utilitarian economist should be to maximise the 'total utility' (the sum of the utilities of all commodities owned by each actor).

Now utility is generally thought of as diminishing: having 1 house is great, having 2 houses is cool, having 3 is meh, having 4 is tiring, etc.

Generally we would represent utility with a bell curve or a logistic function; let's say in our example we use

utility(x) = 5/(1 + e^(-x)) - 5/2

OK, now for the funny part: let's say we have two actors, A with 100X and B with 0X. The total utility would be f(100) + f(0) ~= 2.5. If we redistribute X and transfer 10X from A to B, the total utility would be f(90) + f(10) ~= 5.0.

So this very simple example seems to show that redistribution might be the way to go, according to how the economy is modeled by economists, if one hopes to maximise total utility.
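
For concreteness, here's a minimal Python sketch of the numbers above (the utility function is the one from this post; the printed figures are approximate):

```python
import math

# Logistic utility function from the post: utility(x) = 5/(1 + e^(-x)) - 5/2
def utility(x):
    return 5.0 / (1.0 + math.exp(-x)) - 2.5

# Actor A holds 100 units of X, actor B holds 0.
before = utility(100) + utility(0)   # ~2.5
# Transfer 10 units from A to B.
after = utility(90) + utility(10)    # ~5.0

print(f"total utility before transfer: {before:.3f}")
print(f"total utility after transfer:  {after:.3f}")
```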

Happy to read about it if anyone knows counterpoints to this, as I guess this point has been made before.

👍 29
👤 u/PropagandaLama
📅 Nov 22 2021
Basic justification for utilitarianism?

Hey, sorry if this is a question you get a lot (if so, I’m happy to be directed toward whatever resources), but there’s one aspect of utilitarianism that’s always bugged me. Basically, I see two parts to the argument for it. The first is that more happiness = more good. The second is that more good = what one should aim for. To me this seems to be making too many leaps.

“Good” is an ambiguous term. For that reason you are free to define it how you want - but you can’t give it two non-synonymous definitions. If you define “good” as “happiness”, then you can’t define “good” as “the object of correct action” (and vice versa). You have to actually make an argument for why happiness should be the object of what you “should” do. So, what is your argument for this last part? What argument can you make for utilitarianism without using the term “good”? (This is not a rhetorical question btw. I’m genuinely interested)

(Or, to phrase things another way- I often see utilitarians just assume that we should aim for happiness, and then sort of argue from there. It might seem common sense, but I guess that’s just not enough for me)

👍 6
👤 u/commiekid68
📅 Jan 01 2022
Why does Alasdair MacIntyre reject utilitarianism?

In After Virtue, MacIntyre presents a picture of modern moral theories, like branches of deontological and consequentialist ethics, failing due to their rejection of teleology, which renders them unintelligible and self-referential, while at the same time making them powerful instruments of social manipulation in the current "culture of emotivism".

While I somewhat get that deontology succumbs to this critique, as there are no goals to achieve, only actions being right or wrong in themselves, I don't get why consequentialist theories like utilitarianism lack telos. If one makes use of MacIntyre's telos of man to pass from a state of "untutored human nature" to "man-as-he-could-be-if-he-realized-his-telos" by embracing several virtues, then why can't one realize one's telos by Bentham's and Mill's "greatest happiness for the greatest number"? Why is man's telos not maximizing happiness in such a way?

👍 7
👤 u/Wehrsteiner
📅 Jan 08 2022
Utilitarianism and Self Sacrifice? What else is there to utilitarianism?

We were discussing utilitarianism in my university literature course, and utopias in literature were brought up, which then led to a discussion of utilitarianism.

I have a basic understanding of utilitarianism and have just started to look into the direct opposites of it so that I can get a better grasp on these concepts. We were talking about the classic trolley problem and the sort of modified version where instead of pulling a lever, you actually have to push someone onto the tracks (I think?). I started thinking about how if someone were to pull the lever or push the person into the path of the train, they would be saving more lives at the sacrifice of one. However, this also comes at the sacrifice of one's own morals (that is, if you believe that murder is wrong and so forth). Are most ideas of the practice of utilitarianism inevitably self-sacrificing if applied to one's personal life? Or maybe just in hypotheticals? If so, would this mean that utilitarianism is the more "selfless" philosophical theory?

But I've met a lot of people who are very sure of their stance on utilitarianism, saying that it is the more practical and selfless perspective, bolstering their moral standing over a person who possibly would believe that the ends do not always justify the means. I'm still kind of confused about the idea of the inevitable existence of a "Loser" in order to provide satisfaction/needs met/happiness of another. I'm just thinking about this in hypotheticals, not in the applications to our current world, because I want to better understand the basic definitions and concepts of utilitarianism and its oppositions. Thanks!

👍 4
👤 u/Slav_Defense
📅 Jan 25 2022
What is a good book on negative utilitarianism?

I couldn't really find anything on google books or library genesis. The wikipedia article for it was good though

👍 7
📅 Oct 05 2019
First Kabalite painted! I just read the renegade book and decided to go for a more utilitarian, metallic look for my custom kabal that has their lair hidden within an iron thorn sub-realm. What do you think? reddit.com/gallery/qepkb3
👍 111
👤 u/ReadingSame
📅 Oct 24 2021
How do you feel about Moral idealism vs utilitarianism?
👍 2
📅 Jan 27 2022
Utilitarianism by John Stuart Mill is a milestone in ethics and philosophy. John Stuart Mill’s manifesto presents a system of thought and action that declares that the morally right action is the one that results in the most good, the most happiness, or the least suffering. holybooks.com/utilitarian…
👍 222
👤 u/hanslicht
📅 Dec 16 2021
Plants and Preference Utilitarianism

Forgive me if this has been asked before, I'm new to both philosophy and Reddit!

Can a plant be said to have an interest or preference in living?

"The capacity for suffering and enjoyment is... not only necessary, but also sufficient for us to say that a being has interests." - Animal Liberation (Singer)

Is a plant growing towards a light source evidence of a preference? And is adapting clever ways to spread seeds demonstrating an interest in reproduction? Or is this just bio-programming (for lack of the correct term).

I realize that this preference may not be worth moral consideration because the plant arguably cannot suffer. However, after reading the above quote, I was wondering whether Singer was correct or if he needs to change "interests" to "interests worthy of consideration".

Any thoughts are very welcome - as I said, I'm very new to philosophy and am hoping that someone can clarify or affirm my thoughts on this.

👍 3
👤 u/Crumble_Pies
📅 Jan 19 2022
What's the difference between communitarianism and utilitarianism?

I know communitarianism focuses on the common good, whereas utilitarianism focuses on maximizing the well-being of individuals overall, but don't those two things usually coincide? I can't think of a real-world example in which we, as citizens, sacrifice something for our community that doesn't also maximize the well-being of our fellow citizens. Please help!

👍 3
👤 u/agreatsock
📅 Jan 17 2022
Are there any rebuttals to the following critiques of utilitarianism by Elizabeth Anscombe, Marx and Nick Land?

From "Marxism and Ethics" by Paul Blackledge:

>"By focusing on the ends of actions rather than the means through which these ends are brought about, the broader family of consequentialist morality of which utilitarianism forms a part, is necessarily, in the words of Elizabeth Anscombe, β€œa shallow philosophy,” because for them β€œthe question β€˜What is it right to do in such-and-such circumstances?’ is a stupid one to raise” (Anscombe 1981, 36). The idea that our unmediated desires can act as a basis for the good life is fundamentally problematic. For, desires both change over time and exist as pluralities which do not necessarily pull in the same direction. We therefore must choose between them, and on these types of choices consequentialism has very little of interest to say. Indeed, by its focus on the ends of action, utilitarianism downplays just that aspect of our practice which is centrally important to moral theory: the means through which we aim to realize our ends. This lacuna goes a long way to explaining how, despite its radical roots, this approach has been used to justify all manner of inhuman acts in the name of their future consequences (MacIntyre 1964), and by conflating happiness with increased wealth it is blind to the way that modern societies generate so much unhappiness (Ferguson 2007; cf Frank 1999, Ch. 10; and Wilkinson 2005).”

Nick Land's essay "Utilitarianism is Useless":

>Utilitarianism is completely useless as a tool of public policy, Scott Alexander discovers (he doesn’t put it quite like that). In his own words: “I am forced to acknowledge that happiness research remains a very strange field whose conclusions make no sense to me and which tempt me to crazy beliefs and actions if I take them seriously.”
>
>Why should that surprise us?
>
>We’re all grown up (Darwinians) here. Pleasure-pain variation is an evolved behavioral guidance system. Given options, at the level of the individual organism, it prompts certain courses and dissuades from others. The equilibrium setting, corresponding to optimal functionality, has to be set close to neutral. How could a long-term ‘happiness trend’ under such (minimally realistic) conditions make any sense whatsoever?
>
>Anything remotely like chronic happiness, which does not have to be earned, always in the short-term, by behavior selected — to some level of abstraction — across deep history for its adaptiveness, is not only useless, but positively

... keep reading on reddit ➑

👍 27
👤 u/pirateprentice27
📅 Jul 28 2021
Utilitarianism, chaos, and greedy algorithms

An argument/ thought that I would be happy to hear some feedback about.

  • By utilitarianism, we need to strive to commit actions (a) that maximize general utility
  • The function that governs U(a) is chaotic: small changes in things that we do can have huge implications because there is a causal link very far into the future (a seemingly insignificant action done today can cause huge aggregate harm/benefit in surprising ways if calculated very far into the future).
  • A silly example: you decide to donate to a charity that helps a woman escape poverty, go to school, and have kids at a later age; one of her descendants becomes a horrible dictator in the 25th century who enslaves and kills 20 billion people. If you didn't donate, she would have had kids earlier and this dictator wouldn't have come into existence.
  • It's impossible to predict this causal chain very far; all we can do is try to predict the short-term implications of our actions using some heuristics and assume that far-future implications are symmetrical - so we still gain utility in the short term, because that's the best we can do.
  • For the same silly example: we assume donating money to effective charities is good on average for the short term, but long-term implications can be good or bad in symmetrical ways, thus canceling each other out in expected utility.
  • In computer science, this type of short-term maximizing algorithm is called a 'greedy algorithm', and greedy algorithms usually produce very suboptimal results compared to more sophisticated, longer-horizon algorithms (I'm not sure about their performance in chaotic systems); see the toy sketch after this list.
  • Another issue is that sometimes short-term benevolent thinking can be worse than doing nothing, e.g. things that have non-trivial second-order negative implications, like rent control, which make the situation worse on average while seeming benevolent from a short-term perspective.
  • My question is: under a chaotic utility function, are the greedy utilitarian algorithms we use in practice really improving things significantly? Could it be that they actually make things worse compared to other, more 'intuitive' moral behaviors? I'm really not sure what the answer is here and would be interested to hear opinions either way.
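
To illustrate the greedy-vs-lookahead point, here is a toy Python sketch (the example actions and payoffs are made up for illustration, not taken from the post): a greedy choice by immediate utility differs from a choice by total utility once a delayed second-order effect is included.

```python
# Two hypothetical actions, each with an immediate payoff and a delayed
# second-order effect (numbers invented purely for illustration).
actions = {
    "flashy_intervention": {"now": 5, "later": -10},  # looks great short-term, backfires later
    "modest_intervention": {"now": 1, "later": 2},    # unimpressive now, better overall
}

# Greedy: choose by immediate utility only.
greedy = max(actions, key=lambda a: actions[a]["now"])

# Lookahead: choose by total utility (immediate + delayed).
lookahead = max(actions, key=lambda a: actions[a]["now"] + actions[a]["later"])

print("greedy picks:   ", greedy)      # flashy_intervention (total utility -5)
print("lookahead picks:", lookahead)   # modest_intervention (total utility +3)
```
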
👍 7
👤 u/EntropyMaximizer
📅 Jan 04 2022
What is the Marxist view on utilitarianism?

Especially with regard to ancient Stoics such as Plato, who argues for a rudimentary form of utilitarianism in some of his works such as "The Republic", although I'd argue that Marxist dialectics is a closer comparison to his view of applying paternalistic policies in order to guarantee the prosperity of the community, even advocating that the exchange and sale of private property is an unjust form of living?

👍 8
👤 u/staloidona
📅 Jan 19 2022
Utilitarianism and multiple worlds

If you are a utilitarian and believe in the existence of many worlds where all possible events play out, why should you care about what happens in the world you happen to live in? That is, in reacting to the event that X happens in this world, why should you care, given that not-X happened in another world and the experiences of people in that world are equally important?

One response could be that the experiences of people in the other worlds are not equally important - this seems plausible in modal realism (and I am curious what modal realists think about this), but not plausible for more physically based theories.

👍 7
👤 u/halveamind
📅 Dec 30 2021
How does deontology work and why do deontologists support it rather than utilitarianism?

It may be because I’m not too well-read on ethics, but utilitarianism sounds vastly superior to deontology. Of course, deontology is one of the most widely taught schools of ethics, so it’s definitely because I’m missing something. To make this easier on me, how would a deontologist respond to something popular like the trolley problem, and why?

(Note: I don’t exactly know the history of the trolley problem, so if it was made intentionally to oppose deontology, just tell me that and use a different example)

👍 4
👤 u/lone_ichabod
📅 Jan 06 2022
Objecting to Utilitarianism Over Its Moral Demands is a Self-Defeating Argument

The claim that utilitarianism makes too much of a moral demand is self-defeating. It's not a true objection to utilitarianism, but more of a failure to recognize one's own limitations.

I.e. the idea that one should make massive sacrifices for the greater good assumes that there are massive sacrifices that are within one's capacity to make, on a day to day basis.

Moments of "diving on the grenade" are highly unusual. It's also not pragmatic (and therefore, not utilitarian); one could spend so much time debating over whether or not to be vegan, whether or not to buy or not buy these shoes over potential slave labor. OR you could devote your life to finding the ways in which you could maximize good, making small choices on things like animal suffering like eating lesser to the benefit of your own health without altering your lifestyle in a way that might be personally unsustainable.

And, instead you perhaps start a business or organization related to something that you're good at, or build on a skill that you're good at, enriching your own life and improving the lives of others at the same time.

I don't believe that utilitarianism is a perfect moral system, but I do believe that the yet-unrealized perfect moral system must have maximal utility across time.

And utility is probably the best absolute test for a moral system, but its difficulty comes from measurement.

👍 12
👤 u/OssOfSoyce
📅 Dec 26 2021
According to utilitarianism, would it be ethical to destroy an entire civilization if it were about to simulate a single being experiencing eternal torture?

Let's say a civilization has the technological means of altering the perception of time or simulating beings, and you have incontrovertible proof that they were about to use this capacity to trap a sentient being into an eternal hell of the highest pain imaginable. In this case, I mean eternal in a literal sense: the subjective experience of the torture would never stop or diminish in its intensity from the perspective of the being. Essentially something more or less identical to the −∞ found in Pascal's Wager.

If the only method to prevent this were to eliminate every single person or trace of that civilization, would it be ethically permissible to do so under utilitarianism?

What if the proof were not incontrovertible? Would the mere nonzero probability of one being experiencing eternal torture also permit the destruction of the civilization? How low would the probability have to be for this to no longer be permissible given that it is the worst utilitarian outcome possible or imaginable for the single being involved?

👍 16
👤 u/__-_____-_-__---_
📅 Dec 02 2021
Why is Utilitarianism not a "metaethical" theory?

According to Wikipedia, meta-ethics is:

In metaphilosophy and ethics, meta-ethics is the study of the nature, scope, and meaning of moral judgment. It is one of the three branches of ethics generally studied by philosophers, the others being normative ethics (questions of how one ought to be and act) and applied ethics (practical questions of right behavior in given, usually contentious, situations).

While normative ethics addresses such questions as "What should I do?", evaluating specific practices and principles of action, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the assumptions underlying normative theories. Another distinction often made is that normative ethics involves first-order or substantive questions; meta-ethics involves second-order or formal questions.

"What is goodness?" and "How can we tell what is good from what is bad?

Whether you consider them good or bad answers, utilitarianism seems to answer these questions: what is good is what increases happiness, and what is bad is what decreases happiness.

I know I'm being simplistic here, but my point is that from this starting point you can create different, more complex frameworks, so I don't understand why it's not a metaethical theory.

👍 8
👤 u/4ndual
📅 Nov 24 2021
God and utilitarianism

I asked a Christian about God's morality and why he does things that are heinous but are good, and his response was this: "God's commands could completely overlap with utilitarianism. God decides what is good, but that means God could decide that something we consider to be wrong, because of utilitarianism etc., is good. That means that God could say torturing babies is good. People think torturing babies is wrong because of utilitarianism, etc., so most people probably wouldn't accept divine command as what makes something good. I don't see anyone, when faced with this situation, accepting that if God commanded torturing babies to be good, that it would actually be good. Not because it violates our intuitions, but because it violates utilitarianism"... and I asked him "so being homosexual, and wearing mixed fabrics and picking up sticks on a Saturday is basically immoral to God's divine law, right?" And he said "yes"..... then I said "then your god is a snowflake edgy deity"

👍 6
👤 u/Fit_Channel4913
📅 Sep 14 2021
Is Buddhism utilitarian in any way? How much do the two ideologies overlap? What are your views on utilitarianism as Buddhists?

Thanks in advance for any answers :)

👍 2
👤 u/bigbrothero
📅 Nov 24 2021
Is there such a thing as "Communist" Utilitarianism? What I mean is a type of Utilitarianism that has the goal of both increasing utility and increasing the equal distribution of this utility?

In my opinion, one problem that Utilitarianism might have is that some beings might suffer so others can feel pleasure, if that maximizes the utility (utility = pleasure - suffering). This might be a problem because it might seem unfair and because it fails to respect the separateness of each being. Example: someone who is a slave won't be happy even if that maximizes the utility and makes other citizens happy.

So, does exist a type of Utilitarianism that is like the following or similar?:

The goal of this Utilitarianism is both to maximize the utility and to make the distribution of that utility as equal as possible.

Example. Let's imagine 2 worlds:

World A:

  • Human 1 (Pleasure: 7, Suffering: 5)
  • Cow (Pleasure: 1, Suffering: 2)
  • Sentient robot (Pleasure: 3 Suffering: 1)
  • Human 2 (Pleasure: 1, Suffering: 8)

Utility = (7 + 1 + 3 + 1) - (5 + 2 + 1 + 8) = 12 - 16 = -4

Distribution of utility: very bad

World B:

  • Human 1 (Pleasure: 3, Suffering: 4)
  • Cow (Pleasure: 3, Suffering: 4)
  • Sentient robot (Pleasure: 3, Suffering: 4)
  • Human 2 (Pleasure: 3, Suffering: 4)

Utility = (3 + 3 + 3 + 3) - (4 + 4 + 4 + 4) = 12 - 16 = -4

Distribution of utility: excellent

So in both worlds, the utility is the same. But in the second world, the utility is distributed much much better. My moral intuition tells me that it's preferable that World B exists rather than World A. But I don't know for sure.

Important things to say:

  • It probably won't ever be possible to have an exactly equal distribution of utility, but the closer it gets, the better.
  • I'm aware that it could be difficult to know how to distribute utility, since this is not like money. But maybe utility could be distributed through the way we treat each sentient being and the goods and services that each sentient being receives. Example 1: if there is a pigeon bleeding and in agony in the street, it should be more important to give it medical attention than to treat some person with a headache. Example 2: if some rich person has a lot of tasty food, they should give some of their tasty food to homeless people.
  • We could calculate the distribution of utility by using maths like measures of dispersion (see the rough sketch after this list).
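
A rough Python sketch of that last bullet, assuming we take the per-being net utilities from the two example worlds above and use the population standard deviation as one possible measure of dispersion (the choice of measure is just an illustration):

```python
import statistics

# Net utility (pleasure minus suffering) per sentient being in each world.
world_a = {"human_1": 7 - 5, "cow": 1 - 2, "robot": 3 - 1, "human_2": 1 - 8}
world_b = {"human_1": 3 - 4, "cow": 3 - 4, "robot": 3 - 4, "human_2": 3 - 4}

def summarize(world):
    values = list(world.values())
    total = sum(values)                 # classic total utility
    spread = statistics.pstdev(values)  # dispersion: 0 means perfectly equal
    return total, spread

print("World A:", summarize(world_a))  # (-4, ~3.67): same total, unequal distribution
print("World B:", summarize(world_b))  # (-4, 0.0):  same total, perfectly equal
```
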
👍 5
👤 u/meewwekcw
📅 Jan 03 2022
Is the morality of TST Utilitarianism?

From SEP

>Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good.

👍 7
📅 Jan 03 2022
I don't understand how someone else's happiness compensates for my pain in utilitarianism

On utilitarianism, there should be nothing wrong here if 10 people rape a woman and experience great pleasure, and then the woman is killed and no one ever knows about it. But I don't see how their pleasure can compensate for my pain. We are different people. How can their happiness affect my pain?

👍 104
👤 u/moses1392
📅 Nov 21 2021
