I'm often told that, unlike other systems such as virtue ethics, utilitarianism doesn't live up to the demands of metaphysics. And I've seen utilitarians who completely disregard metaphysics, which doesn't help matters much. Have any philosophers recently attempted to defend utilitarianism on metaphysical grounds?
Is it true that there is a certain kind of person who is attracted to the quasi-scientific, systemizing theory of morality known as (rule) utilitarianism? If so, what kind of person is that, and why do they find it attractive? Or is it more likely that the effective altruism movement and people like Peter Singer have driven its popularity? I am going to start by identifying what I think are the strongest motivations for utilitarian theorizing, and then I am going to explain a series of problems that I don't think have good answers.
Most rationalists I have asked about the subject tell me their interest in utilitarianism largely comes down to their theoretical preference for parsimony--"it boils everything down to one clear principle." Which is strange, seeing as consequentialism is a pluralistic theory that encompasses more than one starting variable. Pleasure and pain are each morally relevant--and, for utilitarians, relative impartiality in the distribution of utilities is also thought to matter, which is yet another principle.
As someone who already acknowledges the intrinsic significance of more than one moral factor, it should not be hard for a utilitarian to appreciate the appeal of counting further factors as morally fundamental (i.e. by saying that, even when consequences are the same or worse, considerations of honesty, bodily autonomy rights, promises, special relationships, reciprocity after acceptance of benefits, etc. can tip the moral scales in favor of some action). If you doubt that pleasure and pain are distinct experiences and distinct moral factors, consider whether a state of consciousness with zero experience of pleasure must be one of great pain, rather than simply one of no pleasure. It seems implausible to think that such a state is impossible, or that it would be agonizing.
The misgiving I have about this is that parsimony (even in science) is only an explanatory virtue if the parsimonious theory actually explains the evidence; no scientist would prefer a more parsimonious theory that explains away the evidence to a theory that acknowledges it. A really parsimonious theory of everything investigated by science would be to deny that the phenomena even exist in the first place and to say they are just illusions created by a mad scientist stimulating our brains: the earth was created 9 minutes ago with a false appearance of age, and the objects in your everyday life aren't real. This theory posits far fewer entities in order to generate an explanation when compared to the "re
Hi! I recently got into a debate around utilitarianism. One person said that utilitarianism can be used to support racism, using the classic example that if there are 99 racists and one person of color, utilitarianism would argue in favor of oppressing the person of color. My understanding is that this is a misconception. I was wondering if anyone has any articles/resources debunking this? Thanks in advance!
I've recently been trying to help people learn about philosophy by putting together a collection of reading lists on a variety of different topics, including utilitarianism. The list is primarily based on books found in university course syllabi, encyclopedia bibliographies, and recommendations from the r/askphilosophy subreddit, such as the following:
University Course Syllabi:
Bibliographies:
Forum Recommendations:
It's not a foolproof method but I think it works relatively well. I used those links to create the following list of the best books on utilitarianism:
It's intended to suit a variety of audiences and learning styles so it has a mix of beginner-friendly introductions, more academic overviews, and a few classic readings.
The full post can be found here. Please let me know if you have any feedback on these books/resources.
Greetings, fellow furless bipedal big astute monkeys.
I'm looking for books about utilitarianism (on an easy-to-grasp level) that help adopt this philosophy and worldview into one's lifestyle. Can you suggest some?
Thanks in advance!
*Edit for clarification: My arguments do not concern the debate on utilitarianism v. deontology. They concern the function of hypotheticals (more precisely: moral dilemmas) in assessing the moral framework of a person and the biased nature of hypotheticals towards utilitarianism.*
Vaush's debate with NonCompete included a discussion on the (philosophical) status and worth of hypotheticals. NonCompete seemed to be out of his depth, claiming that hypotheticals 'are Idealism' (?) and therefore (?) an illegitimate tool for debating or something like that. Obviously this kind of reasoning does not even count as an argument.
I do not want to defend NonCompete's confused stance. Although Vaush is more knowledgeable on the topic and correctly critiques NonCompete, he nonetheless takes a stance on the issue of hypotheticals that in turn is worthy of a critique.
In the debate, Vaush presented NonCompete with a hypothetical involving an Alien invasion. I.e. he constructed a classical moral dilemma (which I will equate with hypotheticals from now on): The invading Aliens give one person a morally charged choice: either do nothing and doom humanity as a whole, or murder one child to save the entire human species. (Or something similar.) The extreme nature of the choice at hand lets one immediately lean towards the less disastrous option of murdering the child in order to save humanity from extinction.
The philosophical problem with these kinds of moral dilemmas lies in the fact that they construct a situation from which there is no escape: one has exactly two options (this is analytically true and makes it a dilemma), and the criteria one applies to determine which of the two options is preferable are supposed to give insight into the moral framework of the person who is (imagining) making the given choice (which makes it a moral dilemma). The most famous example of this kind of dilemma is the 'trolley problem'. My contention is that the dilemmatic nature of the choice actually renders a real judgment on what kind of moral framework the choosing person applies impossible. This is the case because a dilemma necessarily excludes one option: withdrawing from the situation, thereby evading the dilemma and reconstituting one's freedom of action, which was denied by the dilemma. This does not mean that dilemmas are morally neutral: their set-up itself can be immoral. In Vaush's dilemma, the Aliens are the immoral ones, not the decision one makes based on the options.
As I know it, in their thesis they equate human
wellbeing = good
suffering = bad
But aren't these two things mind-dependent?
So how is utilitarianism an objective thesis in metaethics?
Hi all, I am writing a research paper in support of homosexuality or same-sex relationships through a Utilitarian perspective. My argument is that same-sex relationships can provide the same utility as opposite-sex relationships. Also, trying to force someone to conform to one sexuality or identity increases pain instead of utility. I have been researching what a common objection to this view would be and how I would respond to it as a Utilitarian, but have had no luck finding any. If anyone has some suggestions, please feel free to share :)
Thank you!
Recently I've been interrogating some of my own beliefs through some of the techniques of SE, and I've come across some questions that I would love some input from the community on.
Primarily, I'd like to explore my god-belief. To give some background context, I was raised Christian and for a short time early in my life believed in the Christian god. It was fairly early on, if I remember correctly, that I began to question the religion and was unsatisfied with the answers. After leaving the Christian school I was made to attend, I came out of the atheism closet and probably would have been considered an anti-theist for a while after that (this is all pretty standard ex-Christian stuff, I think).
A few years ago, though, I began to revisit the idea of a higher power. I'm still very skeptical of religion, so this was more of a personal exploration of what it might feel like to believe in a "responsive universe", to put it one way. Throughout that process, I experimented with the outcomes of different beliefs--i.e. if I believe in a kind, loving higher power, what kinds of decisions do I make and what are the outcomes of those decisions? How does this belief affect my self-talk and other aspects of my psychology?
I'm a pretty strict utilitarian, so when these experiments yielded positive results (greater general happiness, positive decisions, more fluent creativity, etc.), that seemed like a good enough reason to choose to "believe" in said higher power.
The conflict I'm having, I think, is with my desire to believe things that are true. I have a high degree of personal confidence in this higher power, but only because I've chosen to. I don't think I could ever have any external confidence in this belief, as in thinking that someone else should think it's true, or that it's true in any demonstrable way. But if this belief is helpful and positive in my own life, and there isn't any evidence to directly contradict it, is that a good enough reason to believe it?
So, I guess I'm curious whether anyone else uses utilitarianism as a justification for belief. What is the ultimate purpose in striving to believe things that are true, and does choosing certain beliefs that would otherwise be uncertain inherently contradict that?
I thought of a funny thought experiment based on how utility is modeled by many economists.
Sorry if this is not clear, but English is not my native language and this is a bit technical, so I'll provide a link detailing the utility concept: https://courses.lumenlearning.com/boundless-economics/chapter/the-demand-curve-and-utility/
Utility is a 'score' that an economic actor gives a commodity; the actor then chooses to trade commodities so as to maximise his utility gain (the agents are considered to be rational).
The goal of a utilitarian economist should be to maximise the 'total utility' (the sum of the utilities of all commodities owned by each actor).
Now, utility is generally thought of as diminishing: having 1 house is great, having 2 houses is cool, having 3 is meh, having 4 is tiring, etc.
Generally we would represent utility with a bell curve or a logistic function; let's say in our example we use
utility(x) = 5/(1 + e^(-x)) - 5/2
OK, now for the funny part. Let's say we have two actors, A with 100X and B with 0X; the total utility would be f(100) + f(0) ~= 2.5. If we redistribute X and transfer 10X from A to B, the total utility would be f(90) + f(10) ~= 5.0.
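For anyone who wants to verify the arithmetic, here is a minimal Python sketch of the same calculation (the function name `utility` and the two allocations are just the ones from this example):

```python
import math

def utility(x):
    # Logistic utility from the example: u(x) = 5/(1 + e^(-x)) - 5/2,
    # so u(0) = 0 and u(x) saturates toward 2.5 as holdings grow.
    return 5 / (1 + math.exp(-x)) - 5 / 2

# Before: A holds 100 units of X, B holds 0.
before = utility(100) + utility(0)   # ~2.5
# After transferring 10 units from A to B.
after = utility(90) + utility(10)    # ~5.0

print(f"total utility before: {before:.4f}, after: {after:.4f}")
```

The doubling comes entirely from the saturation: A barely loses anything by dropping from 100 to 90, while B's first 10 units are worth almost the whole curve.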
So this very simple example seems to show that redistribution might be the way to go, according to how the economy is modeled by economists, in the hope of maximising total utility.
Happy to read about it if anyone knows counterpoints to this, as I guess this point has been made before.
Hey, sorry if this is a question you get a lot (if so I'm happy to be directed toward whatever resources), but there's one aspect of utilitarianism that's always bugged me. Basically, I see two parts to the argument for it. First is that more happiness = more good. The second is that more good = what one should aim for. To me this seems to be making too many leaps.
"Good" is an ambiguous term. For that reason you are free to define it how you want--but you can't give it two non-synonymous definitions. If you define "good" as "happiness", then you can't define "good" as "the object of correct action" (and vice versa). You have to actually make an argument for why happiness should be the object of what you "should" do. So, what is your argument for this last part? What argument can you make for utilitarianism without using the term "good"? (This is not a rhetorical question, btw. I'm genuinely interested.)
(Or, to phrase things another way: I often see utilitarians just assume that we should aim for happiness, and then sort of argue from there. It might seem like common sense, but I guess that's just not enough for me.)
In After Virtue, MacIntyre presents a picture of modern moral theories, like branches of deontological and consequentialist ethics, failing due to their rejection of teleology, which renders them unintelligible and self-referential while leaving them powerful instruments of social manipulation in the current "culture of emotivism".
While I somewhat get that deontology succumbs to this critique as there are no goals to achieve but actions being right and wrong in themselves, I don't get why consequentialist theories like utilitarianism lack telos. If one makes use of MacIntyre's telos of man to pass from a state of "untutored human nature" to "man-as-he-could-be-if-he-realized-his-telos" by embracing several virtues, then why can't one realize one's telos by Bentham's and Mill's "greatest happiness for the greatest number"? Why is man's telos not maximizing happiness in such a way?
In my university literature course, utopias in literature were brought up, which then led to a discussion of utilitarianism.
I have a basic understanding of utilitarianism and have just started to look into its direct opposites so that I can get a better grasp on these concepts. We were talking about the classic trolley problem and the modified version where, instead of pulling a lever, you actually have to push someone onto the tracks (I think?). I started thinking about how, if someone were to pull the lever or push the person into the path of the train, they would be saving more lives at the sacrifice of one. However, this also comes at the sacrifice of one's own morals (that is, if you believe that murder is wrong and so forth). Are most ideas of the practice of utilitarianism inevitably self-sacrificing if applied to one's personal life? Or maybe just in hypotheticals? If so, would this mean that utilitarianism is the more "selfless" philosophical theory?
But I've met a lot of people who are very sure of their stance on utilitarianism, saying that it is the more practical and selfless perspective, bolstering their moral standing over a person who might believe that the ends do not always justify the means. I am still kind of confused about the idea of the inevitable existence of a "loser" in order to provide satisfaction/needs met/happiness for another. I'm just thinking about this in hypotheticals, not in applications to our current world, because I want to better understand the basic definitions and concepts of utilitarianism and its oppositions. Thanks!
I couldn't really find anything on Google Books or Library Genesis. The Wikipedia article for it was good, though.
Forgive me if this has been asked before, I'm new to both philosophy and Reddit!
Can a plant be said to have an interest or preference in living?
"The capacity for suffering and enjoyment is... not only necessary, but also sufficient for us to say that a being has interests." - Animal Liberation (Singer)
Is a plant growing towards a light source evidence of a preference? And is adapting clever ways to spread seeds demonstrating an interest in reproduction? Or is this just bio-programming (for lack of the correct term)?
I realize that this preference may not be worth moral consideration because the plant arguably cannot suffer. However, after reading the above quote, I was wondering whether Singer was correct or if he needs to change "interests" to "interests worthy of consideration".
Any thoughts are very welcome - as I said, I'm very new to philosophy and am hoping that someone can clarify or affirm my thoughts on this.
I know communitarianism focuses on the common good, whereas utilitarianism focuses on maximizing the well-being of individuals overall, but don't those two things usually coincide? I can't think of a real-world example in which we, as citizens, sacrifice something for our community that doesn't also maximize the well-being of our fellow citizens. Please help!
From "Marxism and Ethics" by Paul Blackledge:
>"By focusing on the ends of actions rather than the means through which these ends are brought about, the broader family of consequentialist morality of which utilitarianism forms a part, is necessarily, in the words of Elizabeth Anscombe, βa shallow philosophy,β because for them βthe question βWhat is it right to do in such-and-such circumstances?β is a stupid one to raiseβ (Anscombe 1981, 36). The idea that our unmediated desires can act as a basis for the good life is fundamentally problematic. For, desires both change over time and exist as pluralities which do not necessarily pull in the same direction. We therefore must choose between them, and on these types of choices consequentialism has very little of interest to say. Indeed, by its focus on the ends of action, utilitarianism downplays just that aspect of our practice which is centrally important to moral theory: the means through which we aim to realize our ends. This lacuna goes a long way to explaining how, despite its radical roots, this approach has been used to justify all manner of inhuman acts in the name of their future consequences (MacIntyre 1964), and by conflating happiness with increased wealth it is blind to the way that modern societies generate so much unhappiness (Ferguson 2007; cf Frank 1999, Ch. 10; and Wilkinson 2005).β
Nick Land's essay "Utilitarianism is Useless":
>Utilitarianism is completely useless as a tool of public policy, Scott Alexander discovers (he doesn't put it quite like that). In his own words: "I am forced to acknowledge that happiness research remains a very strange field whose conclusions make no sense to me and which tempt me to crazy beliefs and actions if I take them seriously."
>
>Why should that surprise us?
>
>We're all grown up (Darwinians) here. Pleasure-pain variation is an evolved behavioral guidance system. Given options, at the level of the individual organism, it prompts certain courses and dissuades from others. The equilibrium setting, corresponding to optimal functionality, has to be set close to neutral. How could a long-term "happiness trend" under such (minimally realistic) conditions make any sense whatsoever?
>
>Anything remotely like chronic happiness, which does not have to be earned, always in the short-term, by behavior selected--to some level of abstraction--across deep history for its adaptiveness, is not only useless, but positively
An argument/ thought that I would be happy to hear some feedback about.
Especially with regard to ancient philosophers such as Plato, who argues for a rudimentary form of utilitarianism in some of his works, such as "The Republic" -- although I'd argue that Marxist dialectics is a closer comparison to his view of applying paternalistic policies in order to guarantee the prosperity of the community, given that he even advocates that the exchange and sale of private property is an unjust form of living.
If you are a utilitarian and believe in the existence of many worlds where all possible events play out, why should you care about what happens in the world you happen to live in? That is, in reacting to the event that X happens in this world, why should you care, given that not-X happened in another world and the experiences of people in that world are equally important?
One response could be that the experiences of people in the other worlds are not equally important - this seems plausible in modal realism (and I am curious what modal realists think about this), but not plausible for more physically based theories.
It may be because I'm not too well-read on ethics, but utilitarianism sounds vastly superior to deontology. Of course, deontology is one of the most widely taught schools of ethics, so it's definitely because I'm missing something. To make this easier on me, how would a deontologist respond to something popular like the trolley problem, and why?
(Note: I don't exactly know the history of the trolley problem, so if it was made intentionally to oppose deontology, just tell me that and use a different example.)
The claim that utilitarianism makes too much of a moral demand is self-defeating. It's not a true objection to utilitarianism, but more of a failure to recognize one's own limitations.
I.e. the idea that one should make massive sacrifices for the greater good assumes that there are massive sacrifices that are within one's capacity to make, on a day to day basis.
Moments of "diving on the grenade" are highly unusual. It's also not pragmatic (and therefore, not utilitarian); one could spend so much time debating over whether or not to be vegan, whether or not to buy or not buy these shoes over potential slave labor. OR you could devote your life to finding the ways in which you could maximize good, making small choices on things like animal suffering like eating lesser to the benefit of your own health without altering your lifestyle in a way that might be personally unsustainable.
And instead you perhaps start a business or organization related to something you're good at, or build on a skill you have, enriching your own life and improving the lives of others at the same time.
I don't believe that utilitarianism is a perfect moral system, but I do believe that the yet-unrealized perfect moral system must have maximal utility across time.
And utility is probably the best absolute test for a moral system, but its difficulty comes from measurement.
Let's say a civilization has the technological means of altering the perception of time or simulating beings, and you have incontrovertible proof that they were about to use this capacity to trap a sentient being in an eternal hell of the highest pain imaginable. In this case, I mean eternal in a literal sense: the subjective experience of the torture would never stop or diminish in its intensity from the perspective of the being. Essentially something more or less identical to the −∞ found in Pascal's Wager.
If the only method to prevent this were to eliminate every single person or trace of that civilization, would it be ethically permissible to do so under utilitarianism?
What if the proof were not incontrovertible? Would the mere nonzero probability of one being experiencing eternal torture also permit the destruction of the civilization? How low would the probability have to be for this to no longer be permissible, given that it is the worst utilitarian outcome possible or imaginable for the single being involved?
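One thing worth noting: if the disutility of the eternal torture is taken to be literally infinite, naive expected-utility arithmetic gives a degenerate answer to the "how low would the probability have to be" question. A minimal Python sketch (the specific numbers are made up for illustration):

```python
# Naive expected-utility comparison under an assumed infinite disutility.
INFINITE_TORTURE = float("-inf")  # the eternal-hell outcome
DESTROY_CIVILIZATION = -1e12      # any finite disutility, however large

def expected_disutility_of_inaction(p_torture):
    # With probability p the torture happens; otherwise nothing bad does.
    return p_torture * INFINITE_TORTURE

for p in (0.5, 1e-9, 1e-300):
    print(p, expected_disutility_of_inaction(p))
# Every nonzero p evaluates to -inf, which is worse than the finite
# DESTROY_CIVILIZATION cost, so this naive calculus "permits" destruction
# at any probability greater than zero.
```

So under a straightforward expected-utility reading there is no probability threshold at all, which is one reason infinite utilities (as in Pascal's Wager) are widely treated as a problem case for the theory rather than an input to it.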
According to Wikipedia, meta-ethics is:
In metaphilosophy and ethics, meta-ethics is the study of the nature, scope, and meaning of moral judgment. It is one of the three branches of ethics generally studied by philosophers, the others being normative ethics (questions of how one ought to be and act) and applied ethics (practical questions of right behavior in given, usually contentious, situations).
While normative ethics addresses such questions as "What should I do?", evaluating specific practices and principles of action, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the assumptions underlying normative theories. Another distinction often made is that normative ethics involves first-order or substantive questions; meta-ethics involves second-order or formal questions.
"What is goodness?" and "How can we tell what is good from what is bad?
Whether you consider them good or bad answers, utilitarianism seems to answer these questions: what is good is what increases happiness, and what is bad is what decreases happiness.
I know I'm being simplistic here, but my point is that from this starting point you can create many different, more complex frameworks, so I don't understand why it is not a metaethical theory.
I asked a Christian about God's morality and why he does things that are heinous but are good, and his response was this: "God's commands could completely overlap with utilitarianism. God decides what is good, but that means God could decide that something we consider to be wrong, because of utilitarianism etc., is good. That means that God could say torturing babies is good. People think torturing babies is wrong because of utilitarianism, etc., so most people probably wouldn't accept divine command as what makes something good. I don't see anyone, when faced with this situation, accepting that if God commanded torturing babies to be good, that it would actually be good. Not because it violates our intuitions, but because it violates utilitarianism." ... And I asked him, "So being homosexual, and wearing mixed fabrics, and picking up sticks on a Saturday is basically immoral under God's divine law, right?" And he said "yes". ... Then I said, "Then your god is a snowflake edgy deity."
Thanks in advance for any answers :)
In my opinion, one problem that Utilitarianism might have is that some beings might suffer so others can feel pleasure, if that maximizes the utility (utility = pleasure - suffering). This might be a problem because it might seem unfair, and because it fails to respect the separateness of each being. Example: someone who is a slave won't be happy, even if their slavery maximizes the utility and makes other citizens happy.
So, does there exist a type of Utilitarianism like the following, or similar?
The goal of this Utilitarianism is both to maximize the utility and to make its distribution as even as possible.
Example. Let's imagine 2 worlds:
World A:
Utility = (7 + 1 + 3 + 1) - (5 + 2 + 1 + 8) = 12 - 16 = -4
Distribution of utility: very bad
World B:
Utility = (3 + 3 + 3 + 3) - (4 + 4 + 4 + 4) = 12 - 16 = -4
Distribution of utility: excellent
So in both worlds, the utility is the same. But in the second world, the utility is distributed much much better. My moral intuition tells me that it's preferable that World B exists rather than World A. But I don't know for sure.
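For what it's worth, here is a minimal Python sketch of one way to score the two worlds, assuming hypothetical per-person (pleasure, suffering) pairs chosen to reproduce the totals above, and using population variance as a crude measure of how unevenly utility is spread:

```python
import statistics

# Hypothetical per-person (pleasure, suffering) pairs for each world,
# paired up arbitrarily so that the totals match the example above.
world_a = [(7, 5), (1, 2), (3, 1), (1, 8)]
world_b = [(3, 4), (3, 4), (3, 4), (3, 4)]

def summarize(world):
    utilities = [pleasure - suffering for pleasure, suffering in world]
    # sum() is the classic utilitarian score; variance is one crude way
    # to score how (un)evenly that utility is distributed.
    return sum(utilities), statistics.pvariance(utilities)

print("World A:", summarize(world_a))  # (-4, 13.5): same total, uneven
print("World B:", summarize(world_b))  # (-4, 0.0): same total, even
```

The position you are describing, where the worse-off get extra weight rather than only the total being counted, is close to what philosophers call prioritarianism, so that may be the keyword to search for.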
Important things to say:
From SEP
>Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good--that is, consider the good of others as well as one's own good.
On this view, there should be nothing wrong with utilitarianism if 10 people rape a woman and derive great pleasure from it, and the woman is then killed and no one ever knows about it. But I don't see how their pleasure can compensate for my pain. We are different people. How can their happiness offset my pain?