Is there more to morality?

 Jon Stewart 07 Apr 2020

...than our evolved instincts?

Continued from a thread that started out about how long you can walk for under the lockdown rules.

In reply to Coel Hellier:

> Really?  Compared to many philosophers (Nietzsche, Heidegger, Wittgenstein, etc) he is a model of straightforward clarity!

Fair point! But, what I'm getting at here is that if presented with a moral dilemma, I can't tell you what Hume would say about it. He'll tell me that morality is ultimately subjective (I can't get an ought from an is), I can't reason my way to an answer, I'll need to consult my feelings, and he'll suggest that I might feel that the action with the highest utility seems like the right thing to do.

But what if it doesn't? What if I just feel it's better to keep black people out of my shop? I think black people are dangerous and might harm me, my customers would prefer it if I had a "no blacks" sign up, I'd do better business, and I feel that it's the right thing to do.

Reason/utilitarianism/moral realism would tell me that I haven't got any good reasons to prioritise the feelings of me and my in group over the equally important feelings of black people, and that the suffering caused by my racism makes it wrong. But going with my feelings - natural feelings that I evolved - tells me it's right.

So according to you and Hume, what ought I do, and why?

> But, utilitarianism does provide a good description of ways in which humans think about morality. 

That's the whole challenge to utilitarianism - it fails to chime with our evolved instincts! You can't brush that under the carpet and pretend that utilitarianism is the natural outcome of evolution, that's patently false. The problems we have in the world of conflict and exploitation stem from the fact that our evolved behaviour doesn't deliver good outcomes - we hoard resources while others starve, we kill each other so our in-group can rule the patch of land, etc, etc. 

> I actually think that it does.  Once you've described what human morality is, where it came from and why we have it, what else is there in meta-ethics left to do?

Errm, give us a basis for deciding right from wrong, perhaps?

You've said that you agree with Hume that you can't get an ought from an is. And given no alternative as to how you get an ought, other than "because of feelings". So you've answered one metaethical question (moral realism or anti-realism) and then left it there, which is a position of wishy-washy anything-goes relativism. You've got no framework to distinguish right from wrong, just a load of "is" statements about human behaviour, and an endorsement that if you feel it's right, just go ahead.

> If you're rooting things in human feelings, desires and preferences, then we're largely agreed.   But, the issue comes when one turns to moral prescriptions.  Utilitarianism is moral-realist and seeks to provide moral prescriptions.  But, ultimately, it can provide no warrant for those prescriptions, except by rooting them in "because humans want it that way".

If you make the requirement that moral realism must boil down to something that transcends or does not depend upon human desires, then utilitarianism is not moral realism. But who cares? I'm not bothered how a philosopher would classify the theory, I'm bothered about whether it's any good. Can I use this theory to justify moral choices that I believe in?

I want to show that the way to reach good moral judgements is to employ reason, and that the passions (our evolved instincts) aren't to be trusted. That doesn't require a transcendence of human feelings. It requires us to discuss human feelings according to reason, that's all.

> Now, there is absolutely nothing wrong with that last, that is all there is. But it does not give you a moral-realist scheme.

As I said, so what. If it gives me an ethical theory that's founded in reason, then it means that we expect to reach consensus: when you analyse and debate the facts, different people will be forced to similar conclusions. We can all get at the same answers. Sounds to me like it's becoming objective, even though it doesn't meet your (pointless) bar of being rooted in something above and beyond what humans want.

> You're right, I'm not!  I do assert that there can be no moral prescriptions other than those ultimately rooted in "because I want it that way".  

I can't make sense of this. If your reasons for moral prescriptions can't go beyond "because I want it that way", then that is the definition of "wishy-washy anything-goes relativist".

- "I don't want blacks in my shop, and should not have to tolerate them, because I want it that way"

 -"Well I want it a different way, in which people have equal rights and we achieve higher overall utility"

Are both viewpoints equally valid, or not?

> But, the "wishy-washy anything-goes relativist" then says "therefore you can't say that X is wrong".  But that's not so, you can indeed say that.   "X is wrong!".   Nothing prevents you advocating for how you want things to be.  Nothing says you need to merely accept things that you see as odious and harmful. 

What your view says is that all you can say is "I don't like it". Only if you rule the world does "I don't like it" provide a way to change something. Wishy-washy anything-goes relativism allows you to say "I don't like it, but my reasons for not liking it are no more valid than yours for liking it". Which is exactly what you're saying.

Post edited at 10:37
Gone for good 07 Apr 2020
In reply to Jon Stewart:

Don't know about you but I think I'll have steak pie and chips for dinner. Peas or beans?

 olddirtydoggy 07 Apr 2020
In reply to Gone for good:

We're going to do a curry, chicken thigh on the bone after 24 hours of marination. I like to wok fry the rice with fresh coconut, gives it a real lift. This is the food and cooking thread isn't it?

OP Jon Stewart 07 Apr 2020
In reply to Gone for good:

> Peas or beans?

Which would Hume have?

 Yanis Nayu 07 Apr 2020
In reply to Jon Stewart:

> Which would Hume have?

I often find myself asking that.

 Moomin.williams 07 Apr 2020
In reply to Jon Stewart:

Morality is subjective: each person has their own set of ways of functioning as part of society; some of these behaviours are acceptable to the general population, some less so.

There most probably was a time when something like tribalism and fear of the outsider was a true evolutionary driver, when you were more likely to survive as part of a strong group.

But learnt behaviour also forms part of each individual's moral code, and there is often no evolutionary driver for it.

For me it's too simplistic to say our morals are either evolutionarily wired into us or they are societal pressures; it's got to be a mixture of both. So I would be quite in agreement with morality being a set of "wishy-washy is statements about human behaviour", because it's either that or the equally wishy-washy "everyone has their own morality", which, whilst true, is not a helpful way of describing it as a tool to understand human behaviour.

For where we are now, we have got to a point where even the most immoral act is unlikely to prevent reproduction, so the mechanisms of societal pressure become the bigger force.

 Martin Haworth 07 Apr 2020
In reply to Gone for good:

Hmmm...tricky...tell you what, blow self-discipline, I'd go for peas AND beans; seems to me that has the greatest utility.

 AllanMac 07 Apr 2020
In reply to Jon Stewart:

> Peas or beans?

> Which would Hume have?

They are both leghumes.

Post edited at 12:00
 HansStuttgart 07 Apr 2020
In reply to Jon Stewart:

> peas or beans

> Which would Hume have?

the chickpeas, to make Humus

OP Jon Stewart 07 Apr 2020
In reply to Moomin.williams:

> Morality is subjective: each person has their own set of ways of functioning as part of society; some of these behaviours are acceptable to the general population, some less so.

That's a fair statement of how things are, but I'm asking, is any one set of values any better than another, or are they all equally valid? Is a racist society just as good as one with equal rights, are they just different subjective preferences?

> For me it's too simplistic to say our morals are evolutionarily wired into us or they are societal pressures, it's got to be a mixture of both.

Agreed. Our evolutionary history informs what societal pressures exist. But we also managed to come up with reason, and I think reason is the way to distinguish a good set of preferences from a bad one.

> So I would be quite in agreement with morality being a set of "wishy-washy is statements about human behaviour" because it's either that or the equally wishy washy everyone has their own morality, which whilst true is not a helpful way of describing it as a tool to understand human behaviour.

I agree that our moral instincts and behaviour are just a set of statements about how things are. The choice though is not between that and a total free-for-all. I'm proposing the alternative that while what *is* can be explained by evolutionary psychology, what *ought* can be determined by reason.

> For where we are now, we have got to point where even the most immoral act is unlikely to prevent reproduction so the mechanisms of societal pressure become the bigger force.

Where we are now, we can overcome our evolutionary instincts because we have developed sufficiently good use of reason to determine right from wrong. A society with equal rights is better than a racist society for good reasons, not just a matter of subjective preference.

 Yanis Nayu 07 Apr 2020
In reply to Jon Stewart:

I guess the question of which value system is best  comes down to which gives society the best outcomes - what represents the best outcomes is a whole other question. 

 Lankyman 07 Apr 2020
In reply to Martin Haworth:

> blow self discipline Id go for peas AND beans

That's just so much hot air!

OP Jon Stewart 07 Apr 2020
In reply to Yanis Nayu:

> I guess the question of which value system is best  comes down to which gives society the best outcomes

Exactly.

> what represents the best outcomes is a whole other question. 

The least suffering.

 DD72 07 Apr 2020
In reply to Jon Stewart:

It is definitely more than instinct because evolutionary drivers are pretty simple and consistent whereas morality and moral codes vary widely across time and space. 

Relativism can't account for the reality that moral codes can be pretty strong and difficult to break for most people, even though people debate and test them all the time (particularly around the edges, where they are unclear or evolving).

You touch on something important when you point out that the relative morality of people in power (world rulers) is important. But they still get their power from their position within a community (albeit at the top) and it's hard for them to exercise that power (for long) if they are seen as completely immoral.

 DD72 07 Apr 2020
In reply to Jon Stewart:

It sounds seductively simple but how do you even start to quantify suffering in order to identify what is the least?

 Timmd 07 Apr 2020
In reply to DD72:

I guess 'net suffering' needs to be balanced against the rights of the individual, because there may be times when there is conflict between reducing net suffering and upholding individual rights. Perhaps to do with medical care and who receives what treatment or resources.

Post edited at 13:44
 DD72 07 Apr 2020
In reply to Timmd:

Yes I agree, the utilitarians have a way round this with the whole act vs rule utilitarianism distinction, but personally I think it is a way of fixing the flaws in the original argument. I would say rights are something that have developed (very imperfectly and by no means consistently) over time in a broadly positive direction.

I would also say that individuals are not the only things that have rights: animals, or even species, ecosystems etc.

 DD72 07 Apr 2020
In reply to Timmd:

In medical care, we do have a form of utilitarianism (quality-adjusted life year, or QALY, measures) which can be imperfect and breaks down a lot in the face of public pressure, but it is an attempt. I'm not sure how it works in the current situation where judgments are being made quickly.
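
To make the QALY idea concrete, here is a minimal sketch of the arithmetic; the treatments, quality weights and costs are invented for illustration, not real clinical figures:

```python
# Minimal sketch of QALY arithmetic (all figures invented).
# A QALY weights each year of life gained by a quality factor
# between 0 (equivalent to death) and 1 (full health).

def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life years for one hypothetical outcome."""
    return years * quality_weight

# Two hypothetical treatments for the same condition:
a = qalys(years=10, quality_weight=0.6)  # 6.0 QALYs
b = qalys(years=6, quality_weight=0.9)   # 5.4 QALYs

# Commissioners then compare cost per QALY gained:
print(20000 / a)  # ~3333 per QALY for treatment A
print(9000 / b)   # ~1667 per QALY for treatment B
```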

OP Jon Stewart 07 Apr 2020
In reply to DD72:

> It is definitely more than instinct because evolutionary drivers are pretty simple and consistent whereas morality and moral codes vary widely across time and space. 

I think our moral codes and behaviours are just full of hypocrisy, because they stem from evolved drives that had no adaptive reason to be consistent. Religions give us a lot of insight into the kinds of results you get when you throw reason out of the window and try to codify moral instincts. Love thy neighbour (unless they're black or gay); subjugate women; don't eat pork or shrimps on a Tuesday afternoon; etc etc.

> Relativism can't account for the reality that moral codes can be pretty strong and difficult to break for most people although they debate (particularly around the edges where they are unclear or evolving) and test them all the time.

Relativism is fine if you don't care about outcomes. It's a perfectly consistent account of evolution endowing us with moral instincts and behaviours that turn out differently in different groups of people. What relativism fails to account for is any reason to think that a society in which lots of people suffer is any worse than one in which everyone feels good. Relativism says that moral progress is impossible or meaningless - if we follow the path of relativism, then whoever has the power gets to make society the way they want it for their own self-interest. It's a path to widespread misery because it says there's nothing wrong with totalitarian control, putting women in cloth bags or holding slaves.

> You touch on something important when you point out that the relative morality of people in power (world rulers) is important. But they still get their power from their position within a community (albeit at the top) and its hard for them to exercise that power (for long) if they are seen as completely immoral.

Depends what tools they have at their disposal to keep themselves in power - this isn't something I'd like to rely on.

OP Jon Stewart 07 Apr 2020
In reply to DD72:

> It sounds seductively simple but how do you even start to quantify suffering in order to identify what is the least?

I think this is where there is a problem with utilitarianism. 

Utilitarianism is a quantitative theory, and depends on quantifying conscious states that are entirely qualitative. If you're going to say that there is an actual true answer to the question, "is x or y the better action/policy" then you need to believe that your quantitative account of suffering is correct. Given the qualitative nature of experience and our lack of access to other minds, how could you ever know that?

So maybe there isn't a true answer. But that doesn't mean you can't use reason and come to a consensus about which has the most compelling case for producing the least suffering. It may be that you can never know which action will produce the least suffering, but it doesn't mean that you can't come to a reasonable expectation, and give good reasons why you chose to do x rather than y.

While there's something definitely flaky about a quantitative account of suffering, it's better than trying to come up with rules (what makes the rules right? God? They just feel right?) or giving up and being a wishy-washy anything-goes relativist.

Post edited at 14:18
OP Jon Stewart 07 Apr 2020
In reply to Timmd:

> I guess 'net suffering' needs to be balanced against the rights of the individual

How do you decide what rights an individual has?

 Yanis Nayu 07 Apr 2020
In reply to Jon Stewart:

> Exactly.

> The least suffering.

Most equal? How do you think people’s value systems coincide with / compete with innate drives - competitiveness for example?

 DD72 07 Apr 2020
In reply to Jon Stewart:

I'm not sure you can have an evolved drive that has no adaptive potential. Religion isn't perfect but the fact that it persists suggests that it serves some purpose. Also remember it is generally at the fringes of religion that the worst of the racism, prejudice and intolerance occurs. The mainstream generally concerns itself with trying to do the right thing as it sees it.

I'm an atheist by the way just not a Branch Dawkinian.

 freeflyer 07 Apr 2020
In reply to Jon Stewart:

> I think our moral codes and behaviours are just full of hypocrisy,

Ok, I'll bite

You want to find some defensible moral basis for your action and opinion, and I think you are suggesting also that Socratic debate is a test of rationality; in other words if you can't find anything wrong with it, your moral standpoint has a greater chance of being both useful and correct. If so, I like this plan.

I'd suggest a look at the rules for discernment invented a while ago by Ignatius of Loyola, specifically for this purpose. Stripped of the religious context of the time and absurdly simplified, they go something like this:

I have an internal understanding of the world and my place in it; I am happy when the world is how it should be; when it isn't I am unhappy, and have to decide whether my expectations are the problem, or whether the world is the problem.

Ignatius says that when I am happy, I am in harmony with the world; remember this feeling. When I am unhappy, I am not in harmony - so I am the problem, and my task is to re-align my expectations and attitudes in order to get back in harmony again. It's not the place I am in, which may be awful or delightful; what counts is my attitudes and feelings.

Once I have sorted out my head, Ignatius suggests that the moral basis of my opinions will be both good and defensible; I have to trust myself. I then have the challenge of persuading others who are unhappy that they are the problem; if that doesn't work, I am unhappy - start again!

So unfortunately, there is no solely rational basis for attitudes and opinions, since as Moomin.williams points out, morality is subjective. However rationality inevitably intrudes into a discussion between individuals, especially large numbers of them, and so there is a basis for discussion, and a hope that we may be able to live together happily.

Peas or beans

In Albert Camus' book La Peste (The Plague) he describes the reaction of his various characters to the plague that overcomes their city. One such is an old asthma patient of the doctor, who passes his time by staying in bed, and taking peas from one pan and putting them in another, as well as passing comment on the townspeople and their activities.

I'm feeling some kinship with this guy while sitting at home; however like the doctor, I'm also taking consolation from helping out where I can.
 

OP Jon Stewart 07 Apr 2020
In reply to Yanis Nayu:

> Most equal?

I don't like the sound of that much.

> How do you think people’s value systems coincide with / compete with inate drives - competitiveness for example?

I think people's value systems are a consequence of their innate drives. This makes those value systems (e.g. religion, political libertarianism) untrustworthy. However, in an account of what leads to the least suffering, you have to take into account people's innate drives (e.g. competitiveness) and try to set things up so people get to live out their innate desires while causing the least suffering. You want a society with sport rather than war, for example.

 DD72 07 Apr 2020
In reply to Jon Stewart:

I would tend to agree utilitarianism gives a reasonable direction of travel but if you rely on it alone you can get into all sorts of trouble. A lot of eugenic arguments are broadly utilitarian.

I also don't think it has to be an either-or (Hume was too simplistic; is and oughts are not so separate). Religion has something to bring to the party but, like utilitarianism, you can't rely on it exclusively.

OP Jon Stewart 07 Apr 2020
In reply to DD72:

> I'm not sure you can have an evolved drive that has no adaptive potential.

Nor do I - sorry if I gave that impression.

> Religion isn't perfect but the fact that it persists suggests that it serves some purpose.

Agreed. Before we had the enlightenment, it was probably better than a free-for-all and was there for an adaptive reason.

> Also remember it is generally at the fringes of religion that the worst of the racism, prejudice and intolerance occurs. The mainstream generally concerns themselves with trying to do the right thing as they see it.

That's true in an environment where reason has taken hold, but in the past the mainstream has been 100% behind the worst possible racism and systematic moral failure.

OP Jon Stewart 07 Apr 2020
In reply to DD72:

> I would tend to agree utilitarianism gives a reasonable direction of travel but if you rely on it alone you can get into all sorts of trouble. A lot of eugenic arguments are broadly utilitarian.

They're bad utilitarian arguments that fail to make a good fist of considering the consequences.

> I also don't think it has to be an either-or (Hume was too simplistic; is and oughts are not so separate). Religion has something to bring to the party but, like utilitarianism, you can't rely on it exclusively.

Can we call this ethical theory mishmashism?

 DD72 07 Apr 2020
In reply to Jon Stewart:

I call it Deweyan Pragmatism but the allegation has been raised.

I would throw in a bit of Taoism as well, which may be where Freeflyer is going as well, albeit from a different point. 

There is a right course of action but it is contingent and not usually knowable by those who are actually acting it out, in circumstances where it really matters like now.

It is worth noting that on some of the big things ('thou shalt not kill') the room for ambiguity is pretty narrow. I'm not sure anyone would bother with some of the convoluted and pedantic arguments about why they should be allowed to... that I have been enjoying watching played out here over the last few weeks.

OP Jon Stewart 07 Apr 2020
In reply to freeflyer:

Thanks, that's really interesting. I'll have to go away and have a think about it though, as it's new to me and quite radically different to how I think of ethics.

My first reaction though is: what if I'm unhappy because some f*cker just kicked me in the balls for apparently no reason? That's not something I want to be in harmony with, nor to adjust my expectations and attitudes towards. It doesn't strike me at first glance that Ignatius of Loyola is providing a theory that could be used to structure society and its institutions - but maybe that's not what he was trying to do?

 Moomin.williams 07 Apr 2020
In reply to Jon Stewart:

> That's a fair statement of how things are, but I'm asking, is any one set of values any better than another, or are they all equally valid? Is a racist society just as good as one with equal rights, are they just different subjective preferences?

I would say no, a racist society is not as good as one with equal rights, but someone can have a different opinion and we can then have a discussion about the pros and cons of each point of view. So yes, they are subjective preferences. Where the distinction comes is when a society has a debate and comes to a consensus that racism is a bad idea; then it changes from being an individual preference to a societal moral. It is still possible to hold the counter viewpoint, but the individual has to live with the consequences of behaving in a way that society has deemed unacceptable.

> Agreed. Our evolutionary history informs what societal pressures exist. But we also managed to come up with reason, and I think reason is the way to distinguish a good set of preferences from a bad one.

This is where the definition of "morals" comes in. I use a definition of: the rules a society or individual uses to navigate life. Then yes, we have moved on from using Darwinian evolution to create our morals, through some religious basis, and hopefully now to using reason and logic in the main. They're just tools employed to determine the rules of engagement for society. Now that we use reason, I think you could argue we have a better set of rules, as we reject old rules and replace them with better ones.

> I agree that our moral instincts and behaviour are just a set of statements about how things are. The choice though is not between that and a total free-for-all. I'm proposing the alternative that while what *is* can be explained by evolutionary psychology, what *ought* can be determined by reason.

I think we agree - see the section above. Our moral codes are changing as we use better tools to determine better rules.

> Where we are now, we can overcome our evolutionary instincts because we have developed sufficiently good use of reason to determine right from wrong. A society with equal rights is better than a racist society for good reasons, not just a matter of subjective preference.

Again agreed, but it's not a different set of moral codes, it's just a better way of determining what they should be. There is no definitive set of morals that are good or bad; each idea must stand and fall on its own merit. We've simply progressed from what will keep us alive, to what some guy in a silly hat says, to using reason to try and establish what is best for the greatest number of people. It's messy, imperfect and often corrupted by those who have different goals in mind, but it's got us this far and we should continue to separate the good ideas from the bad with the best tools we have available.

Maybe we should focus on the tools we use to distinguish the good from the bad, rather than the rules themselves.

 freeflyer 07 Apr 2020
In reply to DD72:

> a bit of Taoism as well, which may be where Freeflyer is going as well

Good spot! I'm an eclectic, but if pushed I would say my roots are there and in Zen. I suspect a proper Jesuit would give me a right roasting for my previous post, but then, that would be a good debate!

ff

 Moomin.williams 07 Apr 2020
In reply to Jon Stewart:

Just to add to my previous post, even though it's too long for anyone to read already. 

Simplistically, I don't give a monkey's where your morals come from. What's important to me is what they are and why you hold them.

For example, I have decided it's acceptable to take my dog for a second walk for 15mins, late in the evening in a place where we see no one. Others can disagree, and we can have a discussion on why, they may change my mind.

On societal norms and morals it's been interesting to watch the change in approach from the regional police Twitter feeds. In a pre-covid world they consisted of "we arrested this bad person and they're going to court" or "we're looking for this bad person, can you help", and the responses were overwhelmingly positive, because what they were doing conformed to the accepted norms. They then continued this approach for covid stay-at-home messages, most famously Derbyshire's drone video ("look at these bad people out walking their dog"), and they probably expected the same positive response. Society, however, hasn't had time to work out what the new rules of engagement are, so the response was mixed! Since then they've gone for positive reinforcement messages (photos of empty roads, "thanks for staying home"), which is a much safer way of engaging until there is agreement on the acceptability of a behaviour. It's no different to drink driving or driving on the phone.

 1poundSOCKS 07 Apr 2020
In reply to Jon Stewart:

> The least suffering.

I'm not even sure I'm at my happiest and most fulfilled when I have the least suffering.

 freeflyer 07 Apr 2020
In reply to 1poundSOCKS:

Tommy Caldwell on suffering:

youtube.com/watch?v=PnMs_qLwaes&

 Tom Walkington 07 Apr 2020
In reply to Jon Stewart:

I think 'moral behaviour' is simply 'intent'. 'Good intent' is 'morally good/right', and 'bad intent' is 'morally bad/wrong'.

If people are genuinely using 'good intent', then we can accept that as a given, and move on to the problem of what is 'the best thing to do'. 'The best thing to do' is that which has 'the best outcome'.

An all-knowing superbeing that understood the whole of existence would be able to know 'the best thing to do' to get 'the best outcome'.

As humans we have limited understanding, but we can at least try to work towards a 'best outcome'.

From our human perspective, we put value on experiences for ourselves and others (including any sentient beings we are aware of).

'The best outcome' is that which produces the greatest total score of positive experience. This score is the sum of the positive and negative experiences.
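
As a rough sketch of that scoring idea (the scores, and the assumption that experiences can be put on a single scale at all, are invented for illustration):

```python
# Sketch of the "total score" idea: each experience gets a signed
# value (positive experiences > 0, negative < 0), and the best
# outcome is the one with the greatest sum. All scores invented.

outcomes = {
    "option_1": [+5, +2, -1],  # two goods, one mild harm
    "option_2": [+8, -4, -3],  # one big good, two harms
}

totals = {name: sum(scores) for name, scores in outcomes.items()}
print(totals)                       # {'option_1': 6, 'option_2': 1}
print(max(totals, key=totals.get))  # option_1
```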

In the future we may be able to directly measure our experiences to some extent by instruments, which would help us make better decisions.

 Tom Walkington 08 Apr 2020
In reply to Jon Stewart:

Just thought, as a postscript to my post, that I would define 'good intent' as 'wanting the best outcome'.

OP Jon Stewart 08 Apr 2020
In reply to 1poundSOCKS:

> I'm not even sure I'm at my happiest and most fulfilled when I have the least suffering.

Can't beat a spot of war, famine and disease.

OP Jon Stewart 08 Apr 2020
In reply to Moomin.williams:

> Where the distinction comes is when a society has a debate and comes to a consensus that racism is a bad idea; then it changes from being an individual preference to a societal moral. It is still possible to hold the counter viewpoint, but the individual has to live with the consequences of behaving in a way that society has deemed unacceptable.

So far, so subjective...

> Now that we use reason, I think you could argue we have a better set of rules, as we reject old rules and replace them with better ones.

> I think we agree - see the section above. Our moral codes are changing as we use better tools to determine better rules.

I agree entirely, but you've made the move away from subjective morality to something objective - you're saying, like me, that when you employ reason, it's a better tool and thus gives you better rules. Not just rules that you happen to like more, because that's your taste. 

> Again agreed, but it's not a different set of moral codes, it's just a better way of determining what they should be. There is no definitive set of morals that are good or bad; each idea must stand and fall on its own merit. We've simply progressed from what will keep us alive, to what some guy in a silly hat says, to using reason to try and establish what is best for the greatest number of people.

I agree with utilitarianism, and I think most atheists (and even many religious people if they could reflect deeply enough) do without realising it - but I don't think that we (the human race) are anywhere near this consensus yet.

> Maybe we should focus on the tools we use to distinguish the good from the bad, rather than the rules themselves.

Agreed, this is the heart of the discussion (metaethics). We agree that utilitarianism is the best ethical theory, but we might disagree about the ethical conclusions we draw from it. As has been pointed out, people make (bad) utilitarian arguments for eugenics.

OP Jon Stewart 08 Apr 2020
In reply to Tom Walkington:

> Just thought, as a postscript to my post, that I would define 'good intent' as 'wanting the best outcome'.

You're another utilitarian then, hurrah! I totally agree that intentions matter. A classic argument against the simplest form of consequentialism is "what if you do something with good consequences when you intended to do something nasty - is that morally right?".

As you point out, a good moral choice is one which you've thought about and come to the conclusion that it will have the best consequences.

 Gordon Stainforth 08 Apr 2020
In reply to Jon Stewart:

> Agreed, this is the heart of the discussion (metaethics). We agree that utilitarianism is the best ethical theory, but we might disagree about the ethical conclusions we draw from it. As has been pointed out, people make (bad) utilitarian arguments for eugenics.

I'm going to keep right out of this, because it's such a difficult subject. 'Utilitarianism is the best ethical theory' seemed to be true in about the mid C19th ... I highly recommend that everyone studies this in some considerable depth before pontificating. I spent three years as an undergraduate studying moral philosophy (1969-72), then got back to it a bit in c. 1976-8. Then a long gap, and got right back into it again in the late 80s to early 2000s.

The reason I give this warning (if I were on an iPhone I'd insert multiple emojis) is that it's the only subject I know that gets ever more difficult the more you study it.  Well, I finally just about got it sorted out to my own satisfaction after about 35 years (and it is certainly not crude 'greatest number' utilitarianism), but I'm absolutely not prepared to discuss it at all because [of course] one can't do philosophy at all successfully over the internet, for many reasons.

I'm sure we've had discussions here about 10-20 years ago, and all I could say then was that one of the best starting points was (and still probably is) MacIntyre's Short History of Ethics. Otherwise it's a list as long as your arm.

The only other one I'd strongly recommend in the same way is Downie and Telfer's 'Respect for Persons'. Quite a rare beast now, but if you use advanced Amazon search with Downie as author, plus the title, you'll find there are quite a few copies available for not much more than 2 quid. But ensure you get the original edition with the red cover.

Post edited at 16:55
OP Jon Stewart 08 Apr 2020
In reply to Gordon Stainforth:

> 'Utilitarianism is the best ethical theory' seemed to be true in about the mid C19th ...

It still seems to be the best ethical theory to a lot of contemporary philosophers, e.g. Peter Singer, William MacAskill, and I'm sure countless others who've spent their entire careers doing nothing except studying philosophy.

I'm quite prepared to consider that these guys might be wrong and others might have better ideas, but not on an appeal to authority I'm afraid. 

Post edited at 16:54
 Gordon Stainforth 08 Apr 2020
In reply to Jon Stewart:

You mention Peter Singer. And all I can say is Umm, and umm and ummm... Again.

I'll leave this now. But let me assure you that I was trying to be helpful and positive. No other motive. The books I mentioned cover just about the whole range of possible arguments, and so are the opposite of prescriptive.

PS. When you read philosophy widely there is no 'authority'; all it does is blow the subject wide open, and it typically blows one's mind when you come across such a range of extraordinarily sharp thinking. These are people we can learn much from. Particularly when they're all saying different intelligent things.

Am going now. [Back to the biog]

 1poundSOCKS 08 Apr 2020
In reply to Jon Stewart:

> Can't beat a spot of war, famine and disease.

Not tried the war or famine yet. And I do hate being ill. Although I'm not convinced that's a robust defence. 

OP Jon Stewart 08 Apr 2020
In reply to Gordon Stainforth:

> You mention Peter Singer. And all I can say is Umm, and umm and ummm... Again.

First an appeal to authority, followed up by an ad hom! Great stuff

> I'll leave this now. But let me assure you that I was trying to be helpful and positive. No other motive. The books I mentioned cover just about the whole range of possible arguments, and so are the opposite of prescriptive.

I have no doubt that there's a whole world of rich and intelligent thinking in the whole of the history of philosophy. However, philosophy has to deal with the problem that as we learn more about the world through science, some questions which seemed to be philosophical later come within the purview of science.

I'm not going as far as some (Sam Harris in particular) to say that ethics is now within the purview of science, but I am saying that any ethical theory that does not account for evolutionary psychology isn't worth the paper it's written on. You can't have a theory of, say, the orbits of the planets while still believing that the world is flat. Once you've got a good *description* of what human morals *are*, which is what I think Jonathan Haidt achieved in The Righteous Mind, then you can consider ethical theories in the context that they must, as a matter of fact, work within.

Post edited at 17:31
 Gordon Stainforth 08 Apr 2020
In reply to Jon Stewart:

Final comment. I confess to the ad hom. Sorry.

But I've no idea about this 'appeal to authority'. There is none. Just an appeal to study, to use one's own brain, for hours and years, which means: rejecting authority. Yes, most of ethics can be accounted for by evolution, but not everything. That's the puzzle, the problem we're left with (it won't go away). Obvious examples occur with extreme bravery in war. Then there's the whole vast subject of 'virtues' ... and ... and ... and ...  So I won't even start. Please don't feel you have to answer this; in fact, please don't. As I said as my starting point, I just don't want to get drawn into discussing this because it's just so HUGE. It's a bit as if we were keen amateur scientists and said let's have a little ten-minute chat about quantum physics?

I'll have to confess, I was absolutely stunned yesterday to see that, in the midst of this harsh Coronavirus crisis, when we all need to be channelling all our thoughts into the immediate problems around us, trying as far as we can to help and alleviate the suffering, that someone on UKC was actually starting an abstract intellectual discussion about moral philosophy! I kind of blinked at it in sheer disbelief. 

Edited to change huge to Bold Caps. It's a pity I can't change the font size.

'Bye.

Post edited at 17:48
OP Jon Stewart 08 Apr 2020
In reply to Gordon Stainforth:

Gordon, it is not wise to discourage people from discussing ethics unless they are as learned as you. It is not for you to set the bar as to when one becomes qualified to discuss ethics.

Everybody on the planet has direct experience of morality and it is in everybody's interests that they each reflect on where their instincts come from and which they should trust and which they might want to question. It is NOT like doing amateur quantum physics, because that is to comment on something you cannot appreciate by examining your own experience in the world.

I will continue to discuss ethics on the internet, and you will simply have to ignore it, even though your deep study of moral philosophy has led you to believe that my actions are wrong.

OP Jon Stewart 08 Apr 2020
In reply to 1poundSOCKS:

> Not tried the war or famine yet. And I do hate being ill. Although I'm not convinced that's a robust defence. 

I think that it's a given that everyone's going to suffer a bit, because our loved ones will always die, we will always get ill, our relationships will end, etc. But I presume you'd rather spend less of your time, rather than more, having these types of experiences?

As Sam Harris puts it, all I'm asking you to agree with is that the worst possible misery for everyone is bad, and that if your actions result in consequences that take us further away from that state of the universe, rather than closer towards it, then you can justify your moral choices.

I agree that a life with no suffering at all would be rather bland - but don't worry, that's simply not on the table.

 krikoman 08 Apr 2020
In reply to Gordon Stainforth:

> I'll have to confess, I was absolutely stunned yesterday to see that, in the midst of this harsh Coronavirus crisis, when we all need to be channelling all our thoughts into the immediate problems around us, trying as far as we can to help and alleviate the suffering, that someone on UKC was actually starting an abstract intellectual discussion about moral philosophy! I kind of blinked at it in sheer disbelief. 

When should we have this?

People seem to be quite happy to bandy around accusations of virtue-signalling, while not knowing much about the circumstances of many of the cases. And WHY shouldn't people discuss anything at any time? Surely now is a good time to ponder the intricacies of human thought; most of us have a lot of time to think at the moment, so maybe this is the exact time we should be discussing moral philosophy.

It's a little bit like people telling others not to criticise the government or Boris, while we're fighting the virus.

Post edited at 18:39
 1poundSOCKS 08 Apr 2020
In reply to Jon Stewart:

> I think that it's a given that everyone's going to suffer a bit, because our loved ones will always die, we will always get ill, our relationships will end, etc. But I presume you'd rather spend less of your time, rather than more, having these types of experiences?

I think you're just focusing on certain suffering that none of us really want. But a certain amount of pain and hardship can provide a more fulfilling experience of life overall. Look at high altitude mountaineering for a start, not that I'm interested. 

And I think more attention should be paid to self determination. Although listening to the Tommy Caldwell piece linked above, a certain amount of imposed suffering isn't necessarily a bad thing either.

 Ratfeeder 08 Apr 2020
In reply to Gordon Stainforth:

> As I said as my starting point, I just don't want to get drawn into discussing this because it's just so HUGE. It's a bit as if we were keen amateur scientists and said let's have a little ten-minute chat about quantum physics?

> I'll have to confess, I was absolutely stunned yesterday to see that, in the midst of this harsh Coronavirus crisis, when we all need to be channelling all our thoughts into the immediate problems around us, trying as far as we can to help and alleviate the suffering, that someone on UKC was actually starting an abstract intellectual discussion about moral philosophy! I kind of blinked at it in sheer disbelief. 

I totally sympathise with your stance here. It is a huge and very difficult subject (though hopefully not quite as difficult as quantum physics!). Discussing the issues in an informed and meaningful way is a bit like playing three-dimensional chess (not that I've ever played that). A lot of us do have more time on our hands than usual, though, so maybe a thread like this is not such a bad idea?

I think Jon has made a very good effort in his opening post. It's not dryly academic and abstract, but gives a sincere response to the problem of how to decide what we ought to do. Whether or not one agrees with him that utilitarianism offers the best solution, it raises some very interesting and materially important issues which deserve thought and discussion. But to get involved in a way that does any justice to the depth and breadth of the subject is indeed a daunting prospect!

OP Jon Stewart 08 Apr 2020
In reply to Ratfeeder:

I found the moral prescription that "we all need to be channelling all our thoughts into the immediate problems around us" to be utterly nonsensical and hypocritical. And I "kind of blinked at it in sheer disbelief" especially that it should have come from someone with such wide reading in moral philosophy!

OP Jon Stewart 08 Apr 2020
In reply to 1poundSOCKS:

> But a certain amount of pain and hardship can provide a more fulfilling experience of life overall.

In which case, that kind of suffering is justified by leading to overall higher utility. I'm all for it.

> And I think more attention should be paid to self determination.

What's so good about self determination? I'd suggest that it's a relief from the suffering that results from having your personal freedom curtailed.

Post edited at 21:17
 1poundSOCKS 08 Apr 2020
In reply to Jon Stewart:

> What's so good about self determination? I'd suggest that it's a relief from the suffering that results from having your personal freedom curtailed.

The kind of suffering I was thinking as a positive usually comes from something we choose to do. And the kind of suffering you highlighted as undesirable is usually not chosen.

OP Jon Stewart 08 Apr 2020
In reply to 1poundSOCKS:

> The kind of suffering I was thinking as a positive usually comes from something we choose to do. And the kind of suffering you highlighted as undesirable is usually not chosen.

I don't think the delayed-gratification kind of suffering carries a lot of weight in the debate, as it's so soon outweighed. Just as the kind of superficial pleasure you might get from doing a line of cocaine isn't something you should put much effort into seeking out, the kind of suffering you get when a hanging belay is uncomfortable isn't something you need to put any effort into avoiding. Rather, we should be seeking the deeper pleasures, like feeling confident that our community is a safe place to raise children; and avoiding the deep suffering of unnecessary death and disease.

Post edited at 22:45
 1poundSOCKS 08 Apr 2020
In reply to Jon Stewart:

> I don't think the delayed-gratification kind of suffering carries a lot of weight in the debate, as it's so soon outweighed.

Now I'm forgetting what you originally said (good = minimal suffering?). I was only stating I didn't think that was universally true, or indeed that minimal suffering was ever an ideal state for a satisfying life. But now I'm repeating myself...

OP Jon Stewart 08 Apr 2020
In reply to 1poundSOCKS:

> Now I'm forgetting what you originally said (good = minimal suffering?). I was only stating I didn't think that was universally true, or indeed that minimal suffering was ever an ideal state for a satisfying life.

I agree with you that taken to its extreme, the idea of eliminating suffering entirely would throw a bit of baby out with the bathwater, as to have a satisfying life, there's got to be some rough with the smooth.

To be a bit more detailed about my utilitarian position:

I think that we should aim for the best balance of "pleasure" and "pain" for everyone, and I prioritise people over non-human animals. I think that "shallow" pleasures (like a cocaine blowjob) count for very little, whereas "deep" pleasures (like reflecting on the state of the human race and thinking that everything's going to be just fine) count for a lot. The same with suffering - a bit of physical pain you might endure, or the embarrassment you feel when someone points out you made a mistake don't count for much, but the deep suffering of losing loved ones is important.

The reason I put the emphasis on reducing suffering rather than increasing pleasure is simply that it's more efficient. You can shift the balance by a much greater degree by protecting a family from malaria with some mosquito nets (a classic example from Will MacAskill) than by, I dunno, putting that effort into helping out at the local primary school's sports day. Both are good, but one is a lot more effective at shifting that overall balance.

The emphasis on reducing suffering (rather than increasing pleasure) is a pragmatic thing, rather than something crucial to the ethical theory. 
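
That emphasis on efficiency is essentially a cost-effectiveness comparison. A toy version, with entirely made-up numbers, just to show the shape of the reasoning:

```python
# Toy cost-effectiveness comparison (all figures invented).
# "Utility shifted" is in whatever units you like; only the
# ratio per unit of effort matters when choosing between actions.

actions = {
    "mosquito nets": {"effort_hours": 10, "utility_shifted": 500},
    "sports day":    {"effort_hours": 10, "utility_shifted": 20},
}

for name, action in actions.items():
    rate = action["utility_shifted"] / action["effort_hours"]
    print(f"{name}: {rate:.0f} utility per hour of effort")

# Both are good; the first shifts the overall balance ~25x more
# per hour, which is the pragmatic case for prioritising it.
```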

Post edited at 23:51
 freeflyer 09 Apr 2020
In reply to Jon Stewart:

Good post! If you eliminate suffering, you indeed throw the baby out with the bathwater. We all suffer - it's the human way. But is it inevitably so?

I would argue that pain is not suffering; they are two separate things. And to be clear, we are talking about emotional pain here, not physical pain. They are two different words, but the meanings are often confused, in my opinion.

Pain is inevitable: those feelings that occur as part of our human life, when bad things happen. Also, you can inflict pain on yourself, deliberately, in pursuit of some future goal which will make you happy and, presumably, result in less pain, or so you hope; climbing El Cap with 9.5 fingers via blank rock.

Suffering is more tricky. There's a feeling of repetition, lack of control, pain that is inflicted on me without my say-so. I absolutely agree that reducing suffering is the key.

Looking at suffering in more detail, it seems that it amounts to my feelings about the pain that is inflicted on me. Who is in control of those feelings? I submit that *I* am in control of those feelings, even if my entire life has been taken up with the assumption that someone else is. I am the one causing my suffering.

So in my view the best utilitarian position is to aim to reduce my suffering, by understanding that I am the instigator of it.

 Ratfeeder 09 Apr 2020
In reply to Jon Stewart:

> If you make the requirement that moral realism must boil down to something that transcends or does not depend upon human desires, then utilitarianism is not moral realism. But who cares? I'm not bothered how a philosopher would classify the theory, I'm bothered about whether it's any good. Can I use this theory to justify moral choices that I believe in?

> I want to show that the way to reach good moral judgements is to employ reason, and that the passions (our evolved instincts) aren't to be trusted. That doesn't require a transcendence of human feelings. It requires us to discuss human feelings according to reason, that's all.

> As I said, so what. If it gives me an ethical theory that's founded in reason, then it means that we expect to reach consensus: when you analyse and debate the facts, different people will be forced to similar conclusions. We can all get at the same answers. Sounds to me like it's becoming objective, even though it doesn't meet your (pointless) bar of being rooted in something above and beyond what humans want.

> - "I don't want blacks in my shop, and should not have to tolerate them, because I want it that way"

>  -"Well I want it a different way, in which people have equal rights and we achieve higher overall utility"

> Are both viewpoints equally valid, or not?

> What your view says is that all you can say is "I don't like it". Only if you rule the world does "I don't like it" provide a way to change something. Wishy-washy anything-goes relativism allows you to say "I don't like it, but my reasons for not liking it are no more valid than yours for liking it". Which is exactly what you're saying.

I think your original post is excellent. What you are convincingly arguing against here is emotivism - otherwise known as the "boo-hurrah" theory of ethics - which is basically a systematic form of moral scepticism. Emotivism is based on Hume's idea that moral utterances ('should', 'ought') are not statements of belief (cognitive), but expressions of taste, preference, desire or emotion (non-cognitive). Note, and this is important, Hume didn't think that emotions or feelings have cognitive content. Subsequent emotivists, such as A.J. Ayer and C.L. Stevenson, inherit this assumption. So did Kant, and that was why, as a cognitivist, he thought that an action's being motivated by a feeling or emotion ruled it out as a moral act. Many later cognitivists, however, dispute the Humean assumption that emotions always lack cognitive content. In this view, a moral utterance can be both an expression of feeling or desire and a statement of belief (about the wrongness or rightness of an action). So an act motivated, say, by the desire to help someone is not ruled out as a moral act, because the desire itself is informed by the belief that helping said person is the right thing to do.

Now you are absolutely right that this Humean moral scepticism gives us no basis for determining whether one moral opinion or decision or act is any better (objectively) than another. Beliefs are either true or false, but desires or tastes or emotions can't be either. If a moral utterance expresses only such a non-cognitive state, it has no truth-value. So how do we get from this to being able to decide in any given circumstance what we ought to do, as opposed to simply what we feel like doing?

Well, many non-cognitivists propose utilitarianism for this very purpose. R.M. Hare's prescriptivism is perhaps the most well known example of such a theory. Hare claims that happiness consists of having one's preferences satisfied. So, one should always act in a way that maximizes or optimizes preference satisfaction. Where preferences conflict, it will be a question of compromising so that the fewest preferences remain unsatisfied. Note, all preferences must be treated equally. There can be no judgement that one person's preference is any 'better' or 'worse' than another. So Myra Hindley's preference for murdering children must be given equal consideration to Alexander Fleming's preference for developing an antibiotic.
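
To see what treating all preferences equally commits you to, here is a minimal sketch of that kind of preference utilitarianism as bare counting; the people and preferences are invented, and the point is that nothing in the procedure lets you down-weight an odious preference:

```python
# Preference utilitarianism as bare counting (illustrative).
# Every preference counts for one, whatever its content -
# which is exactly the objection raised above.

preferences = {
    "person_1": "open_the_clinic",
    "person_2": "open_the_clinic",
    "person_3": "close_the_clinic",
}

def satisfied(action: str) -> int:
    """How many people's preferences does this action satisfy?"""
    return sum(1 for want in preferences.values() if want == action)

candidates = ["open_the_clinic", "close_the_clinic"]
best = max(candidates, key=satisfied)
print(best, satisfied(best))  # open_the_clinic 2
```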

I suggest that this is not the kind of utilitarianism that you are looking for. It fails to satisfy your fundamental objection to emotivism - that it provides no objective method of distinguishing right from wrong. So I think it makes a material difference whether, as a utilitarian, you are a cognitivist or a non-cognivist - a realist or an anti-realist. A cognitivist would say that judging Alexander Fleming's actions to be better than Myra Hindley's is a matter of moral belief, not merely of preference or taste that has no truth-value. But a cognitivist who is a consequentialist would be forced to say that the only reason Fleming's actions are better than Hindley's is that the former resulted in a better outcome over all (in whatever terms 'better' is to be couched).

I'll have to leave it there for now, but it's a start.

 1poundSOCKS 09 Apr 2020
In reply to freeflyer:

> I would argue that pain is not suffering; they are two separate things

The dictionary definition of suffering includes pain. Although dictionary definitions do vary.

 freeflyer 09 Apr 2020
In reply to Jon Stewart:

> It doesn't strike me at first glance that Ignatius of Loyola is providing a theory that could be used to structure society and its institutions - but maybe that's not what he was trying to do?

Yes I think that's right - the rules of discernment were intended to provide the seekers with a solid basis for making firm moral decisions about their personal beliefs and actions. With respect to society and institutions, as the Society of Jesus was/is a strictly hierarchical authoritarian institution, they took a vow of obedience, and did what the Pope said! Not a viewpoint that would appeal to most posters on here, including me.

However I like the way that he attempts to disentangle feelings and beliefs; I find it helpful to consider that my beliefs may be driven by my emotions.

> what if I'm unhappy because some f*cker just kicked me in the balls for apparently no reason? That's not something I want to be in harmony with, nor to adjust my expectations and attitudes towards. 

While you are doubled up, you are definitely feeling pain, and your unhappiness might come from your belief that it shouldn't have happened. Once you have kicked the said f*cker in the balls in retaliation, you may feel happiness, or at least contentment, but also a certain amount of guilt. Unhappiness and happiness, no matter how reasonable, are what suffering is. Of course, you may choose to suffer.

At this point I should apologise and admit that I am straying from the Jesuit path into more Buddhist territory. But then, I'm an eclectic, so that is what you get!

 freeflyer 09 Apr 2020
In reply to 1poundSOCKS:

> > I would argue that pain is not suffering; they are two separate things

> The dictionary definition of suffering includes pain. Although dictionary definitions do vary.

Yes, good point; suffering may include or be caused by pain. I was trying to make a distinction, and say that although suffering could be a consequence of pain, the suffering part is voluntary, because you have a choice about how to frame your experience, as Tommy Caldwell put it.

 freeflyer 09 Apr 2020
In reply to Ratfeeder:

> this is important, Hume didn't think that emotions or feelings have cognitive content.

In the case of President Trump, I'm with Hume on this.

Thanks for your great post.

 Coel Hellier 09 Apr 2020
In reply to Jon Stewart:

Hi Jon, I rather overlooked this (and was out in the sun a lot the last couple of days -- oops, shouldn't admit that).

> Fair point! But, what I'm getting at here is that if presented with a moral dilemma, I can't tell you what Hume would say about it.

That's because he's talking about meta-ethics (the nature and status of morality) not applied ethics (someone's own feelings on whether particular conduct should be approved of or deplored). 

> He'll tell me that morality is ultimately subjective (I can't get an ought from an is), I can't reason my way to an answer, I'll need to consult my feelings, and he'll suggest that I might feel that the action with the highest utility seems like the right thing to do.

He's right, you might feel that way!

> But what if it doesn't? What if I just feel it's better to keep black people out of my shop? I think black people are dangerous and might harm me, my customers would prefer it if I had a "no blacks" sign up, I'd do better business, and I feel that it's the right thing to do.

If you're asking: Is there some objective moral arbiter, that could come along and say: "Nope, you're wrong", then no there isn't. 

Of course other people might express their subjective dislike of your stance, and want you to change it, but there is no objective referee to tell you who is more in the right.  Indeed, there is no meaningful concept of being objectively morally right, there is only what people like or dislike.  

> Reason/utilitarianism/moral realism would tell me that I haven't got any good reasons to ...

Which is fine, but the only moral standing that such an analysis would have would be your advocacy, which then reverts to simply being the above --  a subjective dislike of something and a subjective advocacy of something else, based on your own feelings and desires.

> So according to you and Hume, what ought I do, and why?

According to me and Hume, there is no such thing as "what I ought to do" in the abstract. The only "oughts" that exist are instrumental; that is, of the form: "If you want to attain X then you ought to do Y", which can be re-phrased "doing Y would attain X", which is a factual statement, and thus is a statement that can have a truth value.
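A sketch of that schema in symbols (my notation, not Hume's; "Do" and "Attain" are invented predicates, purely for illustration):

$$\text{Ought}(Y \mid \text{goal } X) \;\equiv\; \big(\mathrm{Do}(Y) \Rightarrow \mathrm{Attain}(X)\big)$$

The right-hand side is an ordinary factual claim, which is what makes the whole thing truth-apt.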

But an abstract statement: "You ought to do Y" is not meaningful. It lacks the goal or aim that the doing Y would attain. 

There are no abstract oughts (existing and holding independently of our aims, desires and goals).   That's the central message of Hume.    It is hugely counter-intuitive, which is why, centuries after Hume, most people intuitively reject the idea, and why philosophers still debate it. 

It's also the only conclusion that makes sense from an  evolutionary perspective.  Why on earth would there be an objective, independent moral arbiter that tells one person he is more morally right than another?

What does an abstract, objective ought ("You ought to do X") even mean?  Why ought I do X? "Because it's morally right!".  What does "morally right" mean?  It means you ought to do it!  Why ought I do it?  This just goes round in circles.  There are no abstract oughts, there are only instrumental oughts deriving from a human goal or desire.

 Coel Hellier 09 Apr 2020
In reply to Jon Stewart:

> That's the whole challenge to utilitarianism - it fails to chime with our evolved instincts!

OK, I meant that it gives a *partial* description of how people think about morals.  Other philosophical analyses (virtue ethics, deontology) also give partial descriptions of human psychology about ethics.   Actual human psychology is a complex mixture of those (and not necessarily a consistent one, since nothing requires that human psychology be consistent).

> The problems we have in the world of conflict and exploitation stem from the fact that our evolved behaviour doesn't deliver good outcomes - we hoard resources while others starve, we kill each other so our in-group can rule the patch of land, etc, etc. 

Well, those outcomes are good for some people -- not so good for others, of course.

>> I actually think that it does.  Once you've described what human morality is, where it came from and why we have it, what else is there in meta-ethics left to do?

> Errm, give us a basis for deciding right from wrong, perhaps?

First give some basis for supposing that there is such a thing as morally right and morally wrong in the objective sense (not just what people like or dislike). The point of the evolutionary perspective is that it tells us that there will be no such thing (or, more precisely, that there is no reason to suppose that there is any such thing, at which point Occam excises it).

> You've said that you agree with Hume that you can't get an ought from an is. And given no alternative as to how you get an ought, ...

There are no objective oughts! (Indeed the very idea of such is nonsensical, in that no sensible account of the idea can be made.)  That's why I've given no prescription for arriving at them. 

> So you've answered one metaethical question (moral realism or anti-realism) and then left it there, ...

Yes, solving meta-ethics is enough for one day! 

> ... which is a position of wishy-washy anything-goes relativism.

It's not.  Nothing in the above prevents us having very strong opinions and feelings about how we want things to be, and then advocating and campaigning for those things.    All the above says is that this does all rest on human opinions and feelings.

> You've got no framework to distinguish right from wrong, ...

Far worse than that, I'm denying that the idea of objective moral right versus objective moral wrong is even meaningful!

>  ... just a load of "is" statements about human behaviour, and an endorsement that if you feel it's right, just go ahead.

No!  I am not giving any endorsement or permission or warrant for "just going ahead".     Whether I want them to "go ahead" would depend on my subjective feelings about how I want society to be.  I may not want them to "just go ahead". 

That, of course, is not saying that such would be "objectively wrong", but it does say that they do not have my endorsement.

 Coel Hellier 09 Apr 2020
In reply to Jon Stewart:

> If you make the requirement that moral realism must boil down to something that transcends or does not depend upon human desires, then utilitarianism is not moral realism. But who cares? I'm not bothered how a philosopher would classify the theory, I'm bothered about whether it's any good. Can I use this theory to justify moral choices that I believe in?

That's revealing.  At root are the "moral choices that you believe in".   So, at root, it comes down to: "I want to live in a society that ...".

But, you then want to "justify" that stance.  You think it would be more convincing and more binding if it were not just your preference, but was backed by an objective warrant. 

The traditional thing to do at that point is to invoke a God (constructed in your own image) that backs up your opinion and (surprise, surprise) mandates whatever it is you already wanted.  But that tactic has gone out of fashion. 

So the more-modern thing to do is to invoke a philosophical scheme such as utilitarianism -- constructed out of your own moral feelings and ideas -- that, in the end, backs you up and mandates the "moral choices that I believe in".

> I want to show that the way to reach good moral judgements is to employ reason, and that the passions (our evolved instincts) aren't to be trusted.

But you can't.  You can't construct a moral scheme out of reason alone.  You can only construct a moral scheme out of the "moral choices that I believe in".  That's because you can't get an "ought" out of an "is". 

The only oughts are instrumental ones, deriving from our aims, goals and desires.  And human aims, goals and desires are part of our nature. You can't arrive at them from reason alone. 

> If it gives me an ethical theory that's founded in reason, ...

Which is what I deny is possible. 

> ... then it means that we expect to reach consensus: when you analyse and debate the facts, different people will be forced to similar conclusions.

But that won't happen since people are different and have different goals, desires and values.  Those are primary here, not reason. 

> Sounds to me like it's becoming objective, even though it doesn't meet your (pointless) bar of being rooted in something above and beyond what humans want.

Yes, I agree, if you could do that (found an ethical theory in reason) then it would give you an objective morality.  I don't think you can.  I think you'd need axioms that derive, not from reason, but from your human values.   Reason itself is a-moral.

 Coel Hellier 09 Apr 2020
In reply to Jon Stewart:

> I can't make sense of this. If your reasons for moral prescriptions can't go beyond "because I want it that way", then that is the definition of "wishy-washy anything-goes relativist".

It depends what you mean by "anything goes".  Is that descriptive?  Yes, as a description of how things are, "anything goes" is true.   The only thing stopping someone behaving badly is other people, and those other people act based on their own feelings and preferences. 

What stops someone being a mass murderer?    If you're about to pull a trigger, do the heavens open, does a voice boom out, and does a hand reach down to stop you?   No.  Other people might try to stop you, based on what they want to happen.

Or, by "anything goes" are you talking prescriptively?  Are you suggesting that my scheme gives permission and warrant for acting however you like?  No, it doesn't. Absolutely nothing in my scheme says any such thing.

Indeed, the whole point is there can be neither objective moral approval nor objective moral disapproval (since those concepts make no sense; they amount to a value judgement without anyone doing the valuing).  

So nothing at all in my scheme gives people endorsement or approval for acting however they wish.  And as for me personally giving approval or disapproval, well that's up to me based on my values and feelings (it is not a matter of meta-ethics, which is what my scheme is about). 

> - "I don't want blacks in my shop, and should not have to tolerate them, because I want it that way"

>  -"Well I want it a different way, in which people have equal rights and we achieve higher overall utility"

> Are both viewpoints equally valid, or not?

Answer: the question is ill-posed.  Something cannot be morally "valid" in an abstract and objective sense.  

> What your view says is that all you can say is "I don't like it".

Correct.

> Only if you rule the world does "I don't like it" provide a way to change something.

No, not true.  Martin Luther King, suffragettes, civil-rights activists, gay-rights activists, Greta Thunberg, et cetera.  Did any of them rule the world? No.  Did they say "I don't like it"? Yes.  Can this persuade people and change things? Why, yes, it can! 

In reply to Coel Hellier:

Thanks all, I've found this thread really interesting, and yes I do have more time to ponder these things at the moment! My previous thoughts and encounters with ethics have not gone beyond discussing with family members who claim to get their morality from a big beardy guy in the sky, or a book inspired by said guy in the sky. As a result I didn't need to think very hard to challenge their views, and they now don't want to discuss such topics with me!

Coel's thoughts on the lack of an objective moral arbiter make a lot of sense; the only possible arbiter can be another human.

Someone up thread mentioned Sam Harris and his view that there is no moral discussion that can't be improved with the use of science, and that just using philosophy is insufficient. I'm drawn to this opinion: without using reason and evidence, any stance on a topic is on shaky ground imo. Just because you claim you've thought about it for a really long time doesn't mean you have the right answer, that's how religion works. I'd be interested if anyone has any examples that refute this position. Nothing springs to mind, but I've only been thinking about it for 30 minutes and clearly already have my own internal biases!

As with most things when you can't find the answer to a problem I find it's all about the definitions. In beauty pageant language I want to make the world a better place. The tough bit is agreeing what better looks like, until that's nailed down there's no point looking for ways of making it better. I'm not saying we need to solve the big question once before doing anything, but for any given problem decide what the objective is first.

For example, let's say we want to improve the lives of people where there is insufficient food. One option is to take sacks of food and dish it out. If we are more specific with the problem and phrase it as "we want to ensure the people in this region have continued access to sufficient food", then more sustainable solutions such as education, birth control and irrigation come into view.

 Ratfeeder 09 Apr 2020
In reply to Coel Hellier:

> According to me and Hume, there is no such thing as "what I ought to do" in the abstract. The only "oughts" that exist are instrumental; that is, of the form: "If you want to attain X then you ought to do Y", which can be re-phrased "doing Y would attain X", which is a factual statement, and thus is a statement that can have a truth value.

> But an abstract statement: "You ought to do Y" is not meaningful. It lacks the goal or aim that the doing Y would attain. 

> There are no abstract oughts (existing and holding independently of our aims, desires and goals).   That's the central message of Hume.    It is hugely counter-intuitive, which is why, centuries after Hume, most people intuitively reject the idea, and why philosophers still debate it. 

> It's also the only conclusion that makes sense from an  evolutionary perspective.  Why on earth would there be an objective, independent moral arbiter that tells one person he is more morally right than another?

> What does an abstract, objective ought ("You ought to do X") even mean?  Why ought I do X? "Because it's morally right!".  What does "morally right" mean?  It means you ought to do it!  Why ought I do it?  This just goes round in circles.  There are no abstract oughts, there are only instrumental oughts deriving from a human goal or desire.

Are moral requirements hypothetical imperatives? Philippa Foot says yes; John McDowell says no. If you fully subscribe to Hume's account of moral motivation, then you'll answer yes. According to Hume, the state of being motivated to act needs two separate elements - a desire, and a belief (about how the desire might be satisfied). The desire has a world-to-agent direction of fit; the belief has an agent-to-world direction of fit. All the cognitive content of this motivating state is exhausted by the 'how'; none is left for the 'what'. What is desired is not a question of rational deliberation. We can't help what we desire; our desires just emerge unconsciously, demanding to be satisfied. Darwin's work suggests an explanation of why these desires emerge; Freud develops that explanation into a psychological theory.

Kant also goes along with Hume's account up to a point. He accepts that, commonly, our motivation to act is conditional upon our desires. Hence, the rationale of the average action is captured by a hypothetical imperative. But he thinks that moral motivation is a special case. What distinguishes an act as a moral act is precisely that it is not motivated by a desire. Instead, it is motivated purely by a sense of duty - the duty to act according to the 'moral law', which he formulates, in various ways, as 'the categorical imperative'. This is Kant's attempt to put moral motivation on a rational footing - to insist that the cognitive content of a morally motivating state applies at least as much to the 'what' as to the 'how'.

Kant's account is, of course, flawed. Not least of its shortcomings is its extraordinarily austere view of moral rationality, and its expectation that human beings could be motivated by it. No desires allowed. But that is partly Hume's fault, because Kant has inherited the Humean assumption that desires in themselves are non-cognitive - that a desire for something, or for something to be the case, cannot be the result of rational deliberation. But what if it can? What if the desire to live in a world without cruelty, without bigotry, where social justice and fairness prevails, where people are sensitive to each other's needs and as free as those needs allow us to be, is the result not of some blind instinct welling up from our evolutionary prehistory, but of thought and consideration, of imagination and compassion, of reading books and of conversations with friends? Then we have an entirely different basis for moral motivation, neither Humean nor Kantian, yet both Humean and Kantian.

In reply to Ratfeeder:

My goodness, what a brilliant summary of Kant and his 'categorical imperative'.

 Coel Hellier 09 Apr 2020
In reply to Ratfeeder:

> But that is partly Hume's fault, because Kant has inherited the Humean assumption that desires in themselves are non-cognitive - that a desire for something, or for something to be the case, cannot be the result of rational deliberation. But what if it can?

A desire for something can itself be instrumental, deriving from another desire.   Thus: "I desire A, because more fundamentally I desire B, and reason tells me that A will get me B".  (But reason alone can never get you to a desire; you always need to add in a desire, and that cannot come from logic or a-moral reason.)

Additionally, reason and facts can heavily influence our psychology and our desires.    To give an example: "I care about the suffering of conscious creatures. But I don't care about boiling lobsters alive because I don't think they are conscious and don't think they suffer".    Factual and reason-based argument that changed the "I don't think they suffer" would then change the "I don't care about boiling lobsters alive".  

Human psychology is complex, in that our feelings and desires are influenced by all sorts of things (including facts and reason).  But I still don't think there is any way that facts and reason can themselves lead to a desire, they can only do it by levering some other pre-existing desire. 
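The lobster example can be put as a toy sketch (in Python; the variable names and the and-gate are my invention, purely illustrative): the standing value is held fixed, a reason-based argument flips only the factual belief, and the practical conclusion flips with it.

    # Toy model of the lobster example: concern = value applied through a belief.
    cares_about_suffering = True        # pre-existing desire/value
    believes_lobsters_suffer = False    # factual belief, open to argument

    def objects_to_boiling(value, belief):
        # The value only engages if the factual belief holds.
        return value and belief

    print(objects_to_boiling(cares_about_suffering, believes_lobsters_suffer))  # False

    # A factual, reason-based argument changes only the "is" premise ...
    believes_lobsters_suffer = True
    # ... and the practical conclusion flips, with the underlying value untouched.
    print(objects_to_boiling(cares_about_suffering, believes_lobsters_suffer))  # True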

> What if the desire to live in a world without cruelty, without bigotry, where social justice and fairness prevails, where people are sensitive to each other's needs and as free as those needs allow us to be, is the result not of some blind instinct welling up from our evolutionary prehistory, but of thought and consideration, of imagination and compassion, of reading books and of conversations with friends?

Those two things are not either/or, they are "both". 

The "desire to live in a world without cruelty, without bigotry, where social justice and fairness prevails" derives from "thought and consideration, of imagination and compassion, of reading books and of conversations with friends", acting on a brain full of values and feelings and desires, including empathy and compassion and all sorts of similar things that are part of our nature as a result -- ultimately -- of our evolutionary heritage.

Without that starting point, the fact that the "thought and consideration" acts on a brain that is already full of values and feelings, the "thought and consideration" would itself get nowhere.

In reply to Jon Stewart:

> You've said that you agree with Hume that you can't get an ought from an is. And given no alternative as to how you get an ought, other than "because of feelings". So you've answered one metaethical question (moral realism or anti-realism) and then left it there, which is a position of wishy-washy anything-goes relativism. You've got no framework to distinguish right from wrong, just a load of "is" statements about human behaviour, and an endorsement that if you feel it's right, just go ahead.

> If you make the requirement that moral realism must boil down to something that transcends or does not depend upon human desires, then utilitarianism is not moral realism. But who cares? I'm not bothered how a philosopher would classify the theory, I'm bothered about whether it's any good. Can I use this theory to justify moral choices that I believe in?

> I want to show that the way to reach good moral judgements is to employ reason, and that the passions (our evolved instincts) aren't to be trusted. That doesn't require a transcendence of human feelings. It requires us to discuss human feelings according to reason, that's all.

> As I said, so what. If it gives me an ethical theory that's founded in reason, then it means that we expect to reach consensus: when you analyse and debate the facts, different people will be forced to similar conclusions. We can all get at the same answers. Sounds to me like it's becoming objective, even though it doesn't meet your (pointless) bar of being rooted in something above and beyond what humans want.

I think you could probably get to your ethical theory that's founded in reason as a framework to distinguish right from wrong if you understand that the choices that cause harm and unhappiness - that minimise utility - are rooted in the ego. A person's ego causes them to act with cruelty, bigotry and unfairness, without sensitivity to each other's needs and freedoms.

A utilitarian framework is quantitative. Instead of just attempting to quantify the net unhappiness inflicted on people, you could maybe add to that an attempt to quantify the selflessness of the choice.

Moral realism then does depend on human desires but the moral utilitarian calculation would have to factor in the selflessness of the choice as well as the overall utility.
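A minimal sketch of what that augmented calculation might look like (the scoring function, the weight, and all the numbers are invented assumptions, not a worked-out theory):

    # Hypothetical utilitarian scoring with an added selflessness term.
    def moral_score(utility, selflessness, weight=0.5):
        # Net utility plus a weighted credit for how selfless the choice is;
        # the weight is an arbitrary illustrative choice.
        return utility + weight * selflessness

    options = {
        "keep the sign up":       {"utility": -10, "selflessness": -5},
        "serve everyone equally": {"utility":   8, "selflessness":  3},
    }

    best = max(options, key=lambda name: moral_score(**options[name]))
    print(best)  # -> "serve everyone equally", under these made-up numbers

The hard part, of course, is where the numbers come from - which is the whole debate above.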

 Ratfeeder 10 Apr 2020
In reply to Gordon Stainforth:

> My goodness, what a brilliant summary of Kant and his 'categorical imperative'.


Thanks Gordon!

 TobyA 10 Apr 2020
In reply to Jon Stewart:

Have you studied philosophy formally Jon? At A level or university? Or read the people whose ideas we're discussing directly? Actually, a keen amateur is probably more likely to do the latter than the uni student, who will probably just read the copied extracts their tutor says are obligatory before revising from "Aristotle for Dummies" or the equivalent.   I do wonder if your perspective comes from your interest in evolutionary psychology - so your thoughts to Gordon, that ethics HAS to be based on evolutionary psychology, are a reflection of that. Whilst his "don't talk about this until you've read as many books as I have" obviously reflects his position!

 TobyA 10 Apr 2020
In reply to Jon Stewart:

> I think that we should aim for the best balance of "pleasure" and "pain" for everyone, and I prioritise people over non-human animals. I think that "shallow" pleasures (like a cocaine blowjob) count for very little, whereas "deep" pleasures (like reflecting on the state of the human race and thinking that everything's going to be just fine) count for a lot. 

Surely this is just Bentham and Mill's argument over "higher and lower pleasures"? Push pin or poetry (although by the sounds of it a cocaine blowjob might give poetry more of a run for its money than skittles...)?

 Coel Hellier 10 Apr 2020
In reply to TobyA:

> I do wonder if your perspective comes from your interest in evolutionary psychology - so your thoughts to Gordon, that ethics HAS to be based on evolutionary psychology, are a reflection of that. Whilst his "don't talk about this until you've read as many books as I have" obviously reflects his position!

My position certainly comes from the evolutionary perspective.    Gordon very much comes to it from the philosophical-literature perspective.

I suggest that his: "it's the only subject I know that gets ever more difficult the more you study it" illustrates that the philosophical-literature approach is the wrong approach, and is getting nowhere.  Academic philosophers are split down the middle on moral realism versus anti-realism, and that sort of impasse, after a couple of thousand years of philosophical enquiry, suggests that their whole approach is wrong. 

"Nothing in biology makes sense except in the light of evolution" -- Dobzhansky. Thus morality (which is very much a biological phenomenon, being a manifestation of social living in animals that have evolved to live socially) only makes sense from the evolutionary perspective.    From that perspective, meta-ethics is straightforward.  Darwin solved it in Descent of Man.

 Ratfeeder 10 Apr 2020
In reply to Coel Hellier:

> A desire for something can itself be instrumental, deriving from another desire.   Thus: "I desire A, because more fundamentally I desire B, and reason tells me that A will get me B".  (But reason alone can never get you to a desire; you always need to add in a desire, and that cannot come from logic or a-moral reason.)

> Additionally, reason and facts can heavily influence our psychology and our desires.    To give an example: "I care about the suffering of conscious creatures. But I don't care about boiling lobsters alive because I don't think they are conscious and don't think they suffer".    Factual and reason-based argument that changed the "I don't think they suffer" would then change the "I don't care about boiling lobsters alive".  

> Human psychology is complex, in that our feelings and desires are influenced by all sorts of things (including facts and reason).  But I still don't think there is any way that facts and reason can themselves lead to a desire, they can only do it by levering some other pre-existing desire. 

> Those two things are not either/or, they are "both". 

> The "desire to live in a world without cruelty, without bigotry, where social justice and fairness prevails" derives from "thought and consideration, of imagination and compassion, of reading books and of conversations with friends", acting on a brain full of values and feelings and desires, including empathy and compassion and all sorts of similar things that are part of our nature as a result -- ultimately -- of our evolutionary heritage.

> Without that starting point, the fact that the "thought and consideration" acts on a brain that is already full of values and feelings, the "thought and consideration" would itself get nowhere.

I think we're getting to the nub of the issue here - we can begin to address your question of how it can even make any sense to think of categorical imperatives (X ought to do Y) as having an objective basis.

I'll get back to your comments about the role of desires in motivation in a minute. Firstly I want to make it clear that I'm not in any way denying the evolutionary origins of our psychology. What I am saying, though, is that this doesn't necessarily imply a specifically Humean account of morality. Evolutionary biology is amazingly compatible with Kantian metaphysics. To realise that the way the world is comprehensible to us is determined and limited by the way our brains and senses have evolved is precisely Kant's ground-breaking insight - his 'Copernican revolution'. And this is a significant advance on the crude empiricism of Locke, Berkeley and Hume. It certainly explains why Kant's most immediate and worthy follower, Schopenhauer, was able to so remarkably anticipate both Darwin and Freud.

Hume was of course a genius and took empiricism to its ultimate, logical conclusion, but, as Bertrand Russell says:

'by making [empiricism] self-consistent [he] made it incredible'. Russell goes on to say that 'To refute [Hume] has been, ever since he wrote, a favourite pastime among metaphysicians. For my part, I find none of their refutations convincing; nevertheless, I cannot but hope that something less sceptical than Hume's system may be discoverable.'

Russell was rather too dismissive of Kant. Unlike his protégé, Wittgenstein, he was never quite able to shake off the empiricist assumptions that prevented him from embracing the paradigm shift represented by Kantian metaphysics. Empirical reality - the world that is possible for us to know - is not the same as reality in itself, which lies beyond the capacity of our given cognitive apparatus to apprehend. In other words, objective knowledge cannot be absolutely objective in the sense of being independent of our physical and cognitive capacities, but is nevertheless possible within those constraints - within empirical reality. That is what knowledge means for us.

What moral realists, and particularly pure cognitivists, want to suggest is that moral judgements can have objectivity in empirical reality (as opposed to reality in itself). Just because moral properties are not properties of reality in itself, doesn't mean they are not properties of the world as we experience it. Similarly, colours are properties of the world as we experience it, but are not independent of our sensory capacities. We can have beliefs about the colours of things, which have genuine truth-values, but we are still talking about properties which are in an important sense relative to us. So that is how we can understand the objectivity of a moral judgement. A moral belief can be either true or false in that sense.

So let's now return to the issue of moral motivation and some of your comments about the way desires come into that. You say that reason and facts can heavily influence our psychology and desires. I would entirely agree with that. But then you say that reason alone can never get you to a desire; that facts and reason can only do so by 'levering some other pre-existing desire'. Yes, that is very Humean. You seem to think that our pre-existing desires are somehow immutable.

Take your example of the lobster. You care about the suffering of conscious creatures, so you have the desire to avoid causing conscious creatures to suffer. But you don't believe lobsters are conscious, so you have no desire to avoid boiling them alive (but you would have that desire if you did have that belief). Hence, a cognitive state has influenced your desires, but only because you had a pre-existing desire in the first place. Ok, but where has that pre-existing desire come from? Why do you care about the suffering of conscious creatures? There are plenty of people who don't. If you didn't have the desire to avoid causing conscious creatures to suffer, would it be impossible for you to develop that desire, quite independently of any other desires you may have?

I'd say that a big influence on our desires is not just reason, but experience. Someone who doesn't care about the welfare of sentient creatures may well start to care when, for example, they witness the suffering caused to pigs by intensive farming practices in America. Direct experience of this sort can have a profound effect on our attitudes and values. This is exactly the sort of thing the moral realist wants to get a handle on. The realist would like to say that your sense of the wrongness of this situation amounts to the apprehension of a moral fact. The treatment of sentient creatures in this way is in fact morally wrong. This apprehension comes from your immediate experience, not from some pre-existing desire (since no such desire existed prior to this experience).

Cognitivists therefore offer a different account of moral motivation from Hume's, while still giving some credence to his belief-desire theory of motivation. Instead of insisting that there must always be a motivating desire, they propose, for example, that desires can be motivated by beliefs. Best of all, in my view, is the pure ascription theory, where desires are consequentially ascribed to a cognitive motivating state. What does the motivating here is the gap between two representations, one of the way things are and the other of how things could be. The desire that the second representation should replace the first is ascribed to an agent as a consequence of his belief that it should. So it is really the belief that motivates the intention to act, but a desire is ascribed to the intention when it leads to action (the action is evidence of the desire). This gives a purely cognitive motivating state which can be described as either a belief or a desire. A whole realist theory of ethics can be built on this foundation.

 Coel Hellier 10 Apr 2020
In reply to Ratfeeder:

> Hume was of course a genius and took empiricism to its ultimate, logical conclusion, but, as Bertrand Russell says: ...

I do agree that Hume needs to be updated to a post-Darwinian world.  (And yes, Kant may have got much of the way there prior to Darwin.)

> What moral realists, and particularly pure cognitivists, want to suggest is that moral judgements can have objectivity in empirical reality (as opposed to reality in itself).

OK, let's pursue this idea ...

> Just because moral properties are not properties of reality in itself, doesn't mean they are not properties of the world as we experience it. Similarly, colours are properties of the world as we experience it, but are not independent of our sensory capacities.

I'm fine with this concept.   Tom's subjective brain states are real, since brains are real and the states they are in are real. So Tom may indeed really be experiencing "red" when looking at a fire engine, and may be enjoying a chocolate ice cream.  There can be truth-apt statements about those subjective brain states.

>  So that is how we can understand the objectivity of a moral judgement. A moral belief can be either true or false in that sense.

Can you expound this step a bit more?    The statements "Tom is seeing red" and "Tom likes ice cream" are truth apt.  The statements "Tom deplores murder" and "Tom feels a physical revulsion to murder" are also truth apt.  All of the previous statements are ones about Tom's subjective brain states.    So far, ok. 

As I've explained above, "moral" talk is (to me) reports of our subjective values.   So, to me, Tom declaring "Murder is immoral" is akin to Tom declaring "fire engines look red" or "ice cream is delicious".  He is reporting his brain states.  So, in the same way that Tom declaring "ice cream is delicious" maps to "Tom likes ice cream", similarly, Tom declaring "murder is immoral" maps to "Tom deplores murder".

But I don't see that this gives you an "objective moral judgement".   The moral judgement (the deploring) is still being made by Tom.  Moral language is a report of values, and somebody needs to be doing the valuing.  Tom's declaration "murder is immoral" is no more an objective one than his declaration "ice cream is delicious".

Both of those are reports of Tom's brain state.  Of course it could be true that Tom's brain really was in that state, and one can make truth-apt statements about the state of Tom's brain.  But the "moral judgement" is still a property of Tom's brain and thus a subjective one and not an objective one. 

 Coel Hellier 10 Apr 2020
In reply to Ratfeeder:

> So let's now return to the issue of moral motivation and some of your comments about the way desires come in to that. You say that reason and facts can heavily influence our psychology and desires. I would entirely agree with that.

OK.

> But then you say that reason alone can never get you to a desire; that facts and reason can only do so by 'levering some other pre-existing desire'. Yes, that is very Humean.

Good!

> You seem to think that our pre-existing desires are somehow immutable.

No, not necessarily.     I maintain that facts, reason, experience can indeed influence and change our feelings and values, but only by acting on a brain that already is full of feelings and values. 

But this does not require that some of the values be "primary" or immutable.  One can conceive of:

{Facts, reason, experience} acting on {value A} leading to {value B}, but then {facts, reason, experience} acting on {value B} leading to rejecting {value A} and adopting {value not-A}.

The point is that human brains are neural networks, and no part of a neural network is primary or immutable.  All parts of the network can be changed under the influence of the rest of the neural network. 
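As a loose illustration of the no-part-is-immutable point, here is a two-weight toy network (entirely my own sketch, not a model of a brain): a single error signal updates every parameter, including the "earlier" one.

    import numpy as np

    # Toy two-weight network: y = w2 * tanh(w1 * x).
    rng = np.random.default_rng(0)
    w1, w2 = rng.normal(), rng.normal()
    x, target = 1.0, 0.5

    for _ in range(500):
        h = np.tanh(w1 * x)
        err = w2 * h - target
        g2 = err * h                      # gradient w.r.t. w2
        g1 = err * w2 * (1 - h**2) * x    # gradient w.r.t. w1
        # Both weights move; neither is primary or immutable.
        w2 -= 0.1 * g2
        w1 -= 0.1 * g1

    print(w2 * np.tanh(w1 * x))  # should now be close to 0.5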

> Someone who doesn't care about the welfare of sentient creatures may well start to care when, for example, they witness the suffering caused to pigs by intensive farming practices in America. Direct experience of this sort can have a profound effect on our attitudes and values.

But -- I assert -- the experience would have no effect if it were not operating on a human that already had feelings and values in their neural-network brain, feelings that would be triggered by the experience. 

Thus, you could create a robot with any amount of factual knowledge and reasoning ability, and then expose that robot to any number of experiences.  But, that alone would never lead to the robot experiencing revulsion at the plight of the pigs.   Only if the robot already were a feeling, valuing entity, with capability to feel revulsion, would such feelings be aroused.  

Thus there is no logical path from facts or reason alone to feelings and values.  Hume was right on that.  Us arriving at moral judgements depends on us being social animals, programmed by evolution to feel pity or revulsion or sympathy or anger or all sorts of other emotions.   Though, yes, experience, facts and reason can strongly influence whether and how those emotions are triggered. 
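The robot point can also be sketched in code (all names invented; a caricature, not an AI claim): give an agent a factual world-model and an inference rule but an empty value table, and no preference over outcomes ever emerges.

    # An agent with facts and reasoning but no pre-existing values.
    facts = {"boiling": "causes tissue damage", "tissue damage": "causes pain"}

    def infer_consequences(action):
        # Chain through the factual model -- pure "is" statements.
        chain, key = [], action
        while key in facts:
            chain.append(facts[key])
            key = facts[key].removeprefix("causes ")  # Python 3.9+
        return chain

    values = {}  # nothing is liked or disliked

    def preference(action):
        # With an empty value table every action scores zero:
        # facts and inference alone never yield an "ought".
        return sum(values.get(c, 0) for c in infer_consequences(action))

    print(infer_consequences("boiling"))  # ['causes tissue damage', 'causes pain']
    print(preference("boiling"))          # 0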

So, as an aside, your above sentence might be better phrased: "Someone who doesn't care about the welfare of sentient creatures that are not suffering in front of him may well start to care when witnessing the suffering in person".

(In fact, most humans are like that, we may not care very much about a child starving to death thousands of miles away in Africa, but would care a lot if that same child were suffering in our living room.  Utilitarians don't like this aspect of our evolved natures.) 

> The realist would like to say that your sense of the wrongness of this situation amounts to the apprehension of a moral fact.

Whereas I would say the "sense of the wrongness of this situation" is simply a dislike, a revulsion.  Indeed, likely evolution took existing aesthetic sentiments (e.g. revulsion at putrid meat) and re-purposed the mechanism to produce revulsion at another sentient being's suffering. So moral sentiments are basically a sub-category of aesthetic sentiments.

> The treatment of sentient creatures in this way is in fact morally wrong.

But what does that mean?  I can understand what it means if the declaration that something "is morally wrong" is a declaration made by a human, and what they are saying is equivalent to them expressing a revulsion and a desire that it stop.   That's a value judgement, and value judgements require someone to make that judgement based on their feelings and values.  Necessarily, such value judgements are subjective (being properties of their brain state).

But, if that is not the correct interpretation, then what does "X ...  is in fact morally wrong" even mean?   Can you re-phrase it to elucidate what you mean by "morally wrong"?

> This apprehension comes from your immediate experience, not from some pre-existing desire (since no such desire existed prior to this experience).

So, could a robot, programmed with facts and reason (but having no pre-existing feelings, desires or values) also come to that apprehension?    

 Ratfeeder 11 Apr 2020
In reply to Coel Hellier:

> Can you expound this step a bit more?    The statements "Tom is seeing red" and "Tom likes ice cream" are truth apt.  The statements "Tom deplores murder" and "Tom feels a physical revulsion to murder" are also truth apt.  All of the previous statements are ones about Tom's subjective brain states.    So far, ok.

Well, "Tom is seeing red" is a statement about what Tom is experiencing at a particular time. If you translate that experience into the neuro-physiological terms of a brain state, then it is a statement about Tom's brain state. Either way, yes, such statements are either true or false. But I'm not talking about statements of this sort.

If Tom says "This fire engine is red", that is a statement about the colour of a fire engine. That too is either true or false. "This fire engine is red" could be true, while "Tom is seeing red" could at the same time be false (or vice-versa). They are two different statements with two different sets of truth conditions. My analogy concerned statements like "This fire engine is red", not "Tom is experiencing red". The point was that the former has a truth-value despite the fact that colours, as such (as qualia), exist only as sensory phenomena (they are mind-dependent). Colours are "part of the furniture of the world". Analogously, "Tom's confiscation of his mother's credit card was wrong" is a statement about the immorality of a particular action. That action had the property of being wrong (or not), just as a fire engine has the property of being red. Both statements, according to the cognitivist, have truth-values. Whether or not "Tom's confiscation of his mother's credit card was wrong" is true, depends on the circumstances surrounding it and the reasons that Tom had for doing it. The act is either justified or unjustified by the reasons and circumstances, which are all morally relevant considerations. For the moral realist, there is a fact of the matter to be discovered, despite moral properties being mind-dependent (in a way loosely analogous to colours).

> As I've explained above, "moral" talk is (to me) reports of our subjective values.   So, to me, Tom declaring "Murder is immoral" is akin to Tom declaring "fire engines look red" or "ice cream is delicious".  He is reporting his brain states.  So, in the same way that Tom declaring "ice cream is delicious" maps to "Tom likes ice cream", similarly, Tom declaring "murder is immoral" maps to "Tom deplores murder".

Not brain-states! That would require neuro-physiological terminology. In the case of "fire engines look red" Tom is describing the colour of fire engines, and in the case of "ice cream is delicious" he is expressing his liking for ice cream. He wouldn't be that surprised if someone else didn't find ice cream delicious, but he would be very surprised if someone else didn't think the red fire engine he was looking at looked red.

Moral talk is about what one should or shouldn't do. For the cognitivist that involves talk about the moral properties of actions and other morally relevant considerations (reasons and circumstances). "Murder is immoral" is true by definition, since murder is defined as the unjustified killing of another person. Some killings may be justified (e.g. when an armed policeman shoots a terrorist), so "killing is immoral", as opposed to "murder is immoral", is not true by definition. Whether or not killing a particular person at a particular time is justified will depend on the circumstances in which the killing occurred and the reasons the agent had for doing it. The only types of action which are always immoral are those, like murder, which are by definition unjustified. So "killing is immoral", not "murder is immoral", is analogous to "fire engines are red". If either statement is true, it is only contingently true.

> But I don't see that this gives you an "objective moral judgement".   The moral judgement (the deploring) is still being made by Tom.  Moral language is a report of values, and somebody needs to be doing the valuing.  Tom's declaration "murder is immoral" is no more an objective one than his declaration "ice cream is delicious".

(Values are not subjective just because somebody needs to be doing the valuing, any more than knowledge is subjective just because somebody needs to be doing the knowing.)

You've picked a bad example here (from your point of view), since "murder is immoral" is true by definition. "Killing is immoral" would be a better example, since if true, it's only contingently so. In fact, assuming it formally means "All killing is immoral", it's false, since some killings are justified (and hence not immoral). Whether or not you find ice cream delicious is a matter of personal taste. "Ice cream is delicious" (formally "All ice cream is delicious") is likely to be false, since even Tom is unlikely to find all possible flavours of ice cream delicious. Otherwise, "Ice cream is delicious" is not analogous to "killing is immoral" because "killing is immoral", though false, is not an expression of personal taste. If you genuinely think that judging the killing of someone to be justified or unjustified is just a matter of personal taste like the taste of ice cream, then I suggest there's something wrong with you. Sorry to put it that bluntly.  

> Both of those are reports of Tom's brain state.  Of course it could be true that Tom's brain really was in that state, and one can make truth-apt statements about the state of Tom's brain.  But the "moral judgement" is still a property of Tom's brain and thus a subjective one and not an objective one. 

Tom believes that the earth is a sphere. Tom's belief is a property of Tom's brain. Does that mean the only truth-apt statement to be made here is "Tom believes the earth is a sphere"? Of course not. Tom's belief that the earth is a sphere is true or false independently of the truth or falsehood of "Tom believes the earth is a sphere". The belief itself is truth-apt, because it has cognitive content (the earth being a sphere). Same applies to moral beliefs. They're another sort of belief with their own cognitive content, concerning the morality or otherwise of actions or possible actions - according to cognitivists.

In reply to Ratfeeder:

(Oh my god this is fun. I'm so much reminded of a boxing ring and its notorious 'ropes'. The ropes at the end of the day amounting to little more than hard, not very resilient logic.)

Post edited at 00:25
OP Jon Stewart 11 Apr 2020
In reply to Gordon Stainforth:

Told you! Although it's telling that Ratfeeder, who is obviously well read, drew you in.

The advantage of discussing philosophy online, rather than formally with experts, is that you can discuss what a range of people actually believe. And people's views are born of different backgrounds and traditions, which might be western philosophy, might be science (like Coel and I), might be Zen Buddhism. An academic argument might be fascinating for philosophers, but it's likely to boil down to "x said this, but y made that critique" rather than "it's obvious to me from being a human being that...", which is where my interest lies. Some philosophers seem to think it's an accolade if what they're saying is almost impossible to decipher and departs radically from human experience and common sense! 

There is also, in my opinion, a unique and pathological obsession amongst philosophers with deductive truth, which makes talking to philosophers an often trying experience. We don't use deductive arguments to find out about the world (except indirectly in maths, so no use here at all). We are pragmatic and use abductive reasoning. Coel has acquired a dose of this pathology, but it's only infected his thinking on morality - he's perfectly happy to use common sense and abductive reason the rest of the time. I'll try to expand more on this in my response to Coel, which I will get round to.

 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

> My analogy concerned statements like "This fire engine is red", not "Tom is experiencing red".

By "This fire engine is red", do you mean:  (A) "This fire engine preferentially reflects light of a particular range of wavelengths" (a statement about the external world, with a truth value) or do you mean (B) "The light from this fire engine triggers the qualia "experiencing red" in Tom"?     (That also has a truth value, but is similar to my "Tom is experiencing red".

> The point was that the former has a truth-value despite the fact that colours, as such (as qualia), exist only as sensory phenomena (they are mind-dependent).

Agreed.

> Analogously, "Tom's confiscation of his mother's credit card was wrong" is a statement about the immorality of a particular action. That action had the property of being wrong (or not), just as a fire engine has the property of being red. Both statements, according to the cognitivist, have truth-values.

My problem is that I don't know what the cognitivist means by saying "that action was wrong" or "... was immoral".   Indeed, this is my central critique of the cognitivist, moral-realist position.  No cognitivist has ever been able to explain to me what they mean by the basic language they are using.

So: "Tom's confiscation of his mother's credit card was wrong" could mean "Jane dislikes the fact that Tom confiscated the card".   Or it could mean: "that action violated societal norms of conduct".  But neither of those get you to moral realism, both amount to value judgements made by other people, and so are subjective.

> In the case of "fire engines look red" Tom is describing the colour of fire engines, and in the case of "ice cream is delicious" he is expressing his liking for ice cream. He wouldn't be that surprised if someone else didn't find ice cream delicious, but he would be very surprised if someone else didn't think the red fire engine he was looking at looked red.

Again, Tom's remark: "fire engines look red" could be taken to be about the external properties of the fire engine and light, or about his inner qualia.   It may be that bees (with very different visual systems that are UV sensitive) experience different qualia than us, when looking at the same object.   Or, someone else might be entirely colour blind (we have three different types of photo-cell in our eyes, that enable us to see in colour; if one of these is defective that person might not experience the qualia "red").  

If you are going to ground morality in our qualia, I don't see how you are going to get from there to moral realism.  Indeed, our qualia are the epitome of things that are subjective. 

 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

> Moral talk is about what one should or shouldn't do.

OK, but what does that mean?  Why should you or shouldn't you do it?  As I see it: "You should not do it" could mean "I would dislike you doing it". (That gets you to emotivism.)  Or it could mean "doing it would attain the opposite of your aim".  (Thus all "shoulds" are instrumental, being of the form "in order to attain X you should do Y", which is then equivalent to "doing Y will attain X".)

But neither of those will get you to "moral facts" and moral realism. 

> For the cognitivist that involves talk about the moral properties of actions and other morally relevant considerations (reasons and circumstances).

Well yes, but that's not helpful unless the cognitivist tells us what he means by "moral" properties of actions.  That, indeed, being the entire question here!

> "Murder is immoral" is true by definition, since murder is defined as the unjustified killing of another person.

Actually, it's defined as the unlawful killing of another person.  And whether something is against the law is distinct from whether we regard it as immoral. 

> The only types of action which are always immoral are those, like murder, which are by definition unjustified.

Again, what do you mean by "immoral" as used in that sentence?  Also, you seem to be suggesting that "justification" can turn an "immoral" act into a "moral" one.  How did you arrive at that conclusion?   Again, in order to examine whether that is the case, we need to know what you mean by "immoral". 

 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

> Otherwise, "Ice cream is delicious" is not analogous to "killing is immoral" because "killing is immoral", though false, is not an expression of personal taste.

Well that depends entirely on what you mean by "... is immoral", and so far you haven't told us!     The only interpretation of "killing is immoral" I can come up with is "I dislike and deplore killing", and that is indeed an  expression of personal taste.  If the phrase means something else to you, what does it mean?

> If you genuinely think that judging the killing of someone to be justified or unjustified is just a matter of personal taste like the taste of ice cream, then I suggest there's something wrong with you. Sorry to put it that bluntly.  

Well ok, but I do!   "Justice" is indeed "in the eye of the beholder", in the sense that it is not an external property of the universe, it is a value judgement that we make.  There is no external and absolute scale of whether something is "fair" or "just", these are feelings that we humans have.  And people can disagree on whether something is fair or just.  (That is why we have a range of political opinion.)

Some people might think that if a husband catches his wife in flagrante delicto with another man then he is "justified" in pulling out a pistol and shooting him.  Others would not (it's a bit of an old-fashioned attitude these days). But there is no truth of the matter as to whether that killing is or is not "justified" -- all there is is different people's opinion on the matter.

 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

> Tom believes that the earth is a sphere. Tom's belief is a property of Tom's brain. Does that mean the only truth-apt statement to be made here is "Tom believes the earth is a sphere"? Of course not. Tom's belief that the earth is a sphere is true or false independently of the truth or falsehood of "Tom believes the earth is a sphere".

Yes, agreed.  That's because there is an external world, and thus there is a fact of the matter as to the nature of that external world.  Thus a belief about that external world can be true or false.

> The belief itself is truth-apt, because it has cognitive content (the earth being a sphere). Same applies to moral beliefs.

OK, but then what property of the external world are these "moral beliefs" about? 

I can re-phrase the terms "Earth" and "sphere" in other terms, such that it is clear what I mean by "Earth" and "sphere" and thus by "Earth is a sphere".

So what does it mean to say that "action X is immoral"?  Can you re-phrase the "... is immoral" so that I know what you mean?    (Because, in the myriad times I've asked moral realists this question, I've never got a straightforward answer.)

The usual attempt is to reply: "By saying it's "immoral" I mean you should not do it".  But then when I ask "ok, but why shouldn't I do it?" they can only attempt a rather sheepish "... because it's immoral", which just goes round in circles.

> They're another sort of belief with their own cognitive content, concerning the morality or otherwise of actions or possible actions - according to cognitivists.

OK, so a "moral" belief is one about the "morality" of actions. Well ok, yes, but now explain what you mean by the label "moral" or "immoral". 

 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> Coel has acquired a dose of this pathology, but it's only infected his thinking on morality - he's perfectly happy to use common sense and abductive reason the rest of the time. I'll try to expand more on this in my response to Coel, which I will get round to.

I look forward to it.

OP Jon Stewart 11 Apr 2020
In reply to TobyA:

> Have you studied philosophy formally Jon? At A level or university? Or read the people whose ideas we discussing directly? 

No, readers will be unsurprised to learn. I have done a load of online lecture courses that are just lectures for fun, and listened to hundreds of podcasts. For contemporary philosophers, it's likely I'll have heard them summarise their position themselves and for the likes of Kant, Hume, Bentham etc. it's what philosophy professors say about them that I'm relying on. So I've got sufficient knowledge of what the terms mean but no deep understanding of any given viewpoint.

If I was well-read in this stuff I might take a different view, but...

> I do wonder if your perspective comes from your interest in evolutionary psychology - so your thoughts to Gordon, that ethics HAS to be based on evolutionary psychology, is a reflection of that.

I simply can't see how discussions of morality can be relevant if they don't take account of what a human being is and how human behaviour comes about. It's a fascinating area of the history of ideas, but since we've learnt more about the world through science, we have to update our understanding of morality with that knowledge.

A discussion of morality that does not take into full consideration evolutionary psychology really is a complete waste of time in my opinion. A discussion of morality is a discussion of human behaviour, what motivates that behaviour and how humans feel it - how can you talk about it if you don't understand how human behaviour works and what motivates it?

So, if I was going to discuss the intricacies of Kant and Hume and Bentham, I'd better go away and read them. But while we've got to refer to them because they did all the leg work of defining the problems and devised the language, in the context of defending an ethical theory that uses reason to come up with the best answers, they're not going to help me out much. Just as you can't come up with meaningful ideas about the philosophy of cosmology without understanding physics, you can't discuss morality without understanding human behaviour.

Post edited at 10:50
 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> A discussion of morality that does not take into full consideration evolutionary psychology really is a complete waste of time in my opinion.

This is one of those rare times where I agree with you entirely!    To understand morality you have to start from the perspective of evolution and the way that evolution has programmed us to live together as social animals.

The problem with academic philosophy is that it doesn't do this, it sees philosophy as distinct from science, and certainly sees meta-ethics as distinct from science (though it will accept that applied ethics has some relation to our evolved human natures). That means that the discussion is all in the abstract, and, as you say, is primarily in terms of what previous philosophers have written, back to before Darwin.  They are completely handicapping themselves by that approach. 

OP Jon Stewart 11 Apr 2020
In reply to TobyA:

> Surely this is just Bentham and Mill's argument over "higher and lower pleasures"? Push pin or poetry (although by the sounds of it a cocaine blowjob might give poetry more of a run for its money than skittles...)?

Not quite the same, my view is more like negative utilitarianism. Rather than some sort of cerebral "higher pleasure" (the upmarket version of a cocaine blowjob), I think that a huge contributor to wellbeing is the absence of anxiety about the future and for others. Take away the war, malaria and famine, and you'll make a difference not only to those who are actually suffering them. Without these threats to our wellbeing, we also get rid of the anxiety that we or loved ones may suffer, and the empathy we feel for those who do. Then you would be able to experience that "deep" pleasure of feeling good about the world, which is really an absence of anxiety. You can read some poetry if you like, but that's not going to have any impact on what I consider the "deep" pleasures we should be aiming to maximise - so isn't much better than a cocaine blowjob, just more socially acceptable.

Post edited at 11:27
 TobyA 11 Apr 2020
In reply to Coel Hellier:

> OK, but then what property of the external world are these "moral beliefs" about?

Human actions surely?

Don't moral realists, or at least some flavours of them, point to numbers as parallel? Isn't "two" a thing (when applied to that thing and this thing) in the same way that "very bad" is (when applied to someone killing someone else in a certain situation)?

I think Ratfeeder might need to help me out with this... I might be confused between naturalist and non-naturalist forms of ethical realism. I'll need to go re-read that bit of the text book I think!

 Coel Hellier 11 Apr 2020
In reply to TobyA:

>> OK, but then what property of the external world are these "moral beliefs" about?

> Human actions surely?

OK, but what property of human actions make them "moral" as opposed to "immoral"? 

If we take the "moral fact" that "Action X is immoral", can you supply a re-phrase of "... is immoral" that means the same thing (and thus elucidates what "... is immoral" means)? 

> Don't moral realists, or at least some flavours of them, point to numbers as parallel? Isn't "two" a thing (when applied to that thing and this thing) ...

Yes, numbers are useful to describe the world.  Saying "there are two sheep in the field" is describing a real property of how the world is. 

> ... in the same way that "very bad" is (when applied to someone killing someone else in a certain situation)?

So what property of the world is the phrase "morally very bad" describing? 

My answer:  moral language reports human feelings about actions. So, if someone says that killing was "very bad" they mean they dislike it a lot. 

Your answer: ?

 1poundSOCKS 11 Apr 2020
In reply to Coel Hellier:

> My answer:  moral language reports human feelings about actions. So, if someone says that killing was "very bad" they mean they dislike it a lot. 

Not necessarily. You might just want to give that impression.

 TobyA 11 Apr 2020
In reply to Coel Hellier:

> OK, but what property of human actions make them "moral" as opposed to "immoral"?

For the moral realist, whether it was a good thing or a bad thing that was done.

> So what property of the world is the phrase "morally very bad" describing?

Its goodness or badness.

I'm not saying I'm convinced, but you're saying morality isn't a thing in the world while some philosophers say it is. Your argument seems to be "I can't see it" and "I don't understand what it could be so it can't exist", but I'm not sure that really proves it.

OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

I hope I can tidy up where we got to and respond to the crucial points - if you think I've left something out that's important, it means I didn't see why it was important so do let me know.

I said some time back "I don't care how a philosopher would categorise my view, what I care about is whether it works" and I meant it! The rabbit hole you've taken us down is that of what a philosopher means by "objective" - for some reason, as a scientist who wouldn't usually make a habit of allowing philosophers to define the language you use, here you make a special case. For example, if someone said, "it's true that the earth goes around the sun", you wouldn't (I presume) say "no, it isn't true, it's merely the most useful description we have, and it relies upon a set of axioms we have to take as given, such as the reliability of sense data and the very existence of the external world at all. The only thing that is true is I think therefore I am". If you did say that, you would be right, but it would be rather frustrating.

So, to respond to what I think are the crucial points as a way out of the rabbit hole:

There is no moral arbiter to give us abstract "oughts" and objective morality

The universe does not contain a moral arbiter which is separate from the thoughts and feelings of humans, and so I agree there is no "objective" morality, where "objective" means "exists regardless of sentient beings". But I'm not looking for anything that's objective by that definition. What I mean by "objective" here is a way of evaluating moral actions that can be shared, through the use of reason, to arrive at a consensus - the answer is the same independent of the subject. This is in contrast to opinions that are "subjective". There is no way of using reason to reach a consensus on whether strawberry or chocolate ice cream is better.

Does it concern you that you have no better reasons for your opinion on racism than on your preferred flavour of ice cream? 

One can have strong preferences and advocate for them

You think you can escape the charge of being a wishy-washy anything-goes relativist with this, but you can't.

One can have strong preferences about the flavour of ice cream, but it's not possible to advocate for one over another. You can't advocate for a flavour of ice cream because you don't have good reasons for your preference. There isn't a consensus and so all the options are available - anything goes: this is the character of subjective preferences.

Now let's say it's discovered that the ingredient used to flavour strawberry ice cream causes bowel cancer. With this information, we now have good reasons to think that chocolate ice cream is better, and we can not only advocate a preference for it, but lobby for a change in policy so that strawberry ice cream is not sold until a suitable alternative ingredient is found. Because we've got a good reason - that strawberry ice cream causes suffering - we're very likely to find a consensus.

The chocolate ice cream isn't objectively better than the strawberry in the philosophical sense - it's better because people universally prefer their ice cream not to give them bowel cancer. But it is objectively better in the common sense definition, and that's the bar of objectivity we need for our ethical theory to work.

When you say you can "advocate" for your preferences what you mean is that you can try to persuade people of your view by giving reasons. Reasons are what crosses the threshold from subjectivity to (common sense) objectivity. You're trying to have your cake and eat it by holding to moral subjectivism while denying the relativism that necessarily goes along with it. You cannot advocate for something if you tell me that you don't even believe yourself that you have good reasons! What kind of advocacy is that?

(There's more to come btw...)

Post edited at 13:41
 Coel Hellier 11 Apr 2020
In reply to TobyA:

> For the moral realist, whether it was a good thing or a bad thing that was done.

What do you mean by "good" or "bad" as used there? 

By "good" or "bad" are you referring to human value judgments, and thus to whether a human likes or dislikes, approves or disapproves?    If so, then that's not moral realism, it's a morality rooted in human desires and feelings, and thus a subjective morality. 

So, the moral realist has to mean something else by those terms.  And that is ...?

> I'm not saying I'm convinced, but you're saying morality isn't a thing in the world while some philosophers say it is.

Yes, but if you ask them what they mean by these basic terms such as "morally bad" or "morally good", they cannot tell you.

> Your argument seems to be "I can't see it" and "I don't understand what it could be so it can't exist", but I'm not sure that really proves it.

The onus is on the moral realist to (a) state clearly what they mean and what they are advocating, and (b) then present the evidence for the scheme.   Prior to that the claim of moral realism falls to Occam, especially since an account of morality as subjective (= being about human desires and feelings), and as deriving from our evolutionary heritage, explains all that needs explaining. 

OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

You can't construct a moral scheme out of reason alone

I agree (I've changed my mind a bit on this recently). But that's not what I want to do - I want to construct an ethical theory which, in its application, uses reason alone to evaluate moral choices (or rather believe in one that someone else has constructed). Just in case you missed it, I'm not trying to achieve a philosophical goal of an ethical theory that meets a philosophical definition of objectivity. I want ethical choices to be made for good reasons, not merely subjective preferences.

So, in such a theory there has to be some foundation: a fundamental "ought" from which every other ought is derived - an axiom, added to the amoral rules of reason so that we can use reason to come to consensus on moral questions. This axiom will be no surprise: suffering is bad and wellbeing is good. By definition of "good" and "bad", things that are "bad" you ought not to increase, while things that are "good" you ought to.

This axiom is exactly like the axioms on which we base the rest of our understanding of the world so that we can live our lives. We take as given that there is an objective world, that causation is an intrinsic feature of it, that there are other minds, etc. None of these are beyond doubt and none are required by the rules of reason. But they are all so obvious we don't give them a second's thought - and we don't claim that anyone is departing from reason when they rely on these axioms.

I'm inviting you to take on this totally obvious, intuitive axiom that allows you to get on in the world, like all the other axioms you're happy to go with. Once you accept it, you can have good reasons for your moral choices. You'll be able to advocate instead of just saying "because I like it". And who knows, if you apply reason rigorously, you might even come to some sensible conclusions!

Post edited at 14:18
 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> I said some time back "I don't care how a philosopher would categorise my view, what I care about is whether it works" and I meant it! The rabbit hole you've taken us down is that of what a philosopher means by "objective" ...

OK, well let's try to not get hung up on terms, but instead see how close to agreement we are.

> The universe does not contain a moral arbiter which is separate from the thoughts and feelings of humans, and so I agree there is no "objective" morality, where "objective" means "exists regardless of sentient beings".

Then we are at least 90% in agreement!     To me, what you've just stated is the essential point that settles a long-running debate. 

Over most of history, most people would have said that there *was* a moral arbiter separate from human thoughts and feelings.  Thus, an action could be "morally wrong" regardless of what humans thought about it or how humans evaluated it, if God said so. 

After God went out of fashion in intellectual circles, however, many philosophers recoiled from the notion that human thoughts and feelings are all there is.  So they asked, is there some guidance as to what we "should do" that is distinct from and independent of human thoughts and feelings? And they (Kant, Utilitarians such as Mill, etc) sought those "moral oughts", not from God, but instead from reason. 

That is a different question from "can reason and facts help us think through what we want, and thus influence what we want?", which of course they can.      Again, the moral-realist question is, are there "objective oughts" that hold quite independently of human evaluation of those oughts?   A clear answer "no" settles a debate which has rumbled on since Euthyphro (and continues today in academic philosophy, where roughly half are moral realists).

 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> What I mean by "objective" here is a way of evaluating moral actions that can be shared, through the use of reason, to arrive at a consensus - the answer is the same independent of the subject. This is in contrast to opinions that are "subjective". There is no way of using reason to reach a consensus on whether strawberry or chocolate ice cream is better.

And I would submit that, no, there is no way that reason can guide us to a complete consensus about values.    That's because we're all different, and so we have different values.  At some level our values derive (at least in part) from our genes, and we all have different genes.

Thus, two people can disagree on what is "fair", and there is no arbiter to tell us which of them is "right", since there is no objective standard of fairness.  That is why we have politics.  Differences of opinion are not solely about facts and reason, they can be about our nature, our values. 

Thus, to give an example, Person A can think it "fair" if the highest rate of tax is 40%. Person B would say that 80% would be entirely fair, whereas Person A would say that's completely unfair.    There is no sense in which one is right and the other wrong, they simply feel differently about the sort of society they want to live in. 

Having said all that, while we are all genetically different, we are also very similar genetically, so -- de facto -- we can indeed reach a large degree of consensus that nearly everyone is happy to go along with, with a bit of compromise.  And that is what we de facto do. 

In a similar way, the kids of the world can reach a large degree of consensus on whether chocolate ice-cream tastes nice! 

> Does it concern you that you have no better reasons for your opinion on racism than on your preferred flavour of ice cream? 

We have to deal with the world as it is.  It might be nice if there were a Sky Daddy who told all of us humans to play nicely with each other, and threatened to take us to the wood-shed if we didn't.  It might be nice if such Sky Daddy stopped a Hitler in his tracks, or cured cancer.  

Yes, it might be nice if we had authoritative, objective backing for whatever our own opinion is (that's why people invent gods in their own image). But wanting it won't make it so, and whether or not I like the standing that moral codes have won't actually change that standing. 

 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> One can have strong preferences about the flavour of ice cream, but it's not possible to advocate for one over another. 

Actually it is.  It's certainly possible for someone to advocate that classical music is aesthetically superior to Metallica! Plenty of people have done so (it's just a bit silly to do so).

> You can't advocate for a flavour of ice cream because you don't have good reasons for your preference.

No, that's not the reason.  The main reason we don't is that it is easy for everyone to have their personal choice, because someone having their personal choice doesn't impinge on someone else.   Where it does impinge -- say, a long coach journey, where music could be played over the stereo system -- people's aesthetic choices would matter to others.

The reason that "moral" choices are generally different, is that the actions we regard as morally salient are generally ones that affect other people.  We thus cannot all have our individual preference.  For example, one person cannot have their wish that everyone pay tax at 80% while another person has their wish that everyone pays 20% tax. 

So, for morally salient issues we have to seek societal consensus and compromise, rather than allowing each their own choice.

> When you say you can "advocate" for your preferences what you mean is that you can try to persuade people of your view by giving reasons.

... or by appeals to emotion or whatever other rhetorical devices you choose to employ.

> Reasons are what crosses the threshold from subjectivity to (common sense) objectivity.

I disagree, since the "reason" can only act on someone's feelings and values.  Reason by itself does nothing.  For example: "How would you like it if someone did that to you?" is an appeal to empathy.  Reason can affect us, yes, but only because we are already moral creatures stuffed full of feelings and values. 

Post edited at 15:05
 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> But that's not what I want to do - I want to construct an ethical theory which, in its application, uses reason alone to evaluate moral choices ...

I don't think that's possible, since at the root of any moral evaluation has to be values, not reason. Reason itself is a-moral, so cannot do moral evaluation.  Only value-laden humans can do that. 

> So, in such a theory there has to be some foundation: a fundamental "ought" from which every other ought is derived.

Agreed. That's what you need. You need to bridge the gap between an "is" and an "ought", and the only way you can do that is with an axiom.  But, the problem is that such an axiom only has standing owing to your advocacy of it (and that of other people).

You cannot derive that axiom itself from reason (as your statement admits), and thus that axiom is your choice -- your subjective choice, because the only way you can choose that axiom is based on your values. 

> This axiom is exactly like the axioms on which we base the rest of our understanding of the world so that we can live our lives. We take as given that there is an objective world, that causation is an intrinsic feature of it, that there are other minds, etc.

But those are "is" axioms not "ought" axioms.  "Is" axioms we can test by seeing how well they map to an external, factual world.

Your "ought" axiom is your declaration, based on your values, of how you want things to be.   Now, suppose someone says, ok, but I have a slightly different axiom, and I prefer mine to yours.   You're then right back to the subjective advocacy of your choices based on your values.

> I'm inviting you to take on this totally obvious, intuitive axiom that allows you to get on in the world, like all the other axioms you're happy to go with. Once you accept it, you can have good reasons for your moral choices. You'll be able to advocate instead of just saying "because I like it".

So what's your choice of axiom based on, other than "because I like it"?    By your own argument, it cannot be reason, so it must be a choice based on your subjective values, agreed? 

(PS, I can tell you've been influenced by Sam Harris, who takes exactly this approach.)

 Ratfeeder 11 Apr 2020
In reply to Coel Hellier:

> No, not necessarily.     I maintain that facts, reason, experience can indeed influence and change our feelings and values, but only by acting on a brain that already is full of feelings and values.

I wouldn't dispute that. I'm not claiming that when Tom experiences the plight of the pigs he is devoid of feelings and values. Of course he isn't. But what matters is how his feelings and values after the experience differ from those before it. What is it about the situation Tom experiences that causes the change? Whatever it is, it's a feature of the situation itself, not just a feature of Tom's brain. And that is an objective, morally relevant feature. It's a bit of a truism, isn't it, to say that there need to be feelings and values if reason and experience are to change them. But what is it about these feelings and values that makes them susceptible to change? Or does the brain have a sort of repertoire of pre-existing feelings and values that pop up when triggered? In which case they would have to be immutable (but exchangeable).

> But this does not require that some of the values be "primary" or immutable.  One can conceive of:

So the brain doesn't have a pre-existing repertoire of feelings and values that pop up when triggered? A more plausible account, to my mind, would be to say that human beings have a pre-existing capacity to feel emotions and form values, which evolve in tandem with our cognitive states - our experience and reason. But on a Humean account, which holds that desires are strictly non-cognitive, it's difficult to see how desires can be influenced by beliefs, reason or even experience. A person either likes the taste of strawberries or she doesn't; not much in the way of reasoning or experience is going to alter that. The pertinent question is, are desires necessarily like that (i.e. strictly non-cognitive)?

> {Facts, reason, experience} acting on {value A} lead to {value B}, but then {facts, reason, experience} acting on {value B} lead to rejecting {value A} and adopting {value not-A}.

> The point is that human brains are neural networks, and no part of a neural network is primary or immutable.  All parts of the network can be changed under the influence of the rest of the neural network.

Yes, but a sharp Humean distinction between facts and values, cognitive beliefs and non-cognitive desires, is not really compatible with this account. Cognitive states will affect other cognitive states, but non-cognitive states will remain unaffected by a change in cognitive states. Only if desires can themselves be cognitive states, can they be influenced by other cognitive states. Pain, for example, is not influenced by believing one thing rather than another.

> But -- I assert -- the experience would have no effect if it were not operating on a human that already had feelings and values in their neural-network brain, feelings that would be triggered by the experience.

Again, that is a truism in the sense that if a human had no feelings and values, there would be no feelings and values for experience to have an effect on. The point, though, is that the feelings and values before the experience are different from those after it. Is that because one set of pre-existing feelings and values is popping down out of the way and being replaced by a different pre-existing set popping up, or because the original set has mutated into a different set, such that the original set no longer exists?

> Thus, you could create a robot with any amount of factual knowledge and reasoning ability, and then expose that robot to any number of experiences.  But, that alone would never lead to the robot experiencing revulsion at the plight of the pigs.   Only if the robot already were a feeling, valuing entity, with capability to feel revulsion, would such feelings be aroused.

But I'm not denying that, as human beings, we have the innate capacity to experience emotions and to evaluate. That is part of our evolutionary heritage. Of course the robot would need the capacity to experience emotions and values in order to have them. But is giving the robot a pre-existing set of emotions and values a fair representation of how the human brain operates?

So far, I've gone along with your lumping emotions and values together as subjective mental states. The assumption has been made that values are basically emotive and hence non-cognitive. But that, again, is a Humean assumption. I think there is an important distinction to be made here.

It's much more difficult to make sense of the idea of the brain being full of pre-existing values than of emotions. That's because it makes little sense to abstract the evaluation of something from the thing or state of affairs that is being evaluated. The former is not independently intelligible from the latter. An emotion, such as anger, say, is much more conceivable in the abstract. You could see someone get angry and not even know what they're getting angry about. So you could imagine anger as a pre-existing entity in the brain's evolved repertoire of emotions. That's much more difficult in the case of values. How would you know what value I place on something without knowing what it is I'm placing value on? The value is attached to the object, person, action or state of affairs.

Hence, we tend to talk about things as sometimes having intrinsic or universal value, not just value for Tom or you or me. We could talk about the intrinsic value of justice, for example. When value is attached to a person or the action of a person, then we're talking about moral values. The quality of a person's character, the rightness or wrongness of an action.

When Tom experiences the situation of the suffering pigs, he may well feel the emotion of revulsion. But that is an independently intelligible mental state. You could abstract the revulsion from the state of affairs that causes it. But that leaves the value, or rather disvalue, that Tom attaches to the state of affairs, as a separate entity. Unlike the feeling of revulsion, the disvalue that Tom perceives is a property of the state of affairs itself. It's difficult to see this as a pre-existing disvalue in the brain's repertoire of pre-existing states.

> Thus there is no logical path from facts or reason alone to feelings and values.  Hume was right on that.  Us arriving at moral judgements depends on us being social animals, programmed by evolution to feel pity or revulsion or sympathy or anger or all sorts of other emotions.   Though, yes, experience, facts and reason can strongly influence whether and how those emotions are triggered.

Indeed there is no logical (deductive) path between the two, but then neither is there a logical (deductive) path between cause and effect (another Humean insight). But if (contra Hume) feelings and values are intrinsic aspects of experiences of how things are in the world, no such path is needed. Similarly, if causes and effects are not really totally separate events, but are really aspects of the same event, then the need for a logical path between them becomes redundant. I don't dispute the evolutionary story. We are "programmed" to understand the world in terms of cause and effect. But that doesn't mean statements about cause and effect are not objective statements about the world. It just means, in Kantian terms, that they are objective in empirical reality, but not in reality as it is in itself.

> So, as an aside, your above sentence might be bettered phrased: "Someone who doesn't care about the welfare of sentient creatures that are not suffering in front of him may well start to care when witnessing the suffering in person".

I was making the rather stronger claim that someone who doesn't care about animal welfare at all may start to care when they actually witness animals suffering. An experience of reality can change a mind.

> (In fact, most humans are like that, we may not care very much about a child starving to death thousands of miles away in Africa, but would care a lot if that same child were suffering in our living room.  Utilitarians don't like this aspect of our evolved natures.)

I think it's difficult to generalize. People are different. I don't think most people don't care, it's just that they already have responsibilities for family and friends and can't be responsible for everyone in the world. They probably feel a bit powerless. Perhaps they'll donate to charities as compensation for that shortcoming. But I agree, we do live in an outrageously unequal world and it's something that morality requires us collectively to address. 

> Whereas I would say the "sense of the wrongness of this situation" is simply a dislike, a revulsion.  Indeed, likely evolution took existing aesthetic sentiments (e.g. revulsion at putrid meat) and re-purposed the mechanism to produce revulsion about another sentient-being suffering. So moral sentiments are basically a sub-category of aesthetic sentiments.

I don't think that does any justice whatsoever to how we actually experience moral values. To borrow a phrase of Jon Stewart's, this is incredibly wishy-washy. Moral judgements sometimes have to be made in life or death situations. I hardly think that's analogous to aesthetic sentiments. I don't think you can reduce ethics to this level of sentimentality, and that's one of the problems with Humean ethics.

> But what does that mean?  I can understand what it means if the declaration that something "is morally wrong" is a declaration made by a human, and what they are saying is equivalent to them expressing a revulsion and a desire that it stop.   That's a value judgement, and value judgements require someone to make that judgement based on their feelings and values.  Necessarily, such value judgements are subjective (being properties of their brain state).

The belief that people are dying from Covid-19 is also a property of people's "brain states" (it's a mental state); it requires someone to have that belief; it's a subjective state. Nevertheless it's either true or false, because it's a cognitive mental state. Either people are dying from Covid-19 or they're not.

> But, if that is not the correct interpretation, then what does "X ...  is in fact morally wrong" even mean?   Can you re-phrase it to elucidate what you mean by "morally wrong"?

It means it's something we shouldn't do, irrespective of what we may otherwise want. Like, for example, going out at the present time and coughing in people's faces. You shouldn't do that even if you don't care about spreading Covid-19. "If you want to help to reduce the spread of coronavirus, then you shouldn't cough in people's faces" just doesn't cut it for me I'm afraid. It's totally inadequate.  

> So, could a robot, programmed with facts and reason (but having no pre-existing feelings, desires or values) also come to that apprehension?

If it was programmed to have the capacity to feel emotions and have desires, then yes.

 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

> It's a bit of a truism, isn't it, to say that there need to be feelings and values if reason and experience are to change them. But what is it about these feelings and values that makes them susceptible to change?  [...] But on a Humean account, which holds that desires are strictly non-cognitive, it's difficult to see how desires can be influenced by beliefs, reason or even experience.

These things can all affect each other, since human brains are neural networks, and in a neural network all the different influences are inter-twined.
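
To make that concrete, here's a toy sketch (Python, purely illustrative - the tiny network and training loop are invented for the example, not a model of a brain): under gradient descent, every weight in the network gets nudged on every step, so no part of it is primary or immutable.

    import numpy as np

    rng = np.random.default_rng(0)
    # Two layers of weights; neither layer is privileged or fixed.
    W1 = rng.normal(size=(4, 3))
    W2 = rng.normal(size=(1, 4))

    x = rng.normal(size=3)    # an "experience" (input)
    target = np.array([1.0])  # what the network is pushed towards
    lr = 0.1

    for _ in range(100):
        h = np.tanh(W1 @ x)   # hidden activations
        y = W2 @ h            # output
        err = y - target
        # Gradients reach *every* weight: the whole network is mutable.
        dW2 = err[:, None] * h[None, :]
        dW1 = ((W2.T @ err[:, None]) * (1 - h[:, None] ** 2)) @ x[None, :]
        W2 -= lr * dW2
        W1 -= lr * dW1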

Anyway, most of this analysis is irrelevant to the central point: so long as moral judgements are nothing but brain states, then moral realism is false.   For moral realism to hold, there need to be "moral facts" that are independent of the state of our brains.  

> The belief that people are dying from Covid-19 is also a property of people's "brain states" (it's a mental state); it requires someone to have that belief; it's a subjective state. Nevertheless it's either true or false, because it's a cognitive mental state. Either people are dying from Covid-19 or they're not.

Yes, agreed, the belief "people are dying from Covid-19" is cognitive because there is an external fact "people are dying from Covid-19" about the external world that is either true or false, and remains true or false regardless of our belief about it.  

So, what property of the external world is a "moral fact" about, such that the "moral fact" is either true or false, regardless of our belief about it?

Post edited at 16:51
 Coel Hellier 11 Apr 2020
In reply to Ratfeeder:

>> But, if that is not the correct interpretation, then what does "X ...  is in fact morally wrong" even mean?   Can you re-phrase it to elucidate what you mean by "morally wrong"?

> It means it's something we shouldn't do, irrespective of what we may otherwise want.

And what does "we shouldn't do it" mean? Why shouldn't we do it?  (And a response "because it's morally wrong" just goes round in circles.)

> Like, for example, going out at the present time and coughing in people's faces. You shouldn't do that even if you don't care about spreading Covid-19. "If you want to help to reduce the spread of coronavirus, then you shouldn't cough in people's faces" just doesn't cut it for me I'm afraid. It's totally inadequate.  

So the "shouldn't do it" is not just an instrumental should (deriving from "in order to reduce the spread ...").     So, by "you shouldn't do it" do you mean "I don't want you to do it"?  

If not, what do you mean? 

OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

> And I would submit that, no, there is no way that reason can guide us to a complete consensus about values.    That's because we're all different, and so we have different values.  At some level our values derive (at least in part) from our genes, and we all have different genes.

What I'm proposing is that reason (plus the universally acceptable axiom "suffering is bad") is precisely what we need to cut across the diversity of values. This approach allows us to determine not only which moral acts are preferable, but which sets of values are the best to hold. Sure we have different genes and (culturally) evolved instincts, but we also have reason, which can allow us to consider our values critically and to change them.

> Thus, to give an example, Person A can think it "fair" if the highest rate of tax is 40%. Person B would say that 80% would be entirely fair, whereas Person A would say that's completely unfair.    There is no sense in which one is right and the other wrong, they simply feel differently about the sort of society they want to live in. 

Of course there's a way. You consider the consequences as best you can and choose the option which by your reasoned analysis entails the least suffering. At this level of policy, this type of consequentialism works a treat (where it breaks down is at the individual level where the answer it gives you may clash with your emotions).

> It might be nice if there were a Sky Daddy who told all of us humans to play nicely with each other, and threatened to take us to the wood-shed if we didn't. 

I have explained that no Sky Daddy is needed. All you need to do is accept - because it's obvious - that suffering is bad, and then you can reason your way to an answer.

> No, that's not the reason.  The main reason we don't is that it is easy for everyone to have their personal choice,

Here I simply think you're wrong. Everyone can easily have their personal choice about whether to eat junk food or a balanced diet, and yet we advocate for the latter. We have good reasons to advocate for a balanced diet, and they boil down in the end to less suffering. Advocating for a subjective personal opinion that has no grounding in ideas of "good" and "bad" that we universally agree on isn't advocating, it's just saying "I like it". Advocating involves giving reasons, and what separates reasons from mere opinion is that they appeal to an underlying consensus about the world.

> The reason that "moral" choices are generally different, is that the actions we regard as morally salient are generally ones that affect other people.  We thus cannot all have our individual preference. So, for morally salient issues we have to seek societal consensus and compromise, rather than allowing each their own choice.

Again, this is just wrong. We consider a woman putting a cat into a wheelie bin a serious moral transgression - what makes us consider an issue moral is not a lack of consensus about preferences, it's a bunch of evolved drives: whether an action causes harm, whether it triggers disgust, whether it contravenes norms of social cohesion, etc. (see Haidt).

I agree that reasons have to act on values - they only work where people believe in reason. Reason has to compete with the small-minded values of religious adherence, nationalism, and self-promotion through consumerism.

> that axiom is your choice -- your subjective choice, because the only way you can choose that axiom is based on your values...But those are "is" axioms not "ought" axioms.  "Is" axioms we can test by seeing how well they map to an external, factual world...So what's your choice of axiom based on, other than "because I like it"

The reason it's correct to assume the moral axiom "suffering is bad" is because it's universal - it's in its application that people differ. To quote Sam Harris, just put someone's hand on a hot stove, and you'll soon see whether or not they agree with it. And I'm afraid you can't test the axiom that the external world exists by consulting the external world. You just take it as a given because it allows you to live in the world.

(My view is slightly different from Harris - he argues that you actually can get an ought from an is, and thinks the hot stove example proves it. I just think that shows it's universally held and therefore works as an axiom. I heard him discuss this with Sean Carroll, and agree with Carroll that what Sam is doing is inserting an additional axiom that's not already there in reason - but I'm perfectly happy to add the axiom because just like assuming the existence of the external world, it works).

Post edited at 17:04
 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> What I'm proposing is that reason (plus the universally acceptable axiom "suffering is bad") is precisely what we need to cut across the diversity of values.

However, the axiom "suffering is morally bad and we should minimise suffering" does not get you very far at all.  It immediately runs into all the usual problems of utilitarianism. 

For example, how do we aggregate across all different people? Does everyone count equally? Am I supposed to rate the suffering of someone in a country I've never been to equally with that of my own child?   How do I evaluate the suffering of different people, since all I've got is their subjective report?   Et cetera.    
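
To put the aggregation problem concretely, a toy sketch (Python, purely illustrative - the function and the numbers are invented for the example): the "minimise suffering" axiom doesn't fix the weights, and the weights are exactly where the disagreement lies.

    def total_suffering(sufferings, weights):
        # The weights encode whose suffering counts, and by how much.
        return sum(w * s for w, s in zip(weights, sufferings))

    reports = [5.0, 2.0, 9.0]    # subjective reports from three people
    impartial = [1.0, 1.0, 1.0]  # everyone counts equally
    kin_first = [3.0, 3.0, 0.5]  # my family weighted above a stranger

    # Same facts, different verdicts, because the weights differ:
    print(total_suffering(reports, impartial))  # 16.0
    print(total_suffering(reports, kin_first))  # 25.5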

So, if by adopting a "minimise suffering" axiom you are implying that we should care no more for our own child than we do about children thousands of miles away that we've never met, then no, this is not a "universally acceptable" axiom because most people would disagree with it.      

At that point, since the axiom is merely your personal suggestion, why do we give it any more status than any other person's advocacy of how they want society to be?

 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> The reason it's correct to assume the moral axiom "suffering is bad" is because it's universal - it's in its application that people differ. To quote Sam Harris, just put someone's hand on a hot stove, and you'll soon see whether or not they agree with it.

OK, so "suffering is bad". By "bad" do you mean "something we dislike"?  In which case, suffering is indeed something we dislike, but that's a tautology (in that it would not be "suffering" unless we disliked it).  So if that's what "bad" means, then "suffering is bad" is not an axiom, it's a tautology.

So maybe the axiom is "suffering is morally bad", in which case this axiom is basically defining what we mean by "morally bad" (with the implication that we "should" minimise it), agreed? 

But if so, then the hot-stove example is clearly irrelevant, since removing ones hand is because we dislike the sensation.   So Sam Harris's hot-stove "suffering is bad" example seems pretty much an empty tautology to me. 

So if the axiom is instead "we should minimise suffering", then ok, but then that's just a declaration of how you would like things to be.   Which you, of course, are entirely free to advocate, but equally other people are free to advocate something different. 

 Pefa 11 Apr 2020
In reply to Jon Stewart:

> Is there more to morality than our evolved instincts 

What makes you think morality evolved?

Ps. I haven't read the thread BTW as I can see it is full of philosophy old and new that I am unacquainted with. 

OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

> However, the axiom "suffering is morally bad and we should minimise suffering" does not get you very far at all.  It immediately runs into all the usual problems of utilitarianism. 

I think that "usual problems of utilitarianism" are vastly overstated. Mainly, they're bad arguments that fail to make a good fist of considering the consequences.

> For example, how do we aggregate across all different people? Does everyone count equally? Am I supposed to rate the suffering of someone in a country I've never been to equally with that of my own child?   How do I evaluate the suffering of different people, since all I've got is their subjective report?   Et cetera.    

I agree that the application of utilitarianism is where all the debate and discussion lies. I'm not pretending that there are easy answers to these questions. The approach I advocate is accepting the principle of utilitarianism, and then when you have a moral choice, using rational arguments to help make your decision. Reason will take you as far as you can go, but you'll get stuck because you won't have the data. Utilitarianism isn't an answer to every moral question, it's an approach to moral reasoning that puts outcomes before emotions.

> So, if by adopting a "minimise suffering" axiom you are implying that we should care no more for our own child than we do about children thousands of miles away that we've never met, then no, this is not a "universally acceptable" axiom because most people would disagree with it.      

The axiom is "suffering is bad", and is universal. Where people start to disagree is when they're asked to apply it and it starts to conflict with their evolved instincts. They're not disagreeing with the axiom, they're being moral hypocrites. We are all moral hypocrites, because there was no adaptive pressure for evolution to make us morally consistent. As well as moral instincts, we also ended up with reason, and we should not expect the two to reach the same answers.

Utilitarianism is generally impractical to implement in your day-to-day life. We have evolved to be moral hypocrites and looking for an ethical theory that is both consistent and works in practice is a total non-starter. Had philosophers known about evolution, they wouldn't have spent so much time on this fruitless endeavour.

The reason I promote utilitarianism is that it works very well in the context of policy, where actions can be considered without reference to the individual - policies have to work in general.

> At that point, since the axiom is merely your personal suggestion, why do we give it any more status than any other person's advocacy of how they want society to be?

The axiom is universal. Just try the hot stove experiment to see for yourself.

Edit: crossed messages there. The axiom "suffering should be avoided" works and the hot stove example will prove it: should I move my hand away?

It's harder to come up with a moral axiom than you're making out. It's a basic axiom, not a statement of how society should work. It has to work, using the rules of reason, to generate consistent, morally meaningful statements in all cases (in principle). If you're convinced that any old axiom will do, and the utilitarian axiom is just a personal preference, let's hear a few more examples that would do the job just as well.

Post edited at 19:01
OP Jon Stewart 11 Apr 2020
In reply to Pefa:

> What makes you think morality evolved?

Same reason I think feet and the ability for abstract thought developed: it's a common aspect of human beings, and evolution is what gave human beings their characteristics.

 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> Edit: crossed messages there. The axiom "suffering should be avoided" works and the hot stove example will prove it: should I move my hand away?

The hot-stove example only shows that people do (not "should") want to minimise their own suffering.  It does not show that they "should" minimise the suffering of people in general, and does not show that they "should" attempt to minimise the suffering of all people equally. 

Similarly, "suffering is bad" does not entail "therefore I need to attempt to minimise it". It is entirely possible for someone to agree "suffering is bad" but then "however, it's not my problem, unless it's me or my family".   It certainly does not entail "everyone's suffering counts equally".

So your axiom simply doesn't imply what you claim. 

Indeed, I will go as far as to say that an over-riding principle of: "we should minimise the suffering of everyone equally" is a grossly *immoral* axiom.  As I see it, every child deserves parents (or at least surrogate parents) who love them far more than they love unrelated strangers and who care about their suffering far more than that of unrelated strangers.  

So your axiom is simply not universal, since I (and many others) dissent from it. 

So people who do not want to minimise the suffering of everyone equally are not "moral hypocrites" since they never signed up to that in the first place.

Which illustrates that the only standing your axiom has is the advocacy of those who choose to advocate it -- and that makes it just as subjective as any other moral scheme. 

Post edited at 19:43
OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

> The hot-stove example only shows that people do (not "should") want to minimise their own suffering.  It does not show that they "should" minimise the suffering of people in general, and does not show that they "should" attempt to minimise the suffering of all people equally. 

It shows that the avoidance of suffering is a goal of every person, and that's all it needs to show. "Shoulds" and "oughts" logically only relate to goals. As you say, there are no goals inherent in reason alone. The axiom is a catch-all statement of human goals.

> Similarly, "suffering is bad" does not entail "therefore I need to attempt to minimise it". It is entirely possible for someone to agree "suffering is bad" but then "however, it's not my problem, unless it's me or my family".   It certainly does not entail "everyone's suffering counts equally".

The game is to use only reason and the axiom to come up with moral propositions.

So you came up with a moral proposition that only the suffering of me and my family matters. We've agreed on the axiom, that all people have the goal of avoiding suffering, so you now have to reason your way from there to "only the suffering of me and my family matters". You can't do it, you just pulled "me and my family are more important" out of nowhere. 

The axiom implies that no one person's suffering is more important than any other, because that's the default position in the absence of good reasons that one person's suffering is more important than another.

> Indeed, I will go as far as to say that an over-riding principle of: "we should minimise the suffering of everyone equally" is a grossly *immoral* axiom. As I see it, every child deserves parents (or at least surrogate parents) who love them far more than they love unrelated strangers and who care about their suffering far more than that of unrelated strangers.  

That's not the principle. The principle is that suffering should be reduced, and if you're going to choose whose suffering to reduce, reason will steer you to the most effective option - probably the person whose suffering is most severe, rather than trying to reduce it a little bit for everyone. And then you're giving a utilitarian account - that reducing suffering in one distribution would cause suffering elsewhere - as a reason it's not the right thing to do! I'm 100% on board with that type of argument.

> So your axiom is simply not universal, since I (and many others) dissent from it. 

You only dissent from it when you change it into something which it is not. The axiom is merely a statement of the universal human goal to avoid suffering.

> So people who do not want to minimise the suffering of everyone equally are not "moral hypocrites" since they never signed up to that in the first place.

You changed the axiom. The axiom is just "suffering should be avoided". All it does is state the universal human goal. Reason dictates that you can't just declare yourself more important than everyone else without good reason, that's not in the axiom. 

No matter what moral scheme you think's correct, I guarantee you're a hypocrite. Human beings are simply not endowed with the behavioural motivations to maintain moral consistency.

Post edited at 21:11
 Coel Hellier 11 Apr 2020
In reply to Jon Stewart:

> It shows that the avoidance of suffering is a goal of every person, and that's all it needs to show.

It shows that the avoidance of their own suffering is a goal of every person.   It shows no more than that. 

> You can't do it, you just pulled "me and my family are more important" out of nowhere. 

I didn't pull it out of nowhere, I pulled it out of human nature. 

> The axiom implies that no one person's suffering is more important than any other, because that's the default position in the absence of good reasons that one person's suffering is more important than another.

But, so far: I don't agree with the axiom, as stated, and I don't agree that we should base our morality on axioms plus reasoning, and I don't agree that a human needs "reasons" to care more about their family than about unrelated strangers. 

The last is based on their human nature and feelings, not on axioms and reason. But so what?, that just suggests that an axiom-plus-reasons account of morality is incorrect and not warranted.

> You only dissent from it when you change it into something which it is not. The axiom is merely a statement of the universal human goal to avoid suffering.

... their own suffering ... and (often) that of their family and loved ones.  De facto, it is not a universal human goal to avoid the suffering of distant, unrelated strangers.

> Reason dictates that you can't just declare yourself more important than everyone else without good reason, that's not in the axiom. 

If it's not in the axiom then people are not going to sign up to the axiom.    People are more important to themselves than other people are.

OP Jon Stewart 11 Apr 2020
In reply to Coel Hellier:

We're talking past each other a little here. I'm not trying to show that utilitarianism is an ethical theory that works for individuals' moral choices. I'm fully aware that at the individual level the statements don't chime with human nature, and so the theory isn't useful there. You seem to be trying to tell me something I've already been clear I understand.

I clearly haven't come up with a form of words for the axiom that's satisfactory, even though it's very obvious what the content is. Let's try this:

"all other things being equal, a world with less suffering is preferable to a world with more suffering".

It really is a universal axiom. I think it's a waste of time trying to argue against a truism, that suffering is bad and we should avoid it where we can.

So again, the reason I promote utilitarianism is that at the level of policy it gives good answers, through rational consideration of evidence. The reason it works for policy is that its shortfall - the fact of human nature that people feel that they are more important than others just by virtue of being themselves - does not apply because policy decisions have to be fair rather than favouring the policy maker.

My position is that the best available ethical theory is utilitarianism because it uses reason and evidence, and it works for making decisions where many people have to be treated fairly - but unfortunately it breaks down at the level of the individual. The reason this happens is that human beings have evolved to serve the interests of their genes, and that's simply not compatible with ethical theories built on reason.

I'm still interested in your position. 

As I understand it, it says that morality is a subjective preference that isn't any different from a preference for one flavour of ice cream over another. You can try to advocate for your preference, but in doing so, you're not appealing to reason, you're simply using whatever devices you have to change someone's mind. There aren't moral ideas that are supported by good reasons and moral ideas that aren't - they all have the same standing, because there is no objective (in the common sense, consensus sense) way to compare them. The reason that Martin Luther King was successful in changing people's minds was that he was a great orator and convinced people through his influencing skills to agree with him. Richard Spencer, on the other hand, simply isn't as good at changing people's minds, and this is why his ideas are unpopular despite having equal value as just one choice of the many available.

Have I got all of that right?

 Ratfeeder 11 Apr 2020
In reply to Coel Hellier:

> OK, but what does that mean?  Why should you or shouldn't you do it?  As I see it: "You should not do it" could mean "I would dislike you doing it". (That gets you to emotivism.)  Or it could mean "doing it would attain the opposite of your aim".  (Thus all "shoulds" are instrumental, being of the form "in order to attain X you should do Y", which is then equivalent to "doing Y will attain X".)

> But neither of those will get you to "moral facts" and moral realism. 

> Well yes, but that's not helpful unless the cognitivist tells us what he means by "moral" properties of actions.  That, indeed, being the entire question here!

Yes, that is the central question, and it's about time I answered it. The answer is quite simple. Moral properties are attributed to, and help to describe, actions. For example, a woman in the library drops a pile of papers on the floor, and looks dispirited at the prospect of having to pick them all up. A man sitting nearby gets up and helps the woman to gather up the papers. The man's action is appropriately described as being helpful. The term 'helpful' itself has a moral implication. What sort of a property is 'helpfulness'? It's a positive moral property (of actions like the one described). Was the man's action helpful? Yes it was. Is it true that the man's action was helpful? Yes it is. Was the man's action in fact helpful? Yes it was. It is a moral fact about an event in the "external world". The action could also be appropriately described as kind. That's another positive moral property, to which all the above applies. So actions can have all sorts of moral properties, positive and negative. Some actions can have both positive and negative moral properties at the same time. And they give reasons why some actions might be morally good or right and others not. Why was the man's action good? Because it was helpful and kind.

A subtlety that particularists like to point out (see below), is that properties which are positive in one set of circumstances might be negative in another. For example, if I were to help a terrorist carry out an attack, my helpfulness would be morally negative. It would count against the action, not for it. So the negativity or positivity of moral properties is always dependent on context and circumstances.

> Actually, it's defined as the unlawful killing of another person.  And whether something is against the law is distinct from whether we regard it as immoral. 

Yes, in law it's defined as unlawful killing, and yes, whether something is against the law is distinct from whether we regard it as immoral. That's why there's a philosophical definition - unjustified killing. If it is unjustified, it is necessarily immoral. Why is it unlawful? Because it is unjustified. What is the law based on - or rather, what should the law be based on - if not morality?

> Again, what do you mean by "immoral" as used in that sentence?  Also, you seem to be suggesting that "justification" can turn an "immoral" act into a "moral" one.  How did you arrive at that conclusion?   Again, in order to examine whether that is the case, we need to know what you mean by "immoral". 

Absolutely! An act can be immoral in one set of circumstances yet moral in another. An action, or rather a type of action, e.g. killing or stealing, is not moral or immoral in some general, absolute sense, but according to the circumstances in which it's carried out, which either justify it or not. Shooting a terrorist who is about to blow up a classroom full of children is a moral act, for obvious reasons. Stealing a loaf of bread to feed a starving family is a morally justified act, provided no alternative source of food is available and the bread would not otherwise have fed starving people. And we can explain why such acts are moral or immoral by citing the moral properties which appropriately describe them, in relation to the circumstances in which they are carried out.

 freeflyer 12 Apr 2020
In reply to Pefa:

> it is full of philosophy old and new that I am unacquainted with.

It is, and your post is quite reserved and avoids any judgement on that discussion, I feel.

Fortunately the East can save us, as they too had endless discussions and debates about the right way to be. Occasionally a great teacher would cut to the chase and identify a way forward; for example, Siddhartha Gautama, Lao Tzu, and someone less well-known: Bankei.

Bankei taught that we are all endowed from birth with the ability to understand how to act, but we lose touch with that knowledge, and therefore our efforts should be to regain it, in simple ways. In one classic tale, a Zen master told a story of a miracle and challenged Bankei to surpass it. Bankei's response was: "My miracle is that when I feel hungry I eat, and when I feel thirsty I drink."

 Ratfeeder 12 Apr 2020
In reply to Coel Hellier:

> By "This fire engine is red", do you mean:  (A) "This fire engine preferentially reflects light of a particular range of wavelengths" (a statement about the external world, with a truth value) or do you mean (B) "The light from this fire engine triggers the qualia "experiencing red" in Tom"?     (That also has a truth value, but is similar to my "Tom is experiencing red".

This is a logical error. In ordinary language, if someone says "This fire engine is red" they are referring to (or have in mind) the qualia of redness. They may not even know about the preferential reflection of light of a particular range of wavelengths. But that doesn't mean the subject of reference is really the speaker and not the fire engine. We can translate ordinary language statements like this into scientific analysis language statements without altering the subjects of the respective sentences. Hence:

"This fire engine is red" = "This fire engine preferentially reflects light of x range of wavelengths"

(The two sentences have the same truth-conditions.)

Whereas;

"Tom is experiencing red" = "Tom has x neuro-physiological state triggered by the preferential reflection of light of y wavelengths"

(These two sentences have the same truth-conditions.)
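To make the equivalence concrete, here's a toy sketch in Python (the field names and the 620-750 nm band are invented purely for illustration): two predicates stated at different levels of description, with the same truth-conditions, and in both cases the subject is the fire engine, not the speaker.

# Toy illustration: same truth-conditions, different levels of description.
RED_BAND_NM = (620, 750)  # an assumed wavelength range for "red"

def is_red_ordinary(thing):
    # Ordinary-language description: the thing "is red".
    return thing["colour"] == "red"

def is_red_scientific(thing):
    # Analysed description: preferentially reflects light in the red band.
    low, high = RED_BAND_NM
    return low <= thing["peak_reflectance_nm"] <= high

fire_engine = {"colour": "red", "peak_reflectance_nm": 650}

# Both descriptions are true of the fire engine (or both false of it);
# neither is a statement about anyone's brain state.
assert is_red_ordinary(fire_engine) == is_red_scientific(fire_engine)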

If you were a witness to a hit and run incident and a policeman asked you what colour the relevant car was (let's say it was red), you wouldn't answer in terms of the preferential reflection of light of a particular range of wavelengths, would you? If you did, you'd need to remember what the particular range of wavelengths was, which you may not, even though you do remember that the relevant car was red. So in the context, it's a lot easier just to say that the relevant car was red. The policeman would be saved the task of going away to find out what colour qualia x range of wavelengths is equivalent to.

Similarly, when moral realists talk about moral properties (see my previous post about what that means), they are talking about the properties of events in the "external world", not about people's subjective brain states, even though such properties, as described in ordinary language, are mind-dependent. That was the point of the analogy.

Morality is about practical reasoning. It just isn't practical to translate ordinary language into scientifically analysed statements for our everyday purposes.

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> Let's try this:

> "all other things being equal, a world with less suffering is preferable to a world with more suffering".

> It really is a universal axiom. I think it's a waste of time trying to argue against a truism, that suffering is bad and we should avoid it where we can.

OK, yes. I accept your axiom.  A world with less suffering is indeed preferable.  But, it's one of a large number of preferences people have.  So yes, your axiom is true, but what I don't agree with is elevating that one statement to being primary, and  identifying that statement with "morality".   I certainly don't accept any implication that, owing to that axiom, a given human "should" care about the suffering of all humans equally. 

Here is another statement: "All other things being equal, a world in which all children have parents (or surrogate parents) who love them and care about them way more than they care about unrelated strangers, is vastly preferable to a world in which all adults cared no more for their own children and family than they do about unrelated strangers in distant lands."

I think that statement is also obviously true. It's hard to think through the consequences if parents didn't care about their children more than about unrelated strangers, but I don't think it would be a happy world.    It would amount to children being provided for by "the state", rather than by loving families, and sounds to me abhorrent and disastrous. 

I'm also a bit surprised that you, after emphasising the importance of our evolutionary nature in understanding morality, would then advocate a "utopia" so alien to human nature.

I suggest this comes from an -- entirely wrong! -- intuitive feeling that there is something badly wrong if morality is "merely" subjective, and that there needs to be some way in which it is "objective", and thus one pursues that end, no matter how alien the end result.

The cure is to realise that, actually, there is nothing at all wrong with morality being subjective and about our feelings and values. That is not a second-rate morality.  It is the only morality that there is!

> So again, the reason I promote utilitarianism is that at the level of policy it gives good answers, ...

OK, so you may not be advocating the axiom as being the central "moral" axiom, applying everywhere to everyone, and instead just want policy makers and governments to adopt it.  And yes, you're right, *governments* should not play favourites, and prefer one family's children over another's.  (Families, of course, *should* play favourites and prefer their own children.)  Yes, we can agree that that's a good principle. 

But again, realise that the only standing that such an axiom has is your advocacy of it (and the advocacy of anyone else who wants to advocate it).  And that means that someone else is equally free to advocate something else. 

If you were to accept that all there is to morality is human feelings, desires and advocacy, then we're pretty much in agreement. 

> ... but unfortunately it breaks down at the level of the individual. The reason this happens is that human beings have evolved to serve the interests of their genes, and that's simply not compatible with ethical theories built on reason.

So let's, then, reject the misconceived idea of building ethical theories "on reason", and go with ethical theories built on human nature! 

Really, trying to build a utopia that is alien to human nature doesn't work and gives disastrous results -- as the communists, for one, discovered.

Post edited at 08:53
 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> As I understand it, [your position] says that morality is a subjective preference that isn't any different to a preference for one flavour of ice cream to another.

Yes.  (As an aside, there is a tendency to be aghast at such a suggestion, and to regard it as a second-rate account of morality, amounting to morality not mattering much.  This is simply wrong: our feelings, desires, values (our qualia) are actually the *only* things that matter to us!  This account roots morality in our deepest human nature, as opposed to in abstract, a-moral theoretical edifices.)

> You can try to advocate for your preference, but in doing so, you're not appealing to reason, you're simply using whatever devices you have to change someone's mind.

Yes, although "reason" is one of the things you can use in your appeal, along with appeals to people's values, empathy, et cetera.

> There aren't moral ideas that are supported by good reasons, and moral ideas that aren't - they all have the same standing because there is no objective (in the common sense, consensus sense) way to compare them.

Moral ideas cannot have *objective* standing, and yes, there is no *objective* way to compare them (only humans can compare them, based on their human values).

But that doesn't mean they "all have the same standing".  Some moral ideas are advocated by vast numbers of people, whereas other moral ideas are rejected by most people and advocated only by a fringe.    Humans can and do compare and rate moral ideas all the time.  And, based on typical human opinion, different moral ideas have very different standings.

> The reason that Martin Luther King was successful in changing people's minds was that he was a great orator and convinced people through his influencing skills to agree with him.

Yes, basically.  Of course "great oratory" has to resonate with human nature: emotional appeals, appeals to empathy, appeals to notions of fairness.

> Richard Spencer, on the other hand, simply isn't as good at changing people's minds, and this is why his ideas are unpopular ...

Yes!     Whereas (unfortunately) Hitler *was* successful in persuading people to follow him, and as a result did great damage (<= that's my value judgement in case you were wondering).

> ... despite having equal value as just one choice of the many available.

No, no, no!  You were doing so well!   You'd nearly got it right!   But, as an intuitive moral realist you just can't help falling back on notions of *objective* value.

No, Richard Spencer's ideas do not have "equal value", since the only value judgements are those made by people.  (You can't have an objective value, it's the nonsensical notion of a value without a valuer.)  And, no, people in general do *not* place equal value on Spencer's ideas compared to those of MLK.  As you just said,  Spencer's ideas are unpopular. 

Post edited at 09:20
 Pefa 12 Apr 2020
In reply to freeflyer:

I had not heard of this Zen master called Bankei until I read your post. Having now read some of his teachings on this cold grey Sunday morning, I see parallels to many others, myself included, who found many of the traditional forms of practice to have lots of extraneous elements that focus too much on worship of an enlightened being rather than on your gaining the same enlightenment. Reading his teachings carries that unmistakable sense of Satori: an even more direct path than Zen, already so direct, though as he acknowledged it still comes down to much effort in Zazen. In this birthless, deathless state of ultimate reality there is no male or female, so it was particularly heart-warming to see him speak up against the religious orthodoxy of the 17th century, which held that women could not achieve enlightenment; that must have given immeasurable hope to so many Buddhist women then and thereafter.

Thanks for sharing that little gem of information. 

 Coel Hellier 12 Apr 2020
In reply to Ratfeeder:

> Some actions can have both positive and negative moral properties at the same time. And they give reasons why some actions might be morally good or right and others not. Why was the man's action good? Because it was helpful and kind.

> A subtlety that particularists like to point out (see below) is that properties which are positive in one set of circumstances might be negative in another. For example, if I were to help a terrorist carry out an attack, my helpfulness would be morally negative.

Yep, sure. And all of these things, such as "helpfulness", are evaluations made by humans.  They depend on what humans want, how they evaluate outcomes, et cetera.   So "moral" language amounts to labels of approval or disapproval by humans.   We approve of someone helping a woman trying to feed her starving children, so call it "moral", and we disapprove of helping someone to carry out a terrorist attack, so call that "immoral".

All that is fine. And that all gives you a subjective morality based on human feelings, desires and values. 

It does not give you "moral facts", it does not make moral statements cognitive (because they report a human brain state, a human evaluation of something; they are not truth-apt statements about external facts) and it does not give you moral realism. 

"Moral Realism (or Moral Objectivism) is the meta-ethical view that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them."

 Coel Hellier 12 Apr 2020
In reply to Ratfeeder:

> In ordinary language, if someone says "This fire engine is red" they are referring to (or have in mind) the qualia of redness. They may not even know about the preferential reflection of light of a particular range of wavelengths.

Or, rather, describing something as "red" could refer either to the qualia of redness, or to the external interaction with light -- it could be either, or a mixture of the two, depending on context. 

> Similarly, when moral realists talk about moral properties (see my previous post about what that means), they are talking about the properties of events in the "external world", not about people's subjective brain states, even though such properties, as described in ordinary language, are mind-dependent.

Yes, when moral realists talk about moral properties they need to be talking about the properties of events in the "external world", and not about people's subjective brain states.

But, your account of moral properties in your previous post was about people's subjective brain states. 

That's clear from the example of helping a woman trying to feed her starving children (we evaluate that as "moral") versus helping a terrorist carrying out an attack (we evaluate that as "immoral").  Necessarily, the language reports our own internal evaluations based on our values, desires and feelings.  Thus it is about people's subjective brain states.

Thus, everything you've said in recent posts is in line with my account that moral language is not cognitive, but is a report of human subjective values.  Nothing you've said convinces me that there are "moral facts" that hold independently of how humans feel about things. 

 Pefa 12 Apr 2020
In reply to Coel Hellier:

> So let's, then, reject the misconceived idea of building ethical theories "on reason", and go with ethical theories built on human nature! 

Human nature is to take care of all children and all people equally; it's just that some of us don't realize our true nature, and certain economic systems are set up to perpetuate that.

> Really, trying to build a utopia that is alien to human nature doesn't work and gives disastrous results -- as the communists, for one, discovered.

I'm a communist and I didn't discover any 'disastrous results' from socialism, and nor did millions of others; in fact most people in the USSR wanted it to stay socialist, but it was brought down from within (and with the usual attacks on the system from capitalist states). Our true human nature is to care, share and live in peace, harmony and security; not to be selfish, greedy and uncaring for others that are not related to us, as there really is no such thing as being unrelated.

Post edited at 10:18
 Coel Hellier 12 Apr 2020
In reply to Pefa:

> Human nature is to take care of all children and all people equally ...

I suggest that empirical reality is against you on that one. 

Like all communists, you are dreaming of a utopia in which human nature is very different from what it actually is.   E. O. Wilson summed up communism with: "Nice idea; wrong species".

(E. O. Wilson is of course a leading expert in eusocial insects, where the eusocial reproductive system would lead to very different evolved natures.)

OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> OK, yes. I accept your axiom.  A world with less suffering is indeed preferable.  But, it's one of a large number of preferences people have. 

All preferences boil down to a preference for pleasurable conscious states rather than painful ones. There really is nothing else other than the quality of conscious states that can matter. That's what makes you prefer one flavour of ice cream to another, and what makes you prefer Martin Luther King to Richard Spencer. It's not arbitrarily granted primacy, it is, as a matter of fact, the mechanism which evolution endowed us with to motivate our behaviour. 

Unless you're going to go off the wall and start invoking platonic moral realms or god, you haven't got a choice - that's where human behaviour comes from, and so that's where morality comes from. If you're going to stick with a naturalist/materialist worldview, you haven't got any choice but to accept the centrality of the axiom as the root cause of all human behaviour.

> I certainly don't accept any implication that, owing to that axiom, a given human "should" care about the suffering of all humans equally. 

You don't have to. You're perfectly free to come up with reasons why you or any other human should care more about one or another. Then we can have a rational debate about whether your reasons are any good or not.

> Here is another statement: "All other things being equal, a world in which all children have parents (or surrogate parents) who love them and care about them way more than they care about unrelated strangers, is vastly preferable to a world in which all adults cared no more for their own children and family than they do about unrelated strangers in distant lands."

> I think that statement is also obviously true. It's hard to think through the consequences if parents didn't care about their children more than about unrelated strangers, but I don't think it would be a happy world.    It would amount to children being provided for by "the state", rather than by loving families, and sounds to me abhorrent and disastrous. 

Again, you're making a utilitarian argument that the world with the least suffering is one in which parents prioritise the care of their own children. I agree with you entirely, and it looks like you're getting the hang of considering consequences. Well done!

> I suggest this comes from an -- entirely wrong! -- intuitive feeling that there is something badly wrong if morality is "merely" subjective, and that there needs to be some way in which it is "objective", and thus one pursues that end, no matter how alien the end result.

What's wrong with it is that it makes moral progress impossible. It removes any reason that one should strive for a world with less suffering, and it grants power to anyone with the ability to persuade, regardless of the harm (an objective measure of the amount of suffering) they might do. It is, all in all, an absolutely shit approach.

> OK, so you may not be advocating the axiom as being the central "moral" axiom, applying everywhere to everyone, and instead just want policy makers and governments to adopt it.  And yes, you're right, *governments* should not play favourites, and prefer one family's children over another's.  (Families, of course, *should* play favourites and prefer their own children.)  Yes, we can agree that that's a good principle. 

No, it is the central moral axiom, it's just very difficult to do the reasoning when, at the individual level, different people in the world have drastically different effects on the suffering of each other. My loved ones' suffering affects my suffering a lot, but a stranger's suffering has very little effect on mine. It's a practical problem of horribly complicated reasoning so the theory just doesn't help you make decisions.

> If you were to accept that all there is to morality is human feelings, desires and advocacy, then we're pretty much in agreement. 

I accept that entirely. The only thing that matters, that can matter, is the conscious states of people. But there are objective facts about how much suffering there is in the world, and a world with less suffering is better than one with more suffering, because, as we agree, suffering is bad.
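To spell out what I mean by an objective fact here, a toy sketch (the suffering scores are invented; real measurement is obviously far harder): once per-person scores are granted, which world contains less suffering is a mechanical, checkable matter.

# Toy comparison of two world-states by total suffering.
world_a = {"alice": 2, "bob": 5, "chen": 1}  # invented suffering scores
world_b = {"alice": 2, "bob": 3, "chen": 1}

def total_suffering(world):
    # Sum the per-person scores; the axiom says less is better.
    return sum(world.values())

# World B contains less suffering, so, given the axiom, it is the better world.
assert total_suffering(world_b) < total_suffering(world_a)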

> So let's, then, reject the misconceived idea of building ethical theories "on reason", and go with ethical theories built on human nature! 

Well that's about the worst idea I've ever heard. Let's give it a whirl, using Haidt's analysis of human beings' natural moral instincts.

Care: let's have some institutions to protect the vulnerable. Good start.

Fairness: let's punish people who harm others or cheat. Great stuff.

In-group loyalty: let's restrict access to our resources to people of our race or creed. Sounds dodgy.

Respect for authority: let's have a central authority from which one must not dissent. Oh dear.

Sanctity or purity: let's adhere to a set of rituals that symbolise our superiority. No comment.

Oh look, we've just reversed the Enlightenment. If you don't mind Coel, I think I'll give relying on human nature rather than reason a wide berth.

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> All preferences boil down to a preference for pleasurable conscious states rather than painful ones. There really is nothing else other than the quality of conscious states that can matter.

Agreed.   But that really is a preference for **our** pleasurable conscious states rather than painful ones.  We care a lot more about **our** conscious states than we do about the states of random other people. 

And it is a fact that people find immediate awareness of the suffering of their child or family member *vastly* worse than indirect awareness of the suffering of an unrelated stranger in a far-off land. 

> You're perfectly free to come up with reasons why you or any other human should care more about one or another.

We're not under any obligation to present "reasons" for caring more about our children or family members than we do about unrelated strangers!  Nothing requires us to find such reasons prior to caring more about our family members. 

Morality really is not rooted in reasons, it is rooted in values; and nearly all humans value their own children more than they value random other children. 

> What's wrong with it is that it makes moral progress impossible. It removes any reason that one should strive for a world with less suffering, ...

No, it doesn't remove the one central reason -- we might want to!   (Yes, it does remove the "backup" to that, that we are objectively obligated to do so.)

> ... and it grants power to anyone with the ability to persuade, regardless of the harm

It does not "grant" anyone any power at all.  But, yes, de facto, anyone with the ability to persuade others to do harm does indeed have the ability to persuade others to do harm.     (And so, if we don't like that prospect, we need to be vigilant about such people.)

By the way, most of the worst demagogues in history (Hitler, Stalin, Mao, Pol Pot, etc) have been convinced that their system would be ultimately for the best, if only everyone accepted it, and indeed they've thus felt they had a moral duty to impose it, coupled with the ends justifying the means, and bugger the collateral damage along the way.   Being convinced of the superiority of one's own moral system, such that everyone else should adopt it, is actually quite dangerous.

That's why utilitarianism is as dangerous as any other moral-realist theory that thinks it has a moral scheme that should apply to everyone. That can easily turn totalitarian.  

> I accept that entirely. The only thing that matters, that can matter, is the conscious states of people.

More basically, the only thing that matters, the only thing that can matter to me, is my own conscious state. 

And, since people care greatly about their children, awareness of their child's suffering will distress them.  Non-direct awareness of suffering of an unrelated child in a distant land will not distress them nearly as much. 

> But there are objective facts about how much suffering there is in the world, and a world with less suffering is better than one with more suffering, because, as we agree, suffering is bad.

Sure, and -- for most people -- a world in which their own child is happy and content, but 10 children in far-away India are suffering badly, is a better world than one in which the 10 children in India are happy and content, but their own child is suffering badly.

You can point to your axiom about everyone's suffering counting equally as often as you like, but that does not change the fact that (1) it is not in accord with human nature, and (2) the only standing  it has is the advocacy of anyone who chooses to advocate it.

> Oh look, we've just reversed the Enlightenment. If you don't mind Coel, I think I'll give relying on human nature rather than reason a wide berth.

Notable that you reject those ideas based on your evaluation of and dislike of them! 

Post edited at 13:35
OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> Agreed.   But that really is a preference for **our** pleasurable conscious states rather than painful ones.  We care a lot more about **our** conscious states than we do about the states of random other people. And it is a fact that people find immediate awareness of the suffering of their child or family member *vastly* worse than indirect awareness of the suffering of an unrelated stranger in a far-off land. 

True.

> We're not under any obligation to present "reasons" for caring more about our children or family members than we do about unrelated strangers!  Nothing requires us to find such reasons prior to caring more about our family members. Morality really is not rooted in reasons, it is rooted in values

Morality is just how people feel about human behaviour. All kinds of feelings about all kinds of behaviours exist in the world. Some people feel that it is morally right to mutilate the genitalia of young girls. Some people feel it wrong to harm another living creature, whether that's an insect or a person.

I'm proposing an ethical theory which does oblige you to give reasons for your actions - that obligation doesn't exist outside the ethical theory. I'm proposing this ethical theory because rather than endorsing female genital mutilation as perfectly fine if it fits with your values, I believe that there are good reasons it is wrong. These aren't philosophically objective reasons that exist without reference to subjective experience, they are reasons that are objective in the sense that anyone can follow the arguments and verify the facts that they relate to.

Anyone who will join me on the crucial first step, and agree that reason is the way to good answers, can then ditch whatever stupid values they have that allow them to justify mutilating their children, or denying people rights based on skin colour, or any other behaviour that results in unnecessary suffering. 

> No, it doesn't remove the one central reason -- we might want to!  

I'm proposing that using reason is a reliable way to reach a consensus that a world with less suffering is what we all want. I think relying on people's emotions only is a terrible strategy if you have any interest in improving the world.

> It does not "grant" anyone any power at all.  But, yes, de facto, anyone with the ability to persuade others to do harm does indeed have the ability to persuade others to do harm.     (And so, if we don't like that prospect, we need to be vigilant about such people.)

So when someone is trying to persuade us of their policies, shall we analyse what they're saying with reason, or shall we just rely on our emotions to tell us whether we should agree with what they propose?

> By the way, most of the worst demagogues in history (Hitler, Stalin, Mao, Pol Pot, etc) have been convinced that their system would be ultimately for the best

This is a bad argument. An ethical theory that aims to minimise suffering actually causes suffering because Hitler. You can do better than that.

> Sure, and -- for most people -- a world in which their own child is happy and content, but 10 children in far-away India are suffering badly, is a better world than one in which the 10 children in India are happy and content, but their own child is suffering badly.

True. And a world in which no children are suffering is better still.

> You can point to your axiom about everyone's suffering counting equally as often as you like, but that does not change the fact that (1) it is not in accord with human nature, and (2) the only standing  it has is the advocacy of anyone who chooses to advocate it.

You keep changing the axiom. The axiom is that suffering is bad, and it's universal; it doesn't require anyone's advocacy. The idea that everyone's suffering counts equally would be a conclusion you would come to through reason. That does not necessarily imply that you have to put equal effort into reducing everyone's suffering - we could all care for ourselves and our loved ones and reduce suffering perfectly effectively that way. What we would avoid, though, is caring for ourselves and our loved ones in such a way that it causes others to suffer, because reason informs us that others' suffering is, in the grand scheme, just as important as our own.

> Notable that you reject those ideas based on your evaluation of and dislike of them! 

My reason for disliking them is that they lead to a world with unnecessary suffering.

The point is that while you say it's a good idea to allow human nature to take its course without being constrained by reason-based morality, that would actually go against your preferences and values.

Post edited at 14:56
In reply to Jon Stewart:

> Oh look, we've just reversed the Enlightenment. If you don't mind Coel, I think I'll give relying on human nature rather than reason a wide berth.

Great post, Jon.

 Ratfeeder 12 Apr 2020
In reply to Coel Hellier:

> But, your account of moral properties in your previous post was about people's subjective brain states.

Sorry, but no it wasn't. The term "helpful" does not describe a subjective brain-state. It describes a human action. To assert that it describes or refers to a subjective brain-state is simply to make a logical error.

I think a lot of the problem here is that you are using the way we talk about objects and purely physical events (particularly in science) as the exclusive model for "objective" discourse about anything, including actions and persons. The sort of predicates which are appropriately ascribed to the latter imply intentionality and thus have moral connotations. But that doesn't prevent them from objectively describing actions and persons in the "external world". To describe an action as "helpful" (or "unhelpful") is a genuine description, not the expression of a personal feeling or "sentiment". An action is helpful if, and only if, it helps someone.

> Nothing you've said convinces me that there are "moral facts" that hold independently of how humans feel about things. 

OK, that's fine; that's your prerogative. But it does seem to me that you are determined not to be convinced.

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> I'm proposing an ethical theory which does oblige you to give reasons for your actions ...

Which is fine, so long as you accept that the standing of the theory (and thus the obligation to give reasons) derives solely from your advocacy.

> This is a bad argument. An ethical theory that aims to minimise suffering actually causes suffering because Hitler. You can do better than that.

But the communists did have the best of intentions, and had the intention of reducing suffering for everyone.  Just ask Pefa! 

Indeed the utopia they were heading for was so virtuous that the goal justified the means, even if that meant they had to "deal with" those morally defective enough to oppose them.

> The point is that while you say it's a good idea to allow human nature to take its course without being constrained by reason-based morality, ...

No, no, no!! I am *not* saying that.  My meta-ethical scheme is not prescriptive, it is descriptive.  Only humans can produce moral prescriptions. 

Thus my scheme does not say "it is a good idea to allow" human nature to take its course, it says that human nature and human nature taking its course is all there is -- regardless of whether that is a "good idea" or not; regardless of whether we might want more than that.

There is no such thing as "reason-based morality", there is only human-nature-based morality.   And again, that is simply a descriptive statement about how things are.      It is not saying "... and it's good that it is that way", nor "... and this will necessarily lead to good outcomes", or anything such. 

Thus my maintaining that this is the correct account of meta-ethics is not "because that's how I want things to be", but "because that's how things are, whether we like it or not".

Too many people (may I politely suggest, including yourself?) design meta-ethical systems based on how they want things to be, not on how reality actually is.

(For full clarity, it's entirely fine to design ethical systems based on how you want things to be; humans are moral agents, and that's what they do.  But meta-ethical accounts are descriptive accounts of how things are, and whether you like them or not is beside the point. Whether they have the ethical implications that you want them to have is irrelevant; nature is under no obligation to be as you'd like it to be.)

> ...  that would actually go against your preferences and values.

They might well do, yes!    If you think that's relevant, then you are letting your ethics cloud your thinking about meta-ethics.     

Indeed, you are designing your meta-ethics precisely in order to obtain backup for your ethics.  You wouldn't be the only one guilty of that. 

 Coel Hellier 12 Apr 2020
In reply to Ratfeeder:

>> > But, your account of moral properties in your previous post was about people's subjective brain states.

> Sorry, but no it wasn't. The term "helpful" does not describe a subjective brain-state. It describes a human action.

But, as you yourself said, whether something is "helpful" or not is insufficient to determine whether it is "moral" or not.   (We regard helping a woman feed her  starving children as "moral", but regard helping a terrorist as "immoral".)

So, the account of moral properties in your previous post was about people's subjective brain states.

The term "moral" is a report of an evaluation of a human action.  That evaluation is a subjective one, being an evaluation made by a human brain based on its feelings and values. 

In the same way, the term "beautiful" is a report of an evaluation of how something looks.  That evaluation is a subjective one, being an evaluation made by a human brain based on its feelings and values. 

Thus, calling something "moral" is subjective, just as calling something "beautiful" is subjective.

> To describe an action as "helpful" (or "unhelpful") is a genuine description, not the expression of a personal feeling or "sentiment". An action is helpful if, and only if, it helps someone.

Which would be fine so long as any and all instances of "helping" were "moral".  Obviously they are not, as your own posts stated.  While the "helpfulness" of an action could be assessed objectively, the morality of it cannot. 

OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

There is suddenly some enormous confusion here.

I keep saying clearly that I am advocating for an ethical system, utilitarianism. By advocating for utilitarianism, I am implicitly committing to the metaethical position that underpins that system: that morality is concerned with the changes in conscious states caused to humans by their interactions. This differs from an alternative metaethics that says, for example, that morality is a matter of acting in accordance with god's will.

We were discussing whether or not utilitarianism was a good ethical system. You said:

> So let's, then, reject the misconceived idea of building ethical theories "on reason", and go with ethical theories built on human nature! 

And now you're saying

> No, no, no!! I am *not* saying that.  My meta-ethical scheme is not prescriptive, it is descriptive.  Only humans can produce moral prescriptions. 

> Thus my scheme does not say "it is a good idea to allow" human nature to take its course, it says that human nature and human nature taking its course is all there is -- regardless of whether that is a "good idea" or not; regardless of whether we might want more than that.

Which is utterly confusing.

I believe that human beings have evolved with a real mixed bag of moral intuitions that are the way they are because they're adaptive. They're not consistent, they don't form an ethical theory. If you like, you can say, "well who needs an ethical theory anyway, I know what I like and what I don't like, and that's enough for me". Sounds rather lazy to me, but each to their own.

I'm saying, OK, we didn't evolve an ethical system, but we did manage to evolve reason, and we can put that to work on ethical problem solving - which we need to do when we devise policy or are confronted with a moral dilemma. I'm making no claim that anyone is bound by the nature of reality into this reason-based ethical system, I'm just promoting it as a system you might wish to adopt since I believe it will lead to a better future than lazy relativism.

Post edited at 17:12
 Ratfeeder 12 Apr 2020
In reply to Coel Hellier:

> Yep, sure. And all of these things, such as  "helpfulness" are evaluations made by humans.  They depend on what humans want, how they evaluate outcomes, et cetera.   So "moral" language amounts to labels of approval or disapproval by humans.   We approve of someone helping a woman trying to feed her starving children, so call it "moral", and we disapprove of helping someone to carry out a terrorist attack, so call that "immoral".

"Helpfulness" has a value connotation, certainly, either positive or negative depending on the circumstances. And it is a quality, or not, of human action. But whether or not an action has the quality of being helpful is determined by the facts of the situation, not by subjective feelings about the action in question. If an action is helpful to someone then the helpfulness of that action is a fact about the action which 'exists' just as independently of anyone's perception, beliefs, feelings or other attitudes, as the fact that J.F. Kennedy was assassinated in Dallas in 1963.

But you may well ask: in what sense does a fact - any sort of fact - 'exist' at all? What metaphysical status does a fact have? A fact about a physical object is not a physical object. It's a sort of form that lends itself to linguistic expression in terms of subject and predicate, selected from the totality of all possible facts about the object. This implies human intentionality. So how can such a thing exist in the "external" world, independently of human perception and understanding? In an absolute (or Platonic) sense - the sense of reality as it is in itself - it doesn't. But in empirical reality it does, because empirical reality as a whole (as we understand it) includes a subjective element. But it's still reality (for us as human beings), and objectivity is still possible within it - empirically relative objectivity. Absolute objectivity is not possible for us.

> All that is fine. And that all gives you a subjective morality based on human feelings, desires and values.

> It does not give you "moral facts", it does not make moral statements cognitive (because they report a human brain state, a human evaluation of something; they are not truth-apt statements about external facts) and it does not give you moral realism. 

I disagree, for all the reasons given above.

> "Moral Realism (or Moral Objectivism) is the meta-ethical view that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them."

Indeed, but not in a Platonic or "absolute" sense. Maybe some moral realists are Platonic, but most are Kantian.

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> I am implicitly committing to the metaethical position that underpins that system: that morality is concerned with the changes in conscious states caused to humans by their interactions.

Whereas my meta-ethical position is that morality is about and based on human values, feelings and desires.  So, yes, we're not far apart on that.

> Which is utterly confusing.

I'll attempt to clarify.  Meta-ethics is not ethics. Meta-ethics is descriptive. It is about the status of morality (whether there are "moral facts"; whether morality is subjective or objective, et cetera).  

Ethics (in contrast) is prescriptive.   I (as a moral agent, as all humans are) have opinions about how I'd like things to be.  Based on my ethics I might comment on the desirability of utilitarianism.

But, from a meta-ethical point of view, my stance is that there is no such thing as an ethical framework "based on reason", since any such thing would require axioms, and the only standing those axioms have is the advocacy of a human, based on that human's values.  Therefore (my meta-ethics says) there are no reason-based ethical systems, only human-value-based ones.

The last paragraph is entirely descriptive (it describes facts about what morality is).   That doesn't stop me (as a human) also having prescriptive opinions about how I want society to be.

> I believe that human beings have evolved with a real mixed bag of moral intuitions that are the way they are because they're adaptive. They're not consistent, ...

Agreed.

> ... they don't form an ethical theory.

Not if you require an "ethical theory" to be coherent and axiomatic, no. 

> If you like, you can say, "well who needs an ethical theory anyway, I know what I like and what I don't like, and that's enough for me". Sounds rather lazy to me, but each to their own.

I don't think there can be a coherent, axiomatic ethical theory that humans would want to adopt.   That's because human psychology is too complex and "tensioned" to be encapsulated in axioms.

I prefer "tensioned" there to "inconsistent". There are often multiple different "good things" that we want, that are in tension with each other and pull in different directions.  We therefore have to find a balance between tensioned "goods". 

> I'm making no claim that anyone is bound by the nature of reality into this reason-based ethical system, I'm just promoting it as a system you might wish to adopt ...

First, it's not a "reason-based ethical system" since it depends on axioms, and those axioms do not themselves derive from reason (you have accepted that), they derive from human values.   Therefore the meta-ethical pedant in me insists that it's a human-values-based system (like all the others!).

Second, if this ethical system is one whose standing derives from your promotion of it, and from the willing adoption of anyone who wishes (based on their values) -- and is not something to which we are "bound by the nature of reality" to -- then we're agreed on the meta-ethics.   We're basically agreed that: "morality is about and based on human values, feelings and desires".

So are we agreed on the meta-ethics? If so, then we've got a long way. 

At that point I'm also happy to discuss the ethics (as in, what moral prescriptions we want to recommend to society).

OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> But, from a meta-ethical point of view, my stance is that there is no such thing as an ethical framework "based on reason", since any such thing would require axioms, and the only standing those axioms have is the advocacy of a human, based on that human's values.  Therefore (my meta-ethics says) there are no reason-based ethical systems, only human-value-based ones.

As you say, we're not far apart on this dry academic issue of classification. I've put forward an ethical system that uses reason, and adds a single axiom, that suffering is bad. I'm happy to agree that this is a human value, that you can't have ethics without humans values, and so the system is not objective in the philosophical sense of existing independently of subjects.

We discussed the nature of the axiom, and after finding fault with a few different forms of words, you agreed that it's a true description of universal human goals - it doesn't require advocacy because everyone agrees with it merely by the fact of being human.

So you're committed to a rationalist/materialist world view that says reason gives reliable answers about reality; and you accept that the axiom is true in describing universal values and so doesn't require advocacy. There isn't any more to utilitarianism than this. If you want to get off the train now, you've got two options:

- you can change your mind about the axiom, and say that there isn't anything better about a world with less suffering than the same world but with more suffering; or

 - you can ditch reason, and say that while reason gives us the right answers when we talk about everything else, it doesn't work when you talk about human behaviour and feelings, because we're special or magic or something.

It's telling that when you gave critiques of utilitarianism, they were utilitarian arguments! So while nothing binds us all into utilitarianism as a fact of nature - we can believe in god, or in nothing at all - once you commit to a rationalist world view you're not left with much in the way of options.

> So are we agreed on the meta-ethics? If so, then we've got a long way. 

I said at the outset that I wasn't bothered by how a philosopher would classify the ethical system I advocate, only whether or not the system works, which is why I've concentrated on the ethics all along.

> At that point I'm also happy to discuss the ethics (as in, what moral prescriptions we want to recommend to society).

So let's return to our preferred flavours of ice cream and the apparently essentially identical issue of racism.

"I don't want blacks in my shop, and should not have to tolerate them, because I want it that way"

"Well I want it a different way, in which people have equal rights"

How should we decide which is the best policy, if not by employing reason?

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> I'm happy to agree that this is a human value, that you can't have ethics without humans values, and so the system is not objective in the philosophical sense of existing independently of subjects.

OK, so we're basically in agreement on the meta-ethics.

> We discussed the nature of the axiom, and after finding fault with a few different forms of words, you agreed that it's a true description of universal human goals - it doesn't require advocacy because everyone agrees with it merely by the fact of being human.

It is a true statement that PARTIALLY encapsulates human goals.  But it is not a complete account of human goals, and it doesn't necessarily take precedence over other statements of what humans want.  Thus, if you want it to override other human desires ("I care most about my children") then that does need advocacy. 

In particular the statement: "humans do care about all humans equally" is not factually true, and thus the axiom: "humans should care about all humans equally", needs to be argued for, and people can legitimately dissent.

> If you want to get off the train now, you've got two options:

There's a third option.  I can personalise the axiom.  I can agree that *my* suffering is bad, and I can also agree that other people's suffering is bad, but nothing requires me to rate these two as equally bad.  That's just a ludicrous denial of human nature.  So my axiom becomes: "If I'm suffering, that's really bad; the suffering of my family is also pretty bad; I do care about the suffering of other people, to some extent, but to a much lesser extent than I care about that of myself or my family."

And that is not a rejection of your axiom, it's entirely compatible with "suffering is bad", it's just adding the further information that people don't all figure equally in my reckoning. 
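In toy form (the weights and scores are invented purely for illustration), the personalised axiom is the same comparison, just with people weighted unequally:

# Toy sketch: everyone's suffering still counts (the axiom holds),
# but people do not figure equally in my reckoning.
weights = {"me": 1.0, "my_child": 0.9, "stranger": 0.1}

def my_weighted_suffering(scores):
    return sum(weights[who] * s for who, s in scores.items())

outcome = {"me": 0, "my_child": 1, "stranger": 8}
print(my_weighted_suffering(outcome))  # 0.9 + 0.8 = 1.7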

> "I don't want blacks in my shop, and should not have to tolerate them, because I want it that way"

> "Well I want it a different way, in which people have equal rights"

> How should we decide which is the best policy, if not by employing reason?

By employing values and empathy? 

OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> It is a true statement that PARTIALLY encapsulates human goals.  But it is not a complete account of human goals, and it doesn't necessarily take precedence over other statements of what humans want.  Thus, if you want it to override other human desires ("I care most about my children") then that does need advocacy. 

I don't want it to override other human desires. When you say "I care most about my children" you're telling me what causes you the most suffering.

> In particular the statement: "humans do care about all humans equally" is not factually true, and thus the axiom: "humans should care about all humans equally", needs to be argued for, and people can legitimately dissent.

Such an axiom isn't a feature of utilitarianism and doesn't need to be argued for. If I hurt your child, that will also hurt you, but it won't hurt someone a thousand miles away who's got nothing to do with you or your child. We agree that that is how the world works.

> There's a third option.  I can personalise the axiom.

Sure you can, and you're still holding to the axiom and to reason. You still think that the worst possible misery for everyone is as bad as it gets, and the best possible happiness for everyone is as good as it gets. You're still a utilitarian, you haven't got off the train.

> By employing values and empathy? 

What makes values and empathy different to reason?

The value you might employ is that the importance of someone's wellbeing doesn't depend on the colour of their skin. That's just the conclusion you'd get by reasoning about whether different people are equally or differentially deserving of wellbeing. Where did that value come from if it wasn't from reason? Empathy is the emotional experience of understanding that someone else's pleasure and pain is equivalent to your own. How is that different to merely understanding through reason that other people are conscious and experience similar conscious states to you? What makes you say you're using the emotional rather than cognitive component?

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> Where did that value come from if it wasn't from reason?

From our nature, from our evolutionary heritage.

> Empathy is the emotional experience of understanding that someone else's pleasure and pain is equivalent to your own. How is that different to merely understanding through reason that other people are conscious and experience similar conscious states to you? What makes you say you're using the emotional rather than cognitive component?

Put it this way: a sadistic torturer can "understand that someone else's pleasure and pain is equivalent to their own", and can "understand through reason that other people are conscious and experience similar conscious states to themselves", and yet decide to torture someone for enjoyment.    That's the difference between values and reason.

 Gordon Stainforth 12 Apr 2020
In reply to Coel Hellier:

> Ethics (in contrast) is prescriptive.   I (as a moral agent, as all humans are) have opinions about how I'd like things to be.  Based on my ethics I might comment on the desirability of utilitarianism.

Just a point of correction. Ethics is not prescriptive. How can it ever be? It is simply (a synonym for) moral philosophy, which is the study of ethics. You're using the term in two different senses: the general, correct sense, and the way it's often used nowadays in everyday life to refer to 'one's personal ethics', i.e. one's personal moral values.

The study of ethics has an exact parallel with another study of values: aesthetics, which looks at the whole subject of beauty and what that means, while ethics studies the whole subject of what we value in the way people treat each other. It can't prescribe anything, simply because it has already stepped back to analyse and discuss the whole gamut of other people's prescriptions. It never itself prescribes; it simply studies, very critically and deeply, all ethical prescriptions.

Post edited at 20:55
OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> From our nature, from our evolutionary heritage.

That's absolutely not plausible! People were reliably racist for millennia until we developed sufficient reason to overcome the evolved instinct to value your own tribal in-group over others. Our evolutionary heritage is the *cause* of racism, and our development and application of reason (through policy and education) is the cure.

Are you sure you believe what you just said?

> Put it this way: a sadistic torturer can "understand that someone else's pleasure and pain is equivalent to their own", and can "understand through reason that other people are conscious and experience similar conscious states to themselves", and yet decide to torture someone for enjoyment.    That's the difference between values and reason.

No it isn't. The difference between a sadistic torturer and everyone else is that they gain pleasure from torturing people. The rest of us don't, so we've got no motivation to do it.

(You're still a utilitarian, by the way).

 Coel Hellier 12 Apr 2020
In reply to Ratfeeder:

Presuming you're happy to go with this definition:

>> "Moral Realism (or Moral Objectivism) is the meta-ethical view that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them."

> Indeed, but not in a Platonic or "absolute" sense.

(And yes, I'm not advocating Platonism.)

Let's go back to your example of someone helping a terrorist to bomb a city.  To most of us, that is highly immoral.  To a fellow member of ISIS, however, it would be the opposite, to them it is moral.

So which is it?   Who is right?  To me, that's the wrong question: the labelling ("it's immoral") is a report of values, and we really do have different values from the ISIS terrorist.

But to the moral realist, there has to be a fact of matter as to whether "it is immoral", thus the claim "it is immoral" is cognitive and has a truth value (= there is a right answer).  

So, first question: what objective property of the world is the claim "it is immoral" referring to, that gives it the immoral-ness? It can't be the "helping" because all of us agree that the terrorist's co-conspirator helped.  That's agreed; that's not what the disagreement is about.  Instead, the disagreement is whether providing that help was "moral" or "immoral".

My answer, of course, is that "it is moral/immoral" refers to the approval/disapproval of the speaker. But the moral realist cannot give that answer, since under moral realism the statement "it is moral" needs to be true or false "independent of our beliefs and attitudes" about that fact. 

Second question: what does saying "it is immoral" mean?  Under moral realism it must, again, refer to some objective property of the world.   I get that one can answer "it means one shouldn't do it", but then I simply ask what "one shouldn't do it" means in this context. Why shouldn't the co-conspirator have done it? 

The obvious answer to that: "Because it's immoral" just leads back in a circle without explaining anything.

My answer, of course, would be: "The terrorists shouldn't do it, because we would dislike it", but that again amounts to moral language expressing disapproval. And again, that answer is not available to the moral realist, since they need an answer such that the "moral fact" (that "it is immoral") has a truth value that is "independent of our beliefs and attitudes" about that fact.  And, again, the approval/disapproval of the act is very much dependent on one's values and whether one supports ISIS.

So what is the answer?  I still don't know what "it is immoral" is even supposed to mean in the mouth of a moral realist, nor have I gained any clear understanding of what feature of the external world the "immorality" of the act is supposed to refer to.   (It can't be the "helping", since people who agree on that can disagree on the immorality.)  

Of course this failure to grasp it could be my own fault and limitation, but I've tried hard to understand what moral realism could mean.   I've asked multiple philosophically informed people about this over the last decade (including tenured professors of philosophy) and not yet got a clear-cut and sensible answer. (And if one tries reading the academic philosophical literature on it, one just gets sucked into a quagmire, without ever encountering anything clear-cut.)

 Coel Hellier 12 Apr 2020
In reply to Gordon Stainforth:

> Just a point of correction. Ethics is not prescriptive. How can it ever be? It is simply (a synonym for) moral philosophy, which is the study of ethics.

Yes, it's the study of ethics. But, if one studies ethics and arrives at the conclusion "we find that doing X is morally mandated, objectively so", then that is prescriptive.  That's how it can be prescriptive. 

If moral realism were true, there would be "moral facts", and if, as a result of study, one discerns moral facts, then that study is prescriptive, since moral facts are prescriptive. 

From Wikipedia:

"Three major areas of study within ethics recognized today are:

"Meta-ethics, concerning the theoretical meaning and reference of moral propositions, and how their truth values (if any) can be determined

"Normative ethics, concerning the practical means of determining a moral course of action

"Applied ethics, concerning what a person is obligated (or permitted) to do in a specific situation or a particular domain of action."

> The study of ethics has an exact parallel with another study of values: aesthetics. Which looks at the whole subject of beauty, and what that means.

Yes, but no-one (these days) would think that aesthetic judgments are objective, which would be the equivalent of morals being objective.

So, ok, if moral realism is false, then academic study of ethics is not prescriptive. But, roughly half of academic philosophers think that moral realism is true.    Certainly, plenty of them, such as Peter Singer, think that they're doing prescriptive ethics.

 Coel Hellier 12 Apr 2020
In reply to Jon Stewart:

> Are you sure you believe what you just said?

I'll re-phrase: "From our evolved heritage, acted upon by environment and experience and argument and evidence and the influence of others". 

As I've argued up-thread, reason and facts can't generate a value on their own, but they can act on prior values and produce changed values. 

> The difference between a sadistic torturer and everyone else is that they gain pleasure from torturing people.

Isn't that more or less what I said, except using "pleasure" for values?  The sadistic torturer can fully comprehend all the facts and all the reason of the situation, and yet be a sadistic torturer because of their values, because they enjoy it. 

> (You're still a utilitarian, by the way).

I think that utilitarianism is a pretty good -- but partial and incomplete -- description of how people think about morality, yes.  (Along with deontology and virtue ethics, which are also partial and incomplete descriptions of human psychology.)  That's why utilitarianism has long had a large following. 

But, utilitarianism (and the more-general consequentialism) are generally held to be cognitive, moral-realist accounts of morality, and as I've explained at length I reject those things.  (I do get that you're not coming to this from the point of view of formal philosophy.)    

If one could have a non-cognitivist, anti-realist, rule-of-thumb utilitarianism then I'd likely subscribe -- so long as I could dissent from it whenever I didn't like its implications. 

OP Jon Stewart 12 Apr 2020
In reply to Coel Hellier:

> But, utilitarianism (and the more-general consequentialism) are generally held to be cognitive, moral-realist accounts of morality, and as I've explained at length I reject those things.  (I do get that you're not coming to this from the point of view of formal philosophy.)    

> If one could have a non-cognitivist, anti-realist, rule-of-thumb utilitarianism then I'd likely subscribe -- so long as I could dissent from it whenever I didn't like its implications. 

Yes, I haven't quite worked out whether the utilitarianism I advocate is a cognitivist ethical theory or not. On the one hand, it's a theory that allows you to put in nothing but facts about the world, crank the handle, and out come answers like FGM is wrong and social democracy is right. So that tells me that it's independent of culture and preferences, and deals with beliefs that are true or false. But on the other hand, I couldn't get it started without chucking in an axiom that depended on a human value judgement (albeit a universal one).

So does that make it a cognitivist/realist theory, or not? I remain sceptical that it matters either way.

I don't worry about the unsavoury implications that people think utilitarianism comes up with - I haven't seen a single example that isn't just a failure to consider the consequences correctly (or isn't something that on reflection I'm perfectly happy to accept). 

Interesting discussion anyway! 

 freeheel47 12 Apr 2020
In reply to Jon Stewart:

Just to second what Jon has said about Jonathan Haidt and The Righteous Mind. An excellent book on Moral Psychology - which is a great place to start reading if you want to think more about where morality comes from and why Good People Disagree about Politics and Religion.

If you don't want to buy his book, have a quick read of "The emotional dog and its rational tail".

Having 'good intentions' isn't the beginning or end of morality. Hitler and the Nazis had (thought they had - actually they were sure they had) good intentions. Terrorists think they are doing good. 

There is very little (no) evidence that morality is an exercise in reason. If it were, you'd think that people who practised it a lot would be more moral. They aren't. Studying morality does not make you good (from Eric Schwitzgebel): moral philosophers do not behave more morally than their peers in terms of voting, giving to charity, calling their mothers, donating blood, donating organs, cleaning up after themselves in conference halls, or returning books to libraries. See Schwitzgebel, E. (2009). "Do Ethicists Steal More Books?" Philosophical Psychology, 22, 711-725; Schwitzgebel, E., & Rust, J. (2010). "Do Ethicists and Political Philosophers Vote More Often Than Other Professors?" Review of Philosophy and Psychology, 1, 189-199; and many others.

There is very little evidence to show that moral decisions are preceded by moral reasons; in fact, there is a lot of evidence to show that the decision comes first and then the reason:

Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517-523.

Greene, J. D. (2009). The cognitive neuroscience of moral judgment. The Cognitive Neurosciences, 4, 1-48.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108. https://doi.org/10.1126/science.1062872

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.

Haidt, J., Rozin, P., McCauley, C., & Imada, S. (1997). Body, psyche, and culture: The relationship between disgust and morality. Psychology & Developing Societies, 9(1), 107-131. https://doi.org/10.1177/097133369700900105

 Pefa 13 Apr 2020
In reply to Coel Hellier:

> I suggest that empirical reality is against you on that one. 

Yes and no. Do ordinary people, even from capitalist states where people struggle, not help develop other countries medically, educationally and technologically? And in times of emergency? Do poor people not give huge amounts to charity? Is it not in our 'human nature' to help others in every way? 

> Like all communists, you are dreaming of a utopia in which human nature is very different from what it actually is.   E. O. Wilson summed up communism with: "Nice idea; wrong species".

You are using tired and lazy old clichés about socialism which have no bearing on reality. Is the NHS a dreamed-up utopia or a reality? And I'm very happy for you and E. O. Wilson, but surely his anthropomorphism would be better suited to the UK, with our 'everything to protect our Queen' mentality, much like his eusocial insects. 

Edit - Socialism is not workers ruled by a queen, or living in a utopia; it's just living, but with the welfare of the many put first over the riches of a few. Yes, capitalism has its place in the stages of development, just as the ego has its place in our personal stage of development, but that is a place you develop from, not end with. 

Anyway I won't derail this thread onto socialism v capitalism. 

Post edited at 08:04
 Coel Hellier 13 Apr 2020
In reply to Jon Stewart:

> On the one hand, it's a theory that allows you to put in nothing but facts about the world, crank the handle, and out come answers like FGM is wrong and social democracy is right. So that tells me that it's independent of culture and preferences, and deals with beliefs that are true or false.

That in itself does not amount to much.  Here is an ethical theory that also has those properties:

Axiom: "Whatever Fred Blogs says is moral".    Then all we need do is put in nothing but facts ("Fred says X", "Fred says Y"), and hey presto, out come answers (X is moral, Y is moral)!

> But on the other hand, I couldn't get it started without chucking in an axiom that depended on a human value judgement (albeit a universal one).

Yep, it really does all come down to that axiom.  But, more than that, in practice, you'd have to do a vast amount of interpretation of the axiom, and stick in a vast amount of "value judgement" along the way. 

One big problem in basing policy on "outcomes" is that you have to make some evaluation and judgement about what will happen.  And you can't really do control experiments to find out. 

So, suppose Pefa asserts: "we should turn into a socialist utopia, there would be vastly less suffering". You could (or rather couldn't) split society into 10 identical parts, have half follow Pefa's plan and half continue as is, and after 50 years assess the outcomes.    And if, instead, you reply: "well, it didn't turn out that well under Stalin and Mao", Pefa will simply reply: "Well, that wasn't real communism ...".   So, in practice, your scheme is going to be pretty useless, lacking a large dose of omniscience.  

Then your other big problem is how you actually measure the "suffering" you wish to minimise.   Given an objective suffering-ometer, with a read-out of "Alan is currently suffering 6.9 in objective units", you might be on to something.  But all you've got is people's subjective reports.  And we have no real way of comparing the suffering of two different people. 

Just for example, is "man flu" really worse than what women experience, or are men just being big babies?   If, in a divorce, the woman asserts: "If I don't have custody of the children, I'll suffer 30.8 units of suffering, but if he doesn't get custody he'd only suffer 5.2 units", what would you say?   

This, of course, is utterly silly, but if you've adopted a maxim of "whatever minimises suffering is the right thing to do", then you'd need to think like this: in order to minimise a quantity you need to be able to measure it, so that you know where the minimum is. 
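
Spelled out as a procedure, here's a sketch of what the maxim commits you to (the policy names and suffering numbers are pure invention, which is precisely the problem):

policies = {
    "cut_pensions": {"Alan": 6.9, "Beth": 2.0},  # hypothetical units
    "cut_benefits": {"Alan": 3.1, "Beth": 4.5},
}

def total_suffering(outcome):
    # Summing assumes suffering is measurable on a single
    # interpersonal scale -- the very thing in dispute.
    return sum(outcome.values())

print(min(policies, key=lambda p: total_suffering(policies[p])))
# -> cut_benefits, but only given the made-up numbers

Without a suffering-ometer to fill in those numbers, the minimisation step has nothing to operate on.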

What if some religious person declares: "If my religion and my holy book are openly disrespected, I suffer vastly worse than if my hand were held to a hot stove".  What would you reply? "No you don't!" perhaps?  Or "well stop believing it, and the suffering will go away"? 

I suggest that, in practice, your scheme would be pretty much utterly useless, and would -- in the end -- turn into much the same sort of web of value judgements, compromises and tensioning between different "goods" that we seek, which is how society actually operates as it is now. 

 Coel Hellier 13 Apr 2020
In reply to Jon Stewart:

PS I actually have a large amount of sympathy with utilitarianism.   After ditching religious ways of thinking about morality around the age of 11 or 12, I then read Mill and others around the age of 13 and 14, and was immediately enamoured with utilitarianism, which struck me as obviously correct.

By late teens I'd then realised that it wasn't that simple, although I would periodically ponder it and try to make utilitarianism work.

Then, as an adult, I ended up thinking that any account of morality in terms of axioms, reason, logical analysis and truth values (and thus moral realism overall) is just not how nature is.  

What we are is emotion-filled and value-laden animals, seeking our way in the world, living socially, and thus negotiating our interactions with other members of our species, who are also emotion-filled and value-laden animals.  And that involves inevitable tension and compromise between what one person wants and what another person wants.   And out of those interactions and compromises emerges "society".

Further, human psychology is hugely complex and tensioned.  There's no way that it can be modeled by a formal, axiomatic, logical system akin to mathematics.  Seeking anything such for morality is a non-starter.  And since there ain't nobody here but us chickens -- nothing external to humans to tell us what we "should" do -- seeking moral instruction from a reason-based, axiomatic system is as illusory as seeking moral instruction from a god. 

As I often say to religious people: "People do not get their morals from religion; rather, religions get their morals from people". 

Similarly, you do not get your morals from utilitarianism; rather, utilitarianism gets its morals from you.

 1poundSOCKS 13 Apr 2020
In reply to Jon Stewart:

> But on the other hand, I couldn't get it started without chucking in an axiom that depended on a human value judgement (albeit a universal one).

I'm not sure it's universal, or even all that common. Maybe if you ask people, the vast majority will agree in principle, but what people say and what people do are often two different things.

 Pefa 13 Apr 2020
In reply to Coel Hellier:

0/10 troll

Must do better. 🙂

As I said, I don't want to derail this thread onto a socialism v capitalism one, so I will bite my tongue and leave it there, although I will correct you on one point: I wouldn't say for one minute that Stalin's USSR was not socialism. 

Post edited at 10:08
OP Jon Stewart 13 Apr 2020
In reply to Coel Hellier:

> That in itself does not amount to much.  Here is an ethical theory that also has those properties:

> Axiom: "Whatever Fred Blogs says is moral".    Then all we need do is put in nothing but facts ("Fred says X", "Fred says Y"), and hey presto, out come answers (X is moral, Y is moral)!

The theory can take in any set of facts and doesn't require any culturally contingent judgement (because the axiom is universal).

> One big problem in basing policy on "outcomes" is that you have to make some evaluation and judgement about what will happen.  And you can't really do control experiments to find out. 

All you're saying is that policy is difficult. Let's take an example: there is a need to reduce the government deficit following a deep recession. One option would be to make judicious progressive tax rises in areas which you predict would not suppress demand, while cutting spending on pensions (with some means-tested relief for those dependent on the state pension). Another option would be to run a campaign to demonise those on welfare and cut welfare spending for the unemployed, low-paid and disabled. 

I think that one of these policies is preferable to the other, and I have good reasons for this preference. For some policies, you do need to run pilots to see what the consequences are going to be. For others, it's just f*cking obvious.

> This of course, is utterly silly, but if you've adopted a maxim of "whatever minimises suffering is the right thing to do", then you'd need to think like this, and in order to minimise a quantity you need to be able to measure it, in order to know where the minimum is. 

A long way upthread I raised the issue that utilitarianism is a quantitative theory but concerns conscious states which are qualitative. I think this is a real problem (particularly if you don't sign up to identity-theory materialism, which I don't) and actually makes the hedonic calculus impossible in principle, not just in practice. That's among the reasons I'd never suggest that it's possible to actually live by utilitarianism. 

What I maintain though, is that utilitarianism provides a universal rational framework for evaluating policy options. Let's try another one, for fun:

We're at the early stages of a pandemic of an extremely virulent disease with no cure or vaccine and a high mortality rate. We could let it run its course without intervention, aiming for 'herd immunity', but this would totally overwhelm the health service to the point where there were bodies piling up and a large number of healthcare workers would die. Or we could enforce a national 'lockdown' that would do significant economic damage, but would slow the rate of hospital cases so that they could be dealt with in a controlled manner that would not leave healthcare workers and survivors traumatised. Which is the better policy? 

> What if some religious person declares: "If my religion and my holy book are openly disrespected, I suffer vastly worse than if my hand were held to a hot stove".  What would you reply? "No you don't!" perhaps?  Or "well stop believing it, and the suffering will go away"? 

As you say, we don't have the sufferometer to give us the quantitative answer, but as human beings we do have some in-built bullshit-detecting skills and abilities to analyse evidence. We know how people behave when they're suffering, versus how they behave when they feel angry and insulted, and they don't have the same outward signs. If some cartoons which were actually published in a magazine seemed to cause a huge spike in suicides and people seeking help for anxiety and depression, then we might have reason to think that publishing those cartoons was the wrong thing to do. If we just see a load of people shouting and looking angry, we've got reason to call bullshit.

> I suggest that, in practice, your scheme would be pretty much utterly useless

I suggest that, in practice, it is what we actually use to evaluate policy options.

Post edited at 12:25
OP Jon Stewart 13 Apr 2020
In reply to 1poundSOCKS:

> I'm not sure it's universal, or even all that common. Maybe if you ask people, the vast majority will agree in principle, but what people say and what people do are often two different things.

I really think you'd struggle to find anyone who would disagree that, all other things being equal, a world with more suffering is worse than a world with less. Or that literally anything is better than the worst possible misery for everyone. It really is universal.

You're absolutely right, of course, that not a single person on the planet actually behaves in the way that best minimises total suffering, because our behavioural motivations evolved to help our genes reproduce, not with the goal of reducing overall suffering.

 1poundSOCKS 13 Apr 2020
In reply to Jon Stewart:

> I really think you'd struggle to find anyone who would disagree that, all other things being equal, a world with more suffering is worse than a world with less. Or that literally anything is better than the worst possible misery for everyone. It really is universal.

That's what I was saying. Most people would say they agree with the concept of minimal suffering. Because the cost-benefit of what you say is different to the cost-benefit of what you do.

 Coel Hellier 13 Apr 2020
In reply to Jon Stewart:

> I think that one of these policies is preferable to the other, and I have good reasons for this preference. For some policies, you do need to run pilots to see what the consequences are going to be. For others, it's just f*cking obvious.

Well yes, but one doesn't need utilitarianism in order to reason as you just did.   Of course consequences matter, and of course when evaluating the morality of a policy we should consider the consequences.   Do you think that, before Bentham and Mill came along and proposed Utilitarianism, people never thought about consequences and supposed that it simply didn't matter that under one policy everyone starved to death while under an alternative policy everyone was happy?

What Bentham and Mill attempted to do (having rejected a Divine Command account of morality) is turn the obvious and common-sense thinking about consequences that we all do into a formal, moral-realist, prescriptive system complete with "moral facts" and truth values. 

It is the latter that people generally mean by "Utilitarianism", and you have (it seems to me) more or less accepted that it can't be done.

From there you seem to be falling back on the common-sense: "hey guys, you know, thinking about consequences and whether policies lead to happiness or suffering is actually quite a good way of thinking about what is moral and what we want to do".  Well yes, it indeed is.  And in other news the Pope is Catholic and bears poo in the woods (unless they are confined to their caves owing to coronavirus lockdown).

OP Jon Stewart 13 Apr 2020
In reply to Coel Hellier:

> Well yes, but one doesn't need utilitarianism in order to reason as you just did.   Of course consequences matter, and of course when evaluating the morality of a policy we should consider the consequences.   Do you think that, before Bentham and Mill came along and proposed Utilitarianism, people never thought about consequences and supposed that it simply didn't matter that under one policy everyone starved to death while under an alternative policy everyone was happy?

That seems to be the way religious morality works.

I agree that there is a common intuition that consequences matter. However, what I'm defending is not the common sense notion that consequences matter, but the commitment that it is only consequences that matter. This is in contrast to a wishy-washy mish-mashy way of thinking about policies where you start to pull "rights" and "principles" out of the bag when you feel like it.

For example, "OK I can see that it's not going to be very nice for the disabled if you cut their benefits, but you just can't cut pensions. People have a "right" to that money, they paid their taxes their whole lives." This is the kind of commonly practised fallacious reasoning that people use all the time so they can ignore consequences when it's convenient for them to do so.

> It is that latter that people generally mean by "Utilitarianism", and you have (it seems to me) more or less accepted that that can't be done.

It's Sam Harris's trumped-up utilitarianism, The Moral Landscape, that I'm defending - not the hedonic calculus or rule utilitarianism (the reason I'm using the term utilitarianism is to be honest about what it really is: a rehash of old philosophy). This says that there are facts about people's conscious states that reduce to brain states, and what we consider to be "values" actually reduce to facts about conscious states and therefore brain states. As such, there is a possible physical universe which results in the worst possible misery for everyone, corresponding to the lowest possible point on the moral landscape. Morality is a matter of navigating away from valleys to some higher point on the landscape, of which there are a vast array of possibilities. Which is just another way to say that it really is better to live in Sweden than Afghanistan; it's more than a matter of opinion.

Post edited at 13:16
 Ratfeeder 13 Apr 2020
In reply to Coel Hellier:

> Presuming you're happy to go with this definition:

> >> "Moral Realism (or Moral Objectivism) is the meta-ethical view that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them."

> (And yes, I'm not advocating Platonism.)

> Let's go back to your example of someone helping a terrorist to bomb a city.  To most of us, that is highly immoral.  To a fellow member of ISIS, however, it would be the opposite: to them it is moral.

> So which is it?   Who is right?  To me, that's the wrong question: the labeling ("it's immoral") is a report of values, and we really do have different values from the ISIS terrorist.

> But to the moral realist, there has to be a fact of the matter as to whether "it is immoral", thus the claim "it is immoral" is cognitive and has a truth value (= there is a right answer).  

> So, first question: what objective property of the world is the claim "it is immoral" referring to, that gives it the immoral-ness? It can't be the "helping" because all of us agree that the terrorist's co-conspirator helped.  That's agreed; that's not what the disagreement is about.  Instead, the disagreement is whether providing that help was "moral" or "immoral".

> My answer, of course, is that "it is moral/immoral" refers to the approval/disapproval of the speaker. But the moral realist cannot give that answer, since under moral realism the statement "it is moral" needs to be true or false "independent of our beliefs and attitudes" about that fact. 

> Second question: what does saying "it is immoral" mean?  Under moral realism it must, again, refer to some objective property of the world.   I get that one can answer "it means one shouldn't do it", but then I simply ask what "one shouldn't do it" means in this context. Why shouldn't the co-conspirator have done it? 

> The obvious answer to that: "Because it's immoral" just leads back in a circle without explaining anything.

> My answer, of course, would be: "The terrorists shouldn't do it, because we would dislike it", but that again amounts to moral language expressing disapproval. And again, that answer is not available to the moral realist, since they need an answer such that the "moral fact" (that "it is immoral") has a truth value that is "independent of our beliefs and attitudes" about that fact.  And, again, the approval/disapproval of the act is very much dependent on one's values and whether one supports ISIS.

> So what is the answer?  I still don't know what "it is immoral" is even supposed to mean in the mouth of a moral realist, nor have I gained any clear understanding of what feature of the external world the "immorality" of the act is supposed to refer to.   (It can't be the "helping", since people who agree on that can disagree on the immorality.)  

> Of course this failure to grasp it could be my own fault and limitation, but I've tried hard to understand what moral realism could mean.   I've asked multiple philosophically informed people about this over the last decade (including tenured professors of philosophy) and not yet got a clear-cut and sensible answer. (And if one tries reading the academic philosophical literature on it, one just gets sucked into a quagmire, without ever encountering anything clear-cut.)

Thanks for this, it's very considered and considerate. You've obviously given the subject some thought and I apologise if I've taken you to be casually dismissive (some of your comments gave me that impression). I know I'm advocating a position that is notoriously difficult to defend against scepticism, but that's partly why I'm doing it - it's a challenge! I don't want to give up on it if I think the particular arguments being levelled against it can be answered, and so far, I think they can. As for the definition of moral realism, I generally prefer characterisations to definitions and the best characterisation I've found is in the Oxford Companion to Philosophy. Though relatively brief, it would still make this post a bit too long if I quoted it now, so I'll quote it in a separate post.

So, to address the main point you make in your previous post to me, you say (to paraphrase), the moral value of an action would be objectively assessable if its having some objectively assessable property (e.g."helpfulness") guaranteed its moral value ("rightness"). On that we agree. But (unlike the particularists), you assume (like Kant) that such a property can guarantee moral value only if it does so in all possible circumstances. That's where we disagree. I say that, in the case of the woman in the library (case 1), the helpfulness of the action guarantees its rightness, while in the case of the terrorist (case 2), it guarantees its wrongness. That's because the rightness or wrongness consists in the helpfulness, without remainder. There is no moral value over and above or independently intelligible of the helpfulness. In case 1, what makes (or causes) the helpfulness to be "right", as opposed to "wrong", is the specific context - the objectively assessable circumstances surrounding the action. The circumstances in case 2 are such that helpfulness counts against the action rather than for it. It is through the objective assessment of the situation, and all the reasons the agent has for acting in that situation, that the agent is provided with a salient reason for acting in a certain way. This is not to say that no desires or emotions are present, of course they're bound to be, it's just that any independently intelligible desires or emotions do not determine the reason the agent ought to act in one way rather than another.

Now I agree that there are still problems with this account. The agent may have fully assessed the situation in case 2 and still decide to help the terrorist because, say, he's a fellow member of ISIS. And he may believe he's right to act as he does. But the observer can examine all the background beliefs that cause the agent to believe he's right to act in the way he does and decide cognitively whether those beliefs are justified by the facts. Hence, there is potentially a fact of the matter to be discovered concerning the rightness or wrongness of the act. The terrorist's belief that he is right can be questioned by reasoning and potentially be shown to be false.

You may still insist on a non-cognitive account, and really I'm not trying to "convert" you. I just want to make a "good fist", as Jon Stewart would say, of defending moral realism. If you're still convinced by non-cognitivism then I fully respect that and I can see your reasons for doing so. I just think that moral realism in the hands of a philosopher like Jonathan Dancy is a more interesting and substantial way of thinking about morality than reducing it to the level of tastes and sentiments. 

 Ratfeeder 13 Apr 2020
In reply to Coel Hellier:

> Presuming you're happy to go with this definition:

> >> "Moral Realism (or Moral Objectivism) is the meta-ethical view that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them."

Here's the entry on moral realism from the Oxford Companion to Philosophy (1995):

moral realism. The view that moral beliefs and judgements can be true or false, that there exist moral properties to which moral agents are attentive or inattentive, sensitive or insensitive, that moral values are discovered, not willed into existence nor constituted by emotional reactions. Far from being a function of wishes, wants and desires, moral demands furnish reasons for acting, reasons that take precedence over any other reasons. Debate centres on the nature and credentials of moral properties as the moral realist understands them. In what sense are they 'real'? Real, as irreducible to discrete affective experiences of individuals. In this and other respects they share characteristics of 'secondary qualities' of our life-world: filtered by our mentality, but not on that account illusory. They can be well-founded, making a real difference to situations and individuals that possess (or lack) them. Moral realists are arguably justified in displaying the inadequacies of subjectivist moral theories; but less successful so far in developing a convincing positive account of the reality of values.

R.W. Hepburn, University of Edinburgh

Sums up the whole discussion on here really, doesn't it? I know I've picked a difficult task, but that's the challenge of it.

Post edited at 15:11
 Coel Hellier 13 Apr 2020
In reply to Jon Stewart:

> I agree that there is a common intuition that consequences matter. However, what I'm defending is not the common sense notion that consequences matter, but the commitment that it is only consequences that matter.

You're also proposing the axiom that all people's suffering counts equally (or, alternatively, the axiom that someone has to supply a reason why not, otherwise they need to "default" to all people's suffering counting equally). And that's the sticking point here. 

Other than that, most people will indeed be happy to accept that suffering is bad (which is a tautology) and that consequences are what matter.

> For example, "OK I can see that it's not going to be very nice for the disabled if you cut their benefits, but you just can't cut pensions. People have a "right" to that money, they paid their taxes their whole lives." This is the kind of commonly practised fallacious reasoning that people use all the time so they can ignore consequences when it's convenient for them to do so.

But it's not entirely fallacious. To show that, I can translate it into your consequentialist language.  Thus: "You can't just cut their pensions, that would be reneging on a social contract, and that alone would make them unhappy. Further, it would destroy trust in government and in social contracts more generally, and the consequences of damaging that trust would be bad for society in general, since we need such social contracts".   

Indeed, I think that all deontological and virtue-ethics approaches can be translated into consequentialist terms. So yes, let's agree with you that consequences are what matter.***  Again, that is not the sticking point!

***Edit to add: Except that the only way of evaluating consequences is in terms of feelings and values. So what ultimately matters is how we feel about things.   

> It's Sam Harris' trumped-up utilitarianism, The Moral Landscape that I'm defending, ...

Sam Harris sees his scheme as moral realist, and has stuck to that claim under criticism by philosophers.  (As you're likely aware, he initiated an essay competition inviting people to critique him on that point, and got 2000 submissions all explaining why that claim fails.)

> As such, there is a possible physical universe which results in the worst possible misery for everyone, corresponding to the lowest possible point on the moral landscape. Morality is a matter of navigating away from valleys to some higher point on the landscape, of which there are a vast array of possibilities.

If that were the extent of the claim, then, yes; and the Pope is Catholic.    But more relevantly, we then have to pick one of the array of higher points on the landscape.  And, as outlined in a recent comment, your axiom is not going to be much use, you're going to need to do a heck of a lot of making value judgements along the way.

> Which is just another way to say that it really is better to live in Sweden than Afghanistan, it's more than a matter of opinion.

How can you say it's "more than a matter of opinion" when talking about which is "better"?  How can you even define "better" except by asking people their preference?   Of course it's a matter of opinion!  (And there is nothing wrong with it being so!)   And note that your axiom is also a matter of opinion (even if that opinion is shared universally, that doesn't stop it being derived from opinion).

PS Feel free to explain why radical Islamist jihadis living in Sweden might move to Afghanistan to fight the Great Satan, if it's not a matter of opinion whether living there is "better". 

Post edited at 17:11
 Coel Hellier 13 Apr 2020
In reply to Ratfeeder:

From your quote:

"Moral realists are arguably justified in displaying the inadequacies of subjectivist moral theories; ..."

The only "inadequacies" are that it is counter-intuitive.  People are intuitive moral realists, and get rather unhappy if there there is no objective right and wrong. They really want there to be objective right and wrong (preferably one that backs up their own opinion ).  But that unhappiness is not actually an argument, and so they've not really "displayed the inadequacies" of subjectivist moral theories, they've only displayed the counter-intuitive nature of those theories. 

But so what? Quantum mechanics is counter-intuitive, Darwinian evolution is counter-intuitive, Einsteinian relativity is counter-intuitive, and the idea that a merely material brain can be a thinking, conscious entity is counter-intuitive.

Classical mechanics, Cartesian dualism, vitalism, are all far more intuitive but are wrong.  Intuition is an unreliable tool.

" ... but less successful so far in developing a convincing positive account of the reality of values."

Exactly. Moral realists have no idea what moral realism actually is, despite the fact that they (intuitively!) subscribe to it.   I've asked the questions I asked up-thread (21:15 Sun) of enough people (including several professors of philosophy), and never got any sort of sensible answer, so I am now pretty confident that there are no answers. 

OP Jon Stewart 13 Apr 2020
In reply to Coel Hellier:

> You're also proposing the axiom that all people's suffering counts equally (or, alternatively, the axiom that someone has to supply a reason why not, otherwise they need to "default" to all people's suffering counting equally). And that's the sticking point here. 

We've been through this already. I'm not proposing any such axiom. It's just a matter of simple logic that, in the absence of reasons why one person (and therefore their suffering) is more important than another, they are all equally important. You can state a fact about the world - that you care more about your own suffering and your family's than that of strangers - and that's fine, that's how humans work. There's no sticking point.

> Other than that, most people will indeed be happy to accept that suffering is bad (which is a tautology) and that consequences are what matter.

> But it's not entirely fallacious. To show that, I can translate it into your consequentialist language.  Thus: "You can't just cut their pensions, that would be reneging on a social contract, and that alone would make them unhappy. Further, it would destroy trust in government and in social contracts more generally, and the consequences of damaging that trust would be bad for society in general, since we need such social contracts".   

Well then we'd be talking the same language and having a worthwhile discussion about which is the best policy. I would respond that although it's not desirable to renege on that social contract, and that is a valid negative consequence to be considered, it is preferable to cutting benefits for the disabled. I would back this up by saying that I would expect to see suicides of disabled people who simply don't have their basic needs fulfilled, whereas with the alternative I'd only expect to see some disgruntlement of perfectly well off pensioners and that the "breakdown of social contract" would have no wider effect.

My argument would be that by getting the public finances into better shape with the minimum of pain, I would be navigating uphill in the moral landscape. However, by causing the deep suffering of vulnerable people, and by employing the despicable manipulation of public attitudes towards those at the bottom of the socio-economic ladder, David Cameron's policy represents a descent into a deep valley. I would, of course, have the much stronger argument in this case.

> Indeed, I think that all deontological and virtue-ethics approaches can be translated into consequentialist terms. So yes, let's agree with you that consequences are what matter.

> So what ultimately matters is how we feel about things.   

Indeed it is. It's all about the conscious states of people, in particular the degree of suffering and wellbeing.

> Sam Harris sees his scheme as moral realist, and has stuck to that claim under criticism by philosophers.  

And I'm differing slightly by saying that the axiom is a value judgement that's needed to get it off the ground. While I can't answer the criticism that the theory isn't self-justifying without the value judgement, I will make the move of saying that other beliefs we do accept as being objective suffer the same problem of requiring a "leap of faith" assumption of one or more axioms. Science has to brush the problem of induction under the carpet, and assume the existence of an external reality. So the claim that a policy which we can show will increase wellbeing - say the creation of the NHS - is morally right is as good as any other factual claim about the world.

> If that were the extent of the claim, then, yes; and the Pope is Catholic.  

That is the claim, and yes, the Pope is Catholic.

>  But more relevantly, we then have to pick one of the array of higher points on the landscape.

The policy example above illustrates how we use the theory in practice: from where we are now, does the choice lead us uphill or downhill?

> How can you say it's "more than a matter of opinion" when talking about which is "better"?  How can you even define "better" except by asking people their preference?   Of course it's a matter of opinion!

The better set of policies represents a higher point on the moral landscape. That is to say, the fact about the conscious states of the people living in Sweden is that they experience more wellbeing and less suffering than those in Afghanistan. If, when we examined the conscious states of the people of Afghanistan, we found that they were actually suffering much less on average than those in Sweden, then I would be wrong and Afghanistan would have found the best set of policies. Of course we can't actually measure people's conscious states, but we can measure proxies for them: Steven Pinker will tell you all about that.

> PS Feel free to explain why radical Islamist jihadis living in Sweden might move to Afghanistan to fight the Great Satan, if it's not a matter of opinion whether living there is "better". 

Not everyone's conscious state is affected in the same way by the policies of social democracy vs whatever disintegrating theocratic insanity is happening in any given region of Afghanistan. It's perfectly possible that if you're a jihadi, you won't like Sweden one bit, and the social democracy will do your head in. But you're an outlier - the peaks and valleys on the moral landscape represent the aggregated wellbeing in different circumstances. The jihadi is an outlier whose suffering under Swedish policy brings the height of that peak down a tiny amount. 
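
To make the aggregation concrete, here's a toy example with invented wellbeing scores:

from statistics import mean

sweden = [8.0] * 999  # hypothetical wellbeing scores for the population
sweden.append(1.0)    # one miserable jihadi outlier

print(mean(sweden))   # 7.993 -- the peak drops only slightly

One outlier barely moves the aggregate, which is all I mean by bringing the height of the peak down a tiny amount.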

Post edited at 22:13
 Ratfeeder 14 Apr 2020
In reply to Jon Stewart:

> I really think you'd struggle to find anyone who would disagree that, all other things being equal, a world with more suffering is worse than a world with less. Or that literally anything is better than the worst possible misery for everyone. It really is universal.

I'd like to think that were true, but I'm not so sure. Think of far-right isolationist politics and economics in America. It basically says "let's do what's advantageous to us and f*** everyone else", and evidently gets quite a following. They don't care how much suffering there is in the world, just as long as it's not they who are suffering.

 Coel Hellier 14 Apr 2020
In reply to Jon Stewart:

> We've been through this already. I'm not proposing any such axiom. It's just a matter of simple logic that, in the absence of reasons why one person (and therefore their suffering) is more important than another, they are all equally important. 

No, because "importance" is a value judgement, and value judgements don't derive from logic, they derive from human values and human nature.

And this *is* your axiom.  "Suffering is bad" is not an axiom, it's a tautology. (If it wasn't "bad" it wouldn't be "suffering".)   So, really, the axiom at the root of your system is "everyone's suffering counts equally".   But that is alien to human nature, since to nearly everyone their own suffering and that of their family matters way more than that of distant strangers. 

And you can't get your axiom from logic. You have no basis for a demand that humans "should" value everyone else equally unless they present good reasons otherwise. 

> I would respond that although it's not desirable to renege on that social contract, and that is a valid negative consequence to be considered, it is preferable to cutting benefits for the disabled.

And by doing that you'd be getting into the usual political discussions that continue all the time.  Which is exactly what people do.  People advocate how they want society to be, based on their values and how they see things, and we all try to influence each other and push society in the direction we want.

Though you might be attempting: "My particular set of values and trade-offs is not merely my opinion, it has backup as being the objectively right thing to do!".  But then large numbers of other people think that about their set of values and trade-offs.   And these claims are illusory if, as is the case, they depend on axioms whose only standing is the advocacy of the person advocating them. 

One thing I've learned over the years is that those with distinctly-left politics often have trouble dealing with the fact that people have different values.  "Why aren't people agreeing with this? Why aren't they voting for us in droves? I've explained it quite clearly three times. Which bit don't they get? Are they just stupid? Are they just wicked?",  say the Corbynites.

No, they just have somewhat different values. But the hard-left can't cope with that (perhaps partly owing to blank-slatism that says that people don't have intrinsic values or intrinsic variation in values?), so resort to conspiracy theories: "they've all been lied to by the media; they've all been duped by the Russkies".   

Centrists and those on the right, it seems to me, are much better at accepting that people will have a range of values and opinions, and agreeing to accept that and seek necessary compromises.

For all the lauding of "diversity" from the left, they have a really hard time accepting diversity of opinion and diversity of values.  And any moral system where you need only plug in the facts, and out comes the "right" answer that people "should" adopt, is intrinsically authoritarian.

Ooops, I seem to have gone all political.  I put it down to cabin fever!

 Coel Hellier 14 Apr 2020
In reply to Jon Stewart:

> Science has to brush the problem of induction under the carpet, and assume the existence of an external reality.

This is rather off topic, but no, I don't see those as problems, so long as science is not claiming certainty (which it is not).

Sure, we can't formally prove that induction will hold tomorrow, but it is overwhelmingly probable.  Similarly, the "external world" model is vastly more parsimonious than any alternative and thus is overwhelmingly probable.  (I can expound the arguments on request.)

These things cannot be proved formally from first principles, agreed, but science is not like that, it proves things in probabilistic terms based on empirical evidence, and those things are verified parts of the scientific world model, just as other parts are.  

OP Jon Stewart 14 Apr 2020
In reply to Coel Hellier:

> No, because "importance" is a value judgement, and value judgements don't derive from logic, they derive from human values and human nature.

> And this *is* your axiom.  "Suffering is bad" is not an axiom, it's a tautology. (If it wasn't "bad" it wouldn't be "suffering".)   So, really, the axiom at the root of your system is "everyone's suffering counts equally".

I'm surprised that you can't see the difference between these two statements:

"I care more about my own suffering than that of others" and

"My suffering is more important than that of others".

Maybe just read them a couple more times and see if you can work it out?

And I'll repeat: The axiom is "all other things being equal, a world with less suffering is better than a world with more", and you've agreed that it's universal. When I say "suffering is bad", you're right that it's a tautology, I just can't be bothered to write it all out, because ages ago we agreed what it meant and that while it is a value judgement, it's universal, not requiring advocacy.

> Though you might be attempting: "My particular set of values and trade-offs is not merely my opinion, it has backup as being the objectively right thing to do!". 

That's exactly what I'm attempting. I'm saying that once you take a consequentialist approach you have a common language in which to have a rational discussion about the strengths of different policies.

> But then large numbers of other people think that about their set of values and trade-offs.   And these claims are illusory if, as is the case, they depend on axioms whose only standing is the advocacy of the person advocating them. 

And the whole point is that the axiom "all other things being equal, a world with less suffering is better than a world with more" is universal. Which is how we can cut across these differences in values and use reason to persuade people that a social democracy really is a better way to organise a society than a repressive theocracy. We agree that we want to produce the least suffering, and we demonstrate using reason and evidence how that is best done.

We have a problem when we're arguing with someone who won't accept reason, which is pretty close to what you're doing yourself here!

> No, they just have somewhat different values.

In my mind, there are two different types of political disagreements. There is the worthwhile, rational type, where we agree that what we want is a world with less suffering, but we disagree about how to get there. And the more usual type, in which people are just pulling meaningless "rights" and "principles" out of their backsides that they happen to think are magic trump cards, and no progress is possible.

I'm arguing that with careful reasoning on both sides, the former type is always possible; and you're arguing that only the latter type is possible.

> And any moral system where you need only plug in the facts, and out comes the "right" answer that people "should" adopt, is intrinsically authoritarian.

Depends what you mean by "authoritarian". In the sense that science is "authoritarian", then yes. But in the sense that it leads to an authoritarian style of governance, absolutely not. If the right answers that pop out decree that a liberal style of governance, or indeed anarchy, is preferable, then that's what the right answer is. What such an ethical system entails is that moral progress is possible. 

What do you think of the case Steven Pinker makes for reason and progress?

OP Jon Stewart 14 Apr 2020
In reply to Coel Hellier:

> This is rather off topic, but no, I don't see those as problems, so long as science is not claiming certainty (which it is not).

I agree entirely. There is nothing wrong with binning certainty and deductive truth, and taking certain axioms as given - you can do this with morality just as you can do it with science. By doing so, you don't get moral realism (moral truths independent of how we feel), but you do get a type of common-sense objectivity that's the equivalent to that achieved by science.

OP Jon Stewart 14 Apr 2020
In reply to Ratfeeder:

> I'd like to think that were true, but I'm not so sure. Think of far-right isolationist politics and economics in America. It basically says "let's do what's advantageous to us and f*** everyone else", and evidently gets quite a following. They don't care how much suffering there is in the world, just as long as it's not they who are suffering.

It'd be interesting to ask such a person to find out. If they really don't care about the suffering outside their immediate ingroup, how would they delineate their "circle of concern"? Would it include all Americans, I wonder? What about immigrants? What about Americans living overseas? When you pressed them, I think they'd find it easier to agree that if it's not going to affect them at all, it is actually better if someone unrelated to them isn't suffering, and that the worst possible misery for everyone really is a bad state of affairs.

 Ratfeeder 14 Apr 2020
In reply to Coel Hellier:

> Yes, it's the study of ethics. But, if one studies ethics and arrives at the conclusion "we find that doing X is morally mandated, objectively so", then that is prescriptive.  That's how it can be prescriptive. 

> If moral realism were true, there would be "moral facts", and if, as a result of study, one discerns moral facts, then that study is prescriptive, since moral facts are prescriptive.

Strictly speaking, you're wrong on this. Advocating moral realism need only involve advocating the idea that there are moral facts and properties to be discovered in our experience of the human world. It needn't specify what those facts are. Similarly, one can advocate the theory that it's the facts of the world (and nothing else) that are the truth-makers of beliefs and propositions, without specifying what those facts are and thereby advocating particular beliefs. You don't need to prescribe any particular course of action in order to propose that the rightness or wrongness of an action is determined by the morally relevant facts of the situation rather than the agent's desires and feelings (and I can say that without prescribing moral realism). In practice, however, it's difficult to avoid giving examples that appeal to our intuitions one way or the other. But that applies in the analogous case as well. Tarski's recursive theory of truth: "Swans are white" is true if and only if swans are white. Joe's realist theory of ethics: "Killing is wrong" is a moral fact if, and only if, the circumstances of a particular instance of killing fail to justify it.
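
To set the two schemas side by side, in a shorthand notation of my own (so a sketch, not anything from Tarski's text):

$\mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p$   (Tarski's T-schema)

$\mathrm{MoralFact}(\ulcorner \text{killing } k \text{ is wrong} \urcorner) \leftrightarrow \neg\,\mathrm{Justified}(k)$   (Joe's realist schema)

In both cases the schema fixes what would make a claim true without telling you which claims are actually true.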

> So, ok, if moral realism is false, then academic study of ethics is not prescriptive. But, roughly half of academic philosophers think that moral realism is true.    Certainly, plenty of them, such as Peter Singer, think that they're doing prescriptive ethics.

Not necessarily, for the reasons given above.

 Ratfeeder 14 Apr 2020
In reply to Jon Stewart:

Yes, I don't see how anyone could disagree that "the worst possible misery for everyone is a bad state of affairs". Coel would probably say it's a tautology (suffering is bad). But then, if it were, it would be a factual value judgement! Crikey, we can't have that. Actually, I don't think it is a tautology, since "bad" is not semantically identical with "suffering". Is suffering always bad? Not necessarily. It's all about specific circumstances.

I think you're right that very few people want to see others suffer for the sake of it; but some can be very ready to cause the suffering of others for the sake of their own gain, even if it's only a small gain. Others, however, are very different. It all depends how morally sensitive they are.

 Ratfeeder 14 Apr 2020
In reply to Coel Hellier:

> From your quote:

> "Moral realists are arguably justified in displaying the inadequacies of subjectivist moral theories; ..."

> The only "inadequacies" are that it is counter-intuitive. 

I don't see subjectivism as counter-intuitive. I think most people would assume that values are subjective. I think you said something along those lines with reference to aesthetics in another post - these days we don't think of "beauty" as something objective. I think a lot of people find moral realism a bit counter-intuitive; it's difficult to conceive of objectivity that doesn't consist in objects, so what are we even talking about?

I think Logical Positivism has a very intuitive appeal to many students who first encounter it - as it did for me! (They soon learn to replace it with Popper's falsifiability as a philosophy of science, as I'm sure you did.) But it was only when I started studying the later Wittgenstein that I understood what was really wrong with Logical Positivism (that it was based on a false theory of meaning, of how language operates and how meaning is even possible).   

> " ... but less successful so far in developing a convincing positive account of the reality of values."

> Exactly. Moral realists have no idea what moral realism actually is, despite the fact that they (intuitively!) subscribe to it.   I've asked the questions I asked up-thread (21:15 Sun) enough times of enough people (including several professors of philosophy), and never got any sort of sensible answer, that I am now pretty confident that there are no answers.

I think I can now provide an answer (well I know I can actually) if you'll indulge my temerity. It's been a lot of years since I tackled philosophy with any seriousness, and I'd forgotten my later Wittgenstein! If the professors you asked weren't familiar with this, I can understand why they couldn't give you an answer, because I think it's only this that can give one. It was thinking about aesthetics that reminded me of this, so I'll begin with the example you used, "beauty".

As you say, in an important respect aesthetics and ethics are in the same boat, so what one says about the one has bearing on what one says about the other. For Logical Positivists, who are basically Humean, both are intractably subjective. For Hume all talk about them consists of expressions of taste, sentiment and emotion. For Positivists, such talk is essentially meaningless, unless the subjective states in question can be analysed in terms of brain states, since then you have physically existing objects of reference. This is a problem, since ordinarily we don't have access to brain states, and even if we did, we'd need an adequate mind-brain identity theory, which would be an extremely difficult task to achieve. So it looks as if all talk about ethics and aesthetics is meaningless.

Now here's the irony. Logical Positivism was based entirely on one key philosophical text - Wittgenstein's Tractatus! This sets out a theory of meaning - the picture theory of meaning. The Positivists might have expected Wittgenstein to be pleased that a whole philosophical movement had resulted from his work, but actually he was distraught. Its message was the exact reverse of their interpretation of it. It wasn't a work about scientific verification, it was a work about value; about the fact that values, as found in ethics and aesthetics, transcend the ability of language to express them. "What we cannot speak about we must pass over in silence." It was value that was valuable, not the verifiable world of material objects.

But there was something unsatisfactory about not being able to talk about these values at all. After all, people do it all the time, and it seems to mean something. Is it really completely meaningless? Wittgenstein realised there was something wrong with his theory of meaning in the Tractatus. So he set about investigating how language actually operates in the real human world, instead of in some idealized ivory tower.

A word like "beauty", then. How does that work? If I use the word "beautiful" to describe, say, a scene in the Lake District, my primary concern in choosing that word is whether the choice is appropriate. And that really consists in whether I believe it's usual for speakers of the language to use the same word in the same context; and that I can only gain from experience of how other speakers of the language use the word. If it seems to me that the scene would warrant the use of the word by other people, then I'll use that word. So two things have to be taken into consideration. The nature of the scene and the use of the word. If I conclude that the scene warrants to use of the adjective "beautiful", then I'll use it. I might have heard aestheticians trying to analyse what it is about the scene that warrants the use of "beautiful" - a certain symmetry, perhaps? It may well be that there is no single factor that is common to all instances of the word's common usage, but that there's a network of interlinking threads.

But one thing is certain. The word "beautiful" has to refer to things in the "external world" (as opposed to what?), otherwise it would be impossible to learn how the word is commonly used by other speakers of the language. And the same goes for moral language like "good", "better", "right", "should". Thus subjectivism and Humean scepticism are refuted, falsified, once and for all. And such words could never have referred to brain states, because speakers of the language commonly never have access to brain states.

Post edited at 20:36
In reply to Ratfeeder:

Lovely summary of Wittgenstein. More or less how I remember it, being very rusty philosophically, and not having studied him for at least 35 years.

 Coel Hellier 14 Apr 2020
In reply to Jon Stewart:

> I'm surprised that you can't see the difference between these two statements:

> "I care more about my own suffering than that of others" and

> "My suffering is more important than that of others".

> Maybe just read them a couple more times and see if you can work it out?

That tone is unlike you, Jon!   "Importance" is, again, a value judgement, a judgement made by a person. There is no objective "importance". So that sentence should be: "My suffering is more important to me than that of others", which makes it the same as the first sentence. Of course my suffering is not more important to other people. 

> And I'll repeat: The axiom is "all other things being equal, a world with less suffering is better than a world with more", and you've agreed that it's universal.

Yes, I agree with that statement.  But, again, it's pretty much a tautology ("suffering" is tautologically "bad"; less suffering is tautologically "better"). 

It's also pretty vague, too vague to amount to much as stated.  Yes, suffering is "bad". So were the Star Wars prequels!    The latter does not motivate me to do anything about it. 

Reducing suffering is good, in a vague, overall sense, but by assenting to that vague statement, I am not assenting to treating all people's suffering equally, and wanting to minimise it equally. 

If a terrorist accidentally sets off his bomb early, and dies in agony as a result, I respond to his suffering with "aw diddums". I would certainly care vastly less than I would about the identical suffering of an innocent child, caught in the blast if the bomb had gone off as intended. And it would be a billion times less than if that child were my nephew.

So to firm your axiom up and make it concrete (and normative!) you need something like:

Axiom: We should minimise overall suffering, counting each human equally in the aggregate.

And, at that point, your axiom would no longer gain universal assent.
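
To make the two readings concrete, here's a toy sketch in Python (my illustration only - the numbers and weightings are entirely made up):

# The firmed-up axiom: sum suffering, each person weighted equally.
def aggregate_equal(sufferings):
    return sum(sufferings)

# What people actually do: weight others by how much they matter
# to the particular person doing the caring.
def aggregate_personal(sufferings, weights):
    return sum(w * s for w, s in zip(weights, sufferings))

sufferings = [10, 10, 10]          # my nephew, a stranger, the terrorist
my_weights = [1.0, 0.01, 0.0001]   # roughly how I actually care
print(aggregate_equal(sufferings))                 # 30: everyone counts the same
print(aggregate_personal(sufferings, my_weights))  # ~10.1: dominated by my nephew

Assenting to "less suffering is better" doesn't, by itself, commit anyone to the first function rather than the second.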

> In my mind, there are two different types of political disagreements. There is the worthwhile, rational type, where we agree that what we want is a world with less suffering, but we disagree about how to get there. And the more usual type, in which people are just pulling meaningless "rights" and "principles" out of their backsides that they happen to think are magic trump cards, and no progress is possible.

I think you're being unfair to those promoting "rights" and "principles". They do so precisely because they think that adopting them will reduce overall suffering in the world and make it a better place. 

Even in your scheme we'd need these heuristics; we couldn't calculate everything from first principles each time.

> But in the sense that it leads to an authoritarian style of governance, absolutely not

The reason I suggest that your approach has an authoritarian streak is that it consists of -- by your own account -- an axiom that you regard as universal, and facts (which are necessarily universal). It therefore brooks no difference in values.  If someone says they want something different then your system would respond "well you're just wrong".  And "I'm morally right, you're morally wrong, objectively so", is the starting point for most authoritarianism.   Because, if you really think "I'm morally right, you're morally wrong, objectively so", then you feel a moral obligation to correct them and impose the right answer.  The next step is re-education camps.  We already have a wave of this utter intolerance of dissenting opinion sweeping American college campuses. 

Accepting that your moral opinions are just your own opinion, accepting that other people are other people, with their own ideas, with their own (perhaps slightly different) moral values, and accepting that we all need to compromise and get along, is a vastly more tolerant attitude.    That doesn't stop you seeking to promote your own values and seeking to persuade other people -- but it does stop you imposing your own will as morally obligatory.

 Coel Hellier 14 Apr 2020
In reply to Ratfeeder:

> I don't see subjectivism as counter-intuitive. I think most people would assume that values are subjective.

Yes, they'd see "values" as subjective, but would see morality as objective.

> I think Logical Positivism has a very intuitive appeal to many students who first encounter it - as it did for me!

Ditto!

> (They soon learn to replace it with Popper's falsifiability as a philosophy of science, as I'm sure you did.)

Well, Popperian falsifiability is rather over-rated as an account of science. It's easy to poke holes in it. It's sort of true in a rule-of-thumb way, but not much more than that. There are better accounts of how science works. And as for Logical Positivism, I think I can make a pretty good stab at defending it.

I'll respond to the rest of this comment in a bit ...

 Coel Hellier 14 Apr 2020
In reply to Ratfeeder:

OK, here goes.  I must say that I have a hard time with Wittgenstein, especially when he plays all these "language games" in his works.  One can spend hours debating things like the "private language argument", of which 95% is spent trying to figure out what the argument actually is. 

> But one thing is certain. The word "beautiful" has to refer to things in the "external world" (as opposed to what?), otherwise it would be impossible to learn how the word is commonly used by other speakers of the language. [...] such words could never have referred to brain states, because speakers of the language commonly never have access to brain states.

So, let me ask. How about if I say: "I have a headache".  Does the word "headache" refer to something in the external world, or can it refer to a brain state (namely my own)?  If it refers to something in the external world, what on earth would that be? 

How about if I say "I am angry" or "I am happy". Does the "angry" or "happy" refer to something in the external world, or to my brain state? 

Anyhow, speakers of the language do commonly have access to brain states -- namely their own.  Admittedly, they might be making a bit of an assumption if they assume that other people have brain states similar to their own, but it's a fair assumption given how similar we all are, genetically and in most other ways.

And doesn't that give sufficient traction to enable us to develop language about our brain states? If other people are using such words about their own brain states, and children grow up hearing such words and presuming that other people are much like them -- a fair assumption -- then the children learn to use the words about their own brain states.

I'm not convinced there's any practical difficulty here.  But then I never was convinced that the "private language argument" contained any "argument".

 Coel Hellier 15 Apr 2020
In reply to Ratfeeder:

> I think I can now provide an answer (well I know I can actually) if you'll indulge my temerity.

Just on a point of pedantry, by the way, your post does not actually provide an answer to my questions (of 21:15 Sun), it just argues that there must be one (if you accept Wittgenstein's arguments).

 Ratfeeder 15 Apr 2020
In reply to Coel Hellier:

> OK, here goes.  I must say that I have a hard time with Wittgenstein, especially when he plays all these "language games" in his works.  One can spend hours debating things like the "private language argument", of which 95% is spent trying to figure out what the argument actually is. 

> So, let me ask. How about if I say: "I have a headache".  Does the word "headache" refer to something in the external world, or can it refer to a brain state (namely my own)?  If it refers to something in the external world, what on earth would that be?

Wittgenstein's answer to this is beautiful (see paragraphs below). "I have a headache" (first-personal ascription of "headache") is an expression of the qualia of having a headache. It has no truth-value. It is not a reference to anything, including a brain state (how can the sufferer know what his own brain state looks like?). But "Coel has a headache" (third-personal ascription of "headache") is a reference to Coel's behaviour (publicly accessible), which includes Coel's expression "I have a headache", and has a truth-value. Key concept: the epistemic asymmetry between the first and third-personal ascription of mental state terms. 

> How about if I say "I am angry" or "I am happy". Does the "angry" or "happy" refer to something in the external world, or to my brain state?

See above. 

> Anyhow, speakers of the language do commonly have access to brain states -- namely their own.  Admittedly, they might be making a bit of an assumption if they assume that other people have brain states similar to their own, but it's a fair assumption given how similar we all are, genetically and in most other ways.

In an important sense, though, people are their own brain states, or at least their consciousness is. The key question is, how can a person know what his or her own brain state looks like at time t? Answer, obviously, they can't.

> And doesn't that give sufficient traction to enable us to develop language about our brains states?  If other people are using such words about their own brain states, and children grow up hearing such words and presuming that other people are much like them -- a fair assumption -- then the children learn to use the words about their own brain states.

They can't talk about their own brain states because they can't refer to them. That's why children can't learn to use mental state terms by people referring to their own brain states. It can't be done.

> I'm not convinced there's any practical difficulty here.  But then I never was convinced that the "private language argument" contained any "argument".

It's a matter of understanding it sufficiently. Don't forget that the position you are holding was Wittgenstein's own until he himself rejected it.

It certainly helps that you have some familiarity with this stuff; makes it easier to discuss. For the sake of clarity, I'll give an account of what the private language argument is about (to the best of my ability). It's an argument against the possibility of a logically private, meaningful language. A publicly accessible sphere is required for the possibility of meaningful language. The difficult point W is trying to make is that if mental state terms like "pain" or "happy" or "angry" really did exclusively refer to "inner states" (that only the possessor has access to), then the "meaning" of such terms would be impossible for others to learn, since others would have no means of knowing what the speaker means by the use of his terms. W is not saying, as the logical behaviourist does, that use of a mental state term is exclusively a reference to what is publicly accessible (i.e. the sort of behaviour associated with it). If the logical behaviourist were right, there would be no means of associating the behaviour with how the mental state is experienced as qualia. So for W, the associated behaviour provides a publicly accessible criterion for the occurrence of the qualia.

The important concept that comes into play here is the epistemic asymmetry between the first and third-personal ascription of mental state terms. The first-personal ascription of the word "pain" is an expression of the qualia - "I'm in pain". Such first-personal expressions have no truth-value, since it's not possible for the sufferer to be mistaken that he is experiencing the qualia. And they are not references to brain states, because there is no distance between the sufferer and the brain state for reference to take place - the sufferer has no means of knowing what his own brain looks like while he's suffering the pain. The third-personal ascription of the word "pain" to the same individual (Tom, say), on the other hand, is a genuine reference, because it refers to the behaviour of the sufferer, which includes his linguistic expression "I'm in pain". These are the publicly accessible criteria that an observer has for the ascription of the word "pain" to Tom at time t. So "Tom is in pain" has a truth-value, but only because a publicly accessible criterion is available to the observer.

OP Jon Stewart 15 Apr 2020
In reply to Coel Hellier:

> That tone is unlike you Jon!

Haha.

> "Importance" is, again, a value judgement, a judgement made by a person.  There is no objective "importance". So that sentence should be: "My suffering is more important to me than that of others", which makes it the same as the first sentence.    Of course my suffering is not more important to other people. 

There is no difference between your position "there is no objective importance" and what it becomes when you apply the theory: no one person's suffering is more important than another's. We're both saying the same thing.

> It's also pretty vague, too vague to amount to much as stated.  Yes, suffering is "bad". So were the Star Wars prequels!    The latter does not motivate me to do anything about it. 

The combination of a hot stove and your evolved empathy will motivate you - but that's not really the point, as what I'm defending is how this ethical theory works in policy as opposed to motivating individual ethical choices.

> Reducing suffering is good, in a vague, overall sense, but by assenting to that vague statement, I am not assenting to treating all people's suffering equally, and wanting to minimise it equally. 

I'm not asking you to. I'm asking you to agree that the principle of reducing suffering is the universal foundation of ethics. For a rational person, there is no reason, other than to reduce suffering (or increase wellbeing) for us to make moral choices. There is no requirement for you as an individual to treat all people's suffering equally. But, if you're going to make policy choices that affect a whole population, then there is a requirement that you don't differentiate the value of people's suffering by the colour of their skin, or their socio-economic status, or any other factor that you can't give good reasons for.

> If a terrorist accidently sets of his bomb early, and dies in agony as a result, I respond to his suffering with "aw diddums".  I would certainly care vastly less than I would about the identical suffering of an innocent child, caught in the blast if the bomb had gone off as intended.  And it would be a billion times less than if that child were my nephew.

Me too.

> So to firm your axiom up and make it concrete (and normative!) you need something like:

> Axiom: We should minimise overall suffering, counting each human equally in the aggregate.

> And, at that point, your axiom would no longer gain universal assent.

Here's what you misunderstand: when you aggregate up the wellbeing, you don't need to start differentiating those who "deserve" wellbeing, like innocent children, and those who don't, like terrorists. If you give a terrorist a life sentence, then you cause that person suffering, and it does count as much as anyone else's in the aggregate. But, by doing so, you're making a positive contribution by saving all the suffering of potential victims, and by increasing public confidence that the government has a handle on terrorism, plus all the other consequences. This all just follows logically from agreeing that suffering is bad, and from not making any further value judgements about whose suffering matters most.
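
A toy worked example of that arithmetic (made-up numbers, purely illustrative):

# The terrorist's suffering enters the aggregate at full weight,
# but the consequences of sentencing still reduce total suffering.
suffering_caused  = 60    # the terrorist's life sentence, arbitrary units
suffering_averted = 5000  # victims spared, public reassurance, etc.
net_change = suffering_caused - suffering_averted
print(net_change)  # -4940: aggregate suffering goes down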

This ethical theory is all about not making any value judgements except for the axiom. It's consistent with a deterministic view of humanity, where we've chucked free will and moral responsibility out, and we're just dealing with the world as it pans out, trying to reduce the amount of suffering as it does. The terrorist is only a terrorist because of the chain of causes that ended in them being a terrorist. The innocent child is only innocent because they haven't yet had the chance to do anything with bad consequences for others.

I appreciate that this is quite radical, but I'm sure you'd agree that there is no way to make free will (and therefore moral responsibility) consistent with the laws of physics. We are not magic, we are part of the causal chain of the natural universe.

> I think you're being unfair to those promoting "rights" and "principles". They do so precisely because they think that adopting them will reduce overall suffering in the world and make it a better place. 

Well that's fine if you agree that the bottom line is suffering - yes, some such heuristics will be reliable.

> The reason I suggest that your approach has an authoritarian streak is that it consists of -- by your own account -- an axiom that you regard as universal, and facts (which are necessarily universal). It therefore brooks no difference in values.  If someone says they want something different then your system would respond "well you're just wrong".   The next step is re-education camps.

So the type of education "camps" I'd support is teaching the secular humanist values I describe here in schools. If you follow these values, you can see precisely why tearing apart people's communities and trampling on their existing values and systems of rituals is the wrong thing to do.

> Accepting that your moral opinions are just your own opinion, accepting that other people are other people, with their own ideas, with their own (perhaps slightly different) moral values, and accepting that we all need to compromise and get along, is a vastly more tolerant attitude.    That doesn't stop you seeking to promote your own values and seeking to persuade other people -- but it does stop you imposing your own will as morally obligatory.

It stops you giving reasons to justify your values - all you can say is "I like it that way" - which puts your views and Wahhabism on identical footing. You throw out moral progress.

What do you think of the case Steven Pinker makes for reason and progress?

Post edited at 13:50
 Coel Hellier 15 Apr 2020
In reply to Jon Stewart:

> There is no difference between your position "there is no objective importance" and how that translates when you apply the theory - there is no difference in importance between different people's suffering.

There is from the viewpoint of any given person.  For any given person, some people are more important to them than others. And the only viewpoints are those of individual people. 

> The combination of a hot stove and your evolved empathy will motivate you ...

My hand on a hot stove will motivate me to remove it. Ali's hand on a hot stove, somewhere in Iraq, won't motivate me to do anything. (Though I might have a degree of sympathy, if I knew about it.)

> I'm asking you to agree that the principle of reducing suffering is the universal foundation of ethics.

But I don't agree. The "foundation" of ethics is that we are all value-laden creatures owing to our evolved natures.  You are suggesting that there is a reason-based "universal ethics" that is distinct from that, but I don't agree.

Essentially, my agreement with your axiom is the *product* of my values.  It is not the case that my values derive *from* your axiom.  (And I don't think that yours do either!)  So, no, the axiom is not a "foundation" of ethics.  The best it can be is a *commentary* about ethics.  

> For a rational person, there is no reason, other than to reduce suffering (or increase wellbeing) for us to make moral choices.

We are not rational creatures, we are value-laden creatures. 

> But, if you're going to make policy choices that affect a whole population, then there is a requirement that you don't differentiate the value of people's suffering by the colour of their skin, or their socio-economic status, or any other factor that you can't give good reasons for.

That is a value declaration that you declare (and one I might agree with). But it is not from reason, it is from your values, how you want things to be. For starters, whether a reason for differentiating is "good" or not is again a value judgement that a human has to make.

> This ethical theory is all about not making any value judgements except for the axiom.

It is therefore not one that people will subscribe to, since we all have lots of values other than your axiom.  If you declare your axiom to be primary, trumping all other values people might have, then you are again doing that based on your values. 

Just because I agree that "reducing suffering would be good" does not mean that I agree that it is the only consideration when determining behaviour I would regard as "moral".

> I appreciate that this is quite radical, but I'm sure you'd agree that there is no way to make free will (and therefore moral responsibility) consistent with the laws of physics.

That would depend on how one defines "free will" and "moral responsibility".  There is a long tradition of "compatibilism" that makes sense of those concepts in a deterministic world.

> It stops you giving reasons to justify your values - all you can say is "I like it that way" and puts your views and Wahhabism on identical footing. You throw out moral progress.

Not at all.  Values are not fixed, they can be influenced by experience, argument, reason, etc   (As I've said, I'm only denying that values can be derived solely from reason; I'm not denying that reason and facts can influence our values.)  Thus we can seek to persuade each other.

> What do you think of the case Steven Pinker makes for reason and progress?

I think it's true -- we do make progress as judged by ourselves.  One can judge that "progress" from the concept of "what would someone choose if they had experienced both, and were then asked to choose one for the rest of their lives?".

 Coel Hellier 15 Apr 2020
In reply to Ratfeeder:

> Wittenstein's answer to this is beautiful (see paragraphs below). "I have a headache" (first-personal ascription of "headache" is an expression of the qualia of having a headache. It has no truth-value.

So why can't I say the same about "beautiful" in "it is beautiful", where "beautiful" is an expression of the qualia of experiencing beauty?

And why can't I say the same about "morally wrong" in "it is morally wrong", where "wrong" is an expression of the qualia of feeling dislike?

> It is not a reference to anything, including a brain state (how can the sufferer know what his own brain state looks like?)

One can know about one's brain state by *experiencing* it, rather than by looking at it, can't one?

(Indeed I could quote Descartes that that is the only thing one can directly "know".)

> The key question is, how can a person know what his or her own brain state looks like at time t? Answer, obviously, they can't.

Why do they have to "look" at it, rather than know about it from experiencing it?

> They cant talk about their own brain states because they can't refer to them. That's why children can't learn to use mental state terms by people referring to their own brain states. It can't be done.

I'm not convinced by this claim.  So Sam is helping his father with the DIY.   His father hits his thumb with a hammer and dances around yelling "ow, ow, it hurts!".   Five minutes later Sam does the same thing.  

Wittgenstein is trying to argue that the language "hurts" is not referring to any internal state or qualia (like hurting), and that Sam cannot have learned to use language referring to his own qualia, and that the language is only about the dancing around?

So, for example, "red" cannot refer to the qualia of experiencing "redness", even though lots of philosophers use such words to refer to the qualia of experiencing "redness".   (For example, they use the phrase "the qualia of experiencing redness" to refer to the the qualia of experiencing "redness" -- do we think we know what we mean by this phrase?)

> The first-personal ascription of the word "pain" is an expression of the qualia - "I'm in pain". Such first-personal expressions have no truth-value, since it's not possible for the sufferer to be mistaken that he is experiencing the qualia. And they are not references the brain states, because there is no distance between the sufferer and the brain state for reference to take place - the sufferer has no means of knowing what his own brain looks like while he's suffering the pain.

I can tell, this is going to turn into a discussion of what "reference" means, isn't it? Philosophical discussions always go like this, turning into discussions of the most abstract terms.

Why does there need to be "distance between the sufferer and the brain state" for reference to take place?  Why does there need to be a visual ("looks like") component?

And, even if these are needed for the language to "have reference to" the brain state (whatever "reference" means), can't the language still be "about" the brain state?  After all, the sufferer knows about it because he's experiencing it.

And, though the sufferer cannot be mistaken about whether they are in pain, they could be lying.  Isn't that sufficient to give the statement "I'm in pain" a truth value?

Couldn't someone with a sufficiently advanced brain scanner in-principle determine whether someone was actually in pain?

 Coel Hellier 15 Apr 2020
In reply to Jon Stewart:

Just two addenda:

> There is no requirement for you as an individual to treat all people's suffering equally. But, if you're going to make policy choices that affect a whole population, then there is a requirement ...

Where in your axiom is this distinction between what the individual should do and what a policy maker should do?  Either the axiom is the universal foundation of morality or it isn't.

Also:

> This all just follows logically from agreeing that suffering is bad, ... I appreciate that this is quite radical,  ...

If the axiom is "quite radical" in its morality then it can't also be a mundane and universal statement that no-one would disagree with.  

OP Jon Stewart 15 Apr 2020
In reply to Coel Hellier:

> There is from the viewpoint of any given person.  For any given person, some people are more important to them than others. And the only viewpoints are those of individual people. 

When you apply the theory to policy, you are forced to take a wider view. How much you personally care about certain people loses its relevance entirely. I've lost count of how many times I've made this point - each time, you just repeat the obvious fact that individuals care differentially about other individuals. You're just attempting to evade the point that in the policy context I'm defending, individuals' viewpoints become irrelevant - they have to be aggregated, and if you're following reason (as opposed to a different value, say, racism) then each individual is equally important unless you can demonstrate otherwise.

> You are suggesting that there is a reason-based "universal ethics" that is distinct from that, but I don't agree.

And I haven't seen any additional values that don't reduce to minimising suffering that I think are worthwhile or desirable. The reason I think this reason-based ethical system works is because I don't see any advantage from stuffing in other unjustified values.

Maybe you can convince me by giving an example of a value that you want to hold to that isn't ultimately justified by increasing wellbeing - and that's necessary to get answers that look reasonable about how we organise society?

> Essentially, my agreement with your axiom is the *product* of my values. 

Yes, the axiom is a value judgement.

> We are not rational creatures, we are value-laden creatures. 

When we act as individuals, no ethical theory is really going to work, because they don't account for our evolved motivations. But we're talking about policy.

> It is therefore not one that people will subscribe to, since we all have lots of values other than your axiom.

Individuals aren't going to be interested in subscribing to an ethical theory full stop, they're going to act on their evolved instincts. But there might still be objectively better and worse ways to organise society - the better ways being those that result in less suffering.

> Just because I agree that "reducing suffering would be good" does not mean that I agree that it is the only consideration when determining behaviour I would regard as "moral".

So you say, but I think that morality naturally reduces to this. This is why I'd like to see some examples of values you hold that stand independent of wellbeing, and don't reduce to it under a "why regress".

> That would depend on how one defines "free will" and "moral responsibility".  There is a long tradition of "compatibilism" that makes sense of those concepts in a deterministic world.

There is a long tradition of redefining "free will" to mean "choices you make without someone putting a gun to your head"; and they don't give compelling reasons to think that you can be held responsible for actions that are caused by causally sufficient factors you have no control over.

> Not at all.  Values are not fixed, they can be influenced by experience, argument, reason, etc   (As I've said, I'm only denying that values can be derived solely from reason; I'm not denying that reason and facts can influence our values.)  Thus we can seek to persuade each other.

So I'll follow you when you're using reason, but the minute you pull one of your unjustified "values" out of your backside that I don't happen to agree with, you've no hope of persuading me, because by your own admission you've departed from reason and you're just telling me about a preference you have.

You can't have your unjustified preference cake and eat it with persuasive custard on. Ain't gonna work.

> I think it's true -- we do make progress as judged by ourselves.  One can judge that "progress" from the concept of "what would someone choose if they had experienced both, and were then asked to choose one for the rest of their lives?".

I can't see what that choice could possibly be based on other than higher wellbeing. It can't be "values" this time, because we're talking about contrasting societies based on different values and preferring one to another. You can't have your moral progress cake and eat it off your relativist plate I'm afraid.

Post edited at 20:06
 Ratfeeder 15 Apr 2020
In reply to Coel Hellier:

> So why can't I say the same about "beautiful" in "it is beautiful", where "beautiful" is an expression of the qualia of experiencing beauty?

Because the predicate "beautiful" in "it is beautiful" doesn't have me as the subject. It isn't a first-personal ascription. It's a description of "it", whatever "it" is. It would only be an expression of the qualia of experiencing beauty if the sentence were "I am experiencing beauty", in which the subject is "I" instead of "it". This is the same basic error of logic as the one you made regarding Tom and the red fire engine.

A further point to make is that it makes no sense to say "I am experiencing beauty" unless there is something in the "external" world that the experience of beauty is an experience of. The beauty is ascribed to the object, not the subject, even though the experience of it is the subject's. There's no independently intelligible "thing" called "beauty" that you can experience in its own right. Beauty is a quality of the scene, painting or whatever. 

> And why can't I say the same about "morally wrong" in "it is morally wrong", where "wrong" is an expression of the qualia of feeling dislike?

For the same reasons. The predicate "morally wrong" describes an action, not a mental state; it's a quality, or not, of an action. "It is morally wrong" is not a sentence. It doesn't make any sense unless you complete it by saying, for example, "It is morally wrong to lie". In which case "morally wrong" is a property or quality of lying. You could say "This is morally wrong", but you would need a context in which the indexical word "this" refers to something people are doing.

> One can know about ones brain state by *experiencing* it,  rather than by looking at it, can't one?

No. In order to know about one's brain states you would need to be able to describe them in neuro-physiological terms, in which case you would need to see them from the outside, which is obviously impossible. An adequate mind-brain identity theory might enable the identification of which neuro-physiologically described brain states are identical with which experienced qualia (token-token theory maybe), but the qualia of the experience is itself just that - qualia. It's just what it feels like to have that brain state, which doesn't give you knowledge of anything. Hence, the expression of pain as in "I'm in pain" has no truth-value. It's not something I can be mistaken about. Knowledge requires the possibility of error.

> (Indeed I could quote Descartes that that is the only thing one can directly "know".)

Yeah, well that's the nub of the issue isn't it? The whole empiricist tradition was founded on Cartesian assumptions, and Hume brought it to its logical conclusion. Wittgenstein (post Tractatus) concludes that such subjective sensations and feelings are precisely what is logically ruled out as knowledge. Visual and auditory sensations, as cognitively interpreted by the brain, give us knowledge of empirical reality, but in those cases the knowledge is of something outside the sensations, not the sensations themselves in so far as they are independently intelligible; they are epistemic enablers.

> Why do they have to "look" at it, rather than know about it from experiencing it?

Because the experience of it doesn't give knowledge of anything (see above).

> I'm not convinced by this claim.  So Sam is helping his father with the DIY.   His father hits his thumb with a hammer and dances around yelling "ow, ow, it hurts!".   Five minutes later Sam does the same thing.  

> Wittgenstein is trying to argue that the language "hurts" is not referring to any internal state or qualia (like hurting), and that Sam cannot have learned to use language referring to his own qualia, and that the language is only about the dancing around?

No, that's the Logical Behaviourist view. For W the "dancing around" is the publicly accessible criterion for the third-person ascription of "hurts". The first-personal ascription of "hurts" is an expression, not a reference.

> So, for example, "red" cannot refer to the qualia of experiencing "redness", even though lots of philosophers use such words to refer to the qualia of experiencing "redness".   (For example, they use the phrase "the qualia of experiencing redness" to refer to the the qualia of experiencing "redness" -- do we think we know what we mean by this phrase?)

We only know what we mean by these phrases because we refer to the publicly accessible criteria associated with them. As human beings we have the ability to make that association - the awareness that others have minds (an idea that Descartes seemed to struggle with).

> I can tell, this is going to turn into an a discussion of what "reference" means, isn't it?  Philosophical discussions always go like this, turning into discussions of the most abstract terms.

It's a logically important distinction. Vital if you're going to get a handle on this. Expression comes from the person without reference to anything; the sensation expressed occupies the same space as the consciousness required to express it, so there's no distance. Reference to it isn't possible because reference is a publicly accessible act. A person refers to something in the world. Hence, distance is required.

> Why does there need to be "distance between the sufferer and the brain state" for reference to take place?  Why does there need to be a visual ("looks like") component?

If you want to refer to a brain state then you need to be able to describe it in neuro-physiological terms. The brain is a physical object with physical properties. So referring to a brain state is to refer to a physically describable state of the brain. Otherwise it's a state of consciousness which can't be referred to for the reasons given above.  

> And, even if these are needed for the language to "have reference to" the brain state (whatever "reference" means), can't the language still be "about" the brain state?  After all, the sufferer knows about it because he's experiencing it.

> And, though the sufferer cannot be mistaken about whether they are in pain, they could be lying.  Isn't that sufficient to give the statement "I'm in pain" a truth value?

Only from a third-personal perspective. It doesn't have a truth-value for the person who says it, whether he's lying or not. That's because he can't lie to himself. It isn't a question of believing or disbelieving whether or not he himself is in pain. He just is (if he is) or isn't (if he isn't). He can't fool himself.

Remember the key concept: the epistemic asymmetry between the first and third-personal ascription of mental state terms.

> Couldn't someone with a sufficiently advanced brain scanner in-principle determine whether someone was actually in pain?

Of course. Then the physically describable brain state takes the place of the publicly observable behaviour or linguistic expression (because the brain state is now publicly observable). So it's third-personal ascription, not first-personal.

Post edited at 20:18
 Coel Hellier 15 Apr 2020
In reply to Jon Stewart:

> You're just attempting to evade the point that in the policy context I'm defending, individuals' viewpoints become irrelevant

But if you're talking about a universal axiom that is the foundation of morals, then it would apply just as much to individuals, to families, to small groups, as to society-wide policy makers.

> Individuals aren't going to be interested in subscribing to an ethical theory full stop, they're going to act on their evolved instincts.

Really?? Most of the discussion of morality over the ages has been about the individual.  "How should I act?"  "Should I give to charity?"  "Is John a moral person?"  In philosophical terms, the whole "virtue ethics" approach to ethics is one centered on the person. 

> There is a long tradition of redefining "free will" to mean "choices you make without someone putting a gun to your head"; and they don't give compelling reasons to think that you can be held responsible for actions that are caused by causally sufficient factors you have no control over.

Under compatibilism, being held "morally responsible" is about deterrence. We hold people responsible (= threaten to sanction them) as a means of influencing their behaviour. This concept works under determinism.

> You can't have your unjustified preference cake and eat it with persuasive custard on. Ain't gonna work.

De facto it does work!  People can indeed be influenced (as lots of people from MLK to Mao have demonstrated).

> I can't see what that choice could possibly be based on other than higher wellbeing.

An individual would make the choice based on their individual preference. That's a different concept from a society-wide aggregate of a "wellbeing" metric.

 Coel Hellier 15 Apr 2020
In reply to Ratfeeder:

> Because the predicate "beautiful" in "it is beautiful" doesn't have me as the subject. It isn't a first-personal ascription. It's a description of "it", whatever "it" is. It would only be an expression of the qualia of experiencing beauty if the sentence were "I am experiencing beauty", in which the subject is "I" instead of "it".

One rather endearing trait of philosophers is the way they way, way over-analyse language. It's as if they think: "we're not scientists, so we won't do observations and experiments, so what are we left with? We're left with intuition and language, so we'll analyse those and treat them as primary", even though they are both utterly fallible.

But, language is not a formal logical system that sustains such analysis.  It's a messy, inconsistent, cobbled together system that has evolved over time, and is full of "history" and the misconceptions of past generations. 

So we get the Frege-Geach "refutation" of non-cognitivism. This goes: "The language we use to talk about morals is cognitivist; therefore non-cognitivism is false". One academic philosopher once tried to persuade me that this was a knock-down refutation of non-cognitivism!

It's like arguing against Copernicus: "But we call it a sunset, so it is the *sun* that is "setting", so the sun must be the one moving around a stationary Earth".

It's like pointing to the use of "soul" in popular language as a refutation of materialism and a defence of Cartesian dualism. 

So, sorry, I don't buy your argument above.  A person *could* just as well say: "I am experiencing beauty" rather than "it is beautiful". It is easy to imagine a language in which that is indeed how it would be phrased (in fact I'd bet there are some).

Yes, we need to use language to communicate, but any argument about the real world based on mere linguistics should be ignored unless backed up by a proper argument! That goes for many of the "language games" played by Wittgenstein and his ilk. 

> A further point to make is that it makes no sense to say "I am experiencing beauty" unless there is something in the "external" world that the experience of beauty is an experience of.

Well no.  There is indeed a thing in the real world that the photons are bouncing off, but the "experience of beauty" is an internal experience.

> The beauty is ascribed to the object, not the subject, ...

Not if beauty is "in the eye of the beholder" -- literally.  If a mother can regard her child as the most beautiful of all children, while others regard the child as rather plain and  perhaps a bit ugly, then the "beauty" is not really a property of the object, but more of the subject-object interaction (whatever language "ascribes" it to).

> There's no independently intelligible "thing" called "beauty" that you can experience in its own right.

Hmmm, by sticking electrodes into the right places in a brain, you might be able to stimulate it such that the brain experiences "beauty" without there being an external object being looked at.   Indeed, how about a dream involving a beautiful person?  Does that qualify as the brain generating "beauty" all on its own?

> Beauty is a quality of the scene, painting or whatever. 

It is, at the minimum, a given person's reaction to that scene or painting. Dung beetles are going to find different things beautiful than we do.

> For the same reasons. The predicate "morally wrong" describes an action, not a mental state; it's a quality, or not, of an action. "It is morally wrong" is not a sentence. It doesn't make any sense unless you complete it by saying, for example, "It is morally wrong to lie". In which case "morally wrong" is a property or quality of lying.

Here you are being a philosopher, analysing language. Yes, I agree, the language we use to talk about morals presumes cognitivism and moral realism. That shows only that, over the history in which language developed, most people thought that morality was cognitivist and moral realist. This, though, no more proves the underlying reality than talk of "souls" proves dualism.

> No. In order to know about one's brain states you would need to be able to describe them in neuro-physiological terms, in which case you would need to see them from the outside, which is obviously impossible.

Balderdash!  We're either misunderstanding what each other means by "brain state" or this is utter poppycock.  By "brain state" I mean only a state that a brain is in.  Being "happy" is a state my brain can be in.  I can know when I'm happy!    A 5-yr-old can report his brain state as "happy" or "sad" or all sorts of things without knowing what "neuro-physiological" means, never mind knowing any neuro-physiological terms. 

To be continued ...

 freeflyer 16 Apr 2020
In reply to thread:

Your colourless green ideas sleep furiously.

But thank you for a rare chance to puncture Chomsky's inflated ego, and an interesting thread. Long may it continue!

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> Balderdash!  We're either misunderstanding what each other means by "brain state" or this is utter poppycock.  By "brain state" I mean only a state that a brain is in.  Being "happy" is a state my brain can be in.  I can know when I'm happy!    A 5-yr-old can report his brain state as "happy" or "sad" or all sorts of things without knowing what "neuro-physiological" means, never mind knowing any neuro-physiological terms. 

I can see very well that what you mean by "brain state" is "mental state", whereas what I mean by "brain state" is a physical state of the physical brain, and what I mean by "mental state" is a state of consciousness. You make the leap from mental state to brain state without understanding that to do so you need a mind-brain identity theory. Why do you think mind-brain identity theories exist?

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> Just on a point of pedantry, by the way, your post does not actually provide an answer to my questions (of 21: 15 Sun), it just argues that there must be one (if you accept Wittgenstein's arguments).


Well actually I already had provided an answer in earlier posts. I said it was the circumstances (facts) of the case that determine whether (e.g.) being "helpful" is a moral or an immoral property of an action; that it's a moral fact that being helpful is moral in case 1 and immoral in case 2. You were unable to accept that because you're a Humean subjectivist. But if a sufficient understanding of Wittgenstein can show that Humean subjectivism is false (because it's incoherent and based on an error of logic), then you would have no grounds for not accepting it.

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> One rather endearing trait of philosophers is the way they way, way over-analyse language. It's as if they think: "we're not scientists, so we won't do observations and experiments, so what are we left with? We're left with intuition and language, so we'll analyse those and treat them as primary", even though they are both utterly fallible.

> [...]

Philosophers of the western, analytic tradition generally have the deepest respect for science, and are often scientists themselves. To say that science provides us with the most accurate and comprehensive  method of gaining knowledge of the world would be an understatement for them. As philosophers, they're not trying to provide an alternative to science, far from it. Analysing language in the way Wittgenstein does is a method of scrutinising the coherence or otherwise of philosophical positions, not of scientific positions, and this is a question of logic. Hume was a philosopher, not a scientist. It may well be that Hume's philosophy appeals to many scientists, but to scrutinise the coherence or otherwise of Hume's philosophy we need to treat it as philosophy, not as science. And that requires the methods of philosophy (i.e. the logical analysis of language), not the methods of science. This is the serious purpose behind Wittgenstein's concept of "language games". If any philosophical position, including Hume's, is found to be incoherent by philosophical methods, then scientists are better off not subscribing to it. They can easily abandon it. They don't need it.

 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

> Why do you think mind-brain identity theories exist?

As a hold-over from Cartesian dualism, I would guess?

> I can see very well that what you mean by "brain state" is "mental state", whereas what I mean by "brain state" is a physical state of the physical brain, and what I mean by "mental state" is a state of consciousness.

So one state the physical brain could be in is with sufficient alcohol in the blood stream that its physical functioning is affected. 

Would you agree that a typical human would "know" whether that is the case, simply through their awareness of their body? 

In the same way, a human would "know" aspects of the state of their stomach (hunger, bellyache, etc) through that awareness of the body.

This is not really any different from how we "know" anything else, through awareness of one's sensory data.     The reason I "know" there's a bird sitting on a tree singing, just outside my window, is from awareness of the state of my sense organs (eyes and ears).

If you're going to discount this stuff as "knowledge" then we're not capable of knowing anything.

 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

> Well actually I already had provided an answer in earlier posts. I said it was the circumstances (facts) of the case that determine whether (e.g.) being "helpful" is a moral or an immoral property of an action;

But that's a complete non-response! Just saying that it's the "facts" of the situation that determine whether the action was "moral" doesn't tell us anything at all.

The moral realist needs to present a proper account such that a Martian scientist, who as yet knows little about human "morality", could use that account and analyse the facts and arrive at an independent verification of whether the action was "moral".

> ... that it's a moral fact that being helpful is moral in case 1 and immoral in case 2.

That's just an assertion of moral realism, that there must be such "facts". It does nothing to explain *why* the action in case 1 was moral, nor does it explain what "... is moral" even means.

> You were unable to accept that because you're a Humean subjectivist.

No, I didn't accept the answer because there's no answer there.  It's merely an assertion that there is an answer.

OP Jon Stewart 16 Apr 2020
In reply to Coel Hellier:

So how about those values that don't reduce to wellbeing?

> But if you're talking about a universal axiom that is the foundation of morals, then it would apply just as much to individuals, to families, to small groups, as to society-wide policy makers.

The classic thought experiments such as the trolley problem and Singer's shallow pond show us that individuals have moral instincts that contradict what they say they believe when asked about ethics. These thought experiments expose us as moral hypocrites, they don't show us to hold a diversity of noble values that help us decide between "competing goods". As such, you can't test an ethical theory by asking "does it chime with all our instincts?" - no theory will, because evolution did not give us consistent moral instincts. What we're trying to do with an ethical theory is to construct something that when we step back and analyse rationally, we say, "yes, that works".

I'm arguing that some ways to organise society are objectively better than others because they result in less suffering and more wellbeing. If it's the case that social democracy is better, as a matter of fact, than Wahhabism, then we have strong evidence for objective morality. Does the theory we've used at the policy level work well at the individual level, so that we'd all sign up to living our lives that way? No, not really. But does the theory work in principle at all levels, even if we can't work out what it tells us to do, and we might not like the answers? Yes, I think it does.

So we can give up at the individual level, and see if we can show that social democracy really is better than Wahhabism, because that's going to give us an answer about whether there is such a thing as reasoning your way to an answer about morality, rather than just stating your preference.

> Really?? Most of the discussion of morality over the ages has been about the individual.  "How should I act?"  "Should I give to charity?"  "Is John a moral person?"  In philosophical terms, the whole "virtue ethics" approach to ethics is one centered on the person. 

Yes, and as we agreed, when you don't know about evolution, you're wasting your time.

> Under compatibilism, being held "morally responsible" is about deterrence.  We hold people responsible (= threaten to sanction them) as a means of influencing their behaviour.  This concepts works under determinism.

In other words, compatibilists don't actually believe in moral responsibility or free will, they just redefine them. The brand of utilitarianism I'm arguing for is consistent with a deterministic world view and doesn't require us to judge people as responsible for actions that they had no say over the causes of.

> De facto it does work!  People can indeed be influenced (as lots of people from MLK to Mao have demonstrated).

Well yes, you can persuade people by appealing to their emotions, or you could bribe them, threaten them or trick them. All are ways of persuading people. I'm arguing that a superior way of persuading people is to use reason. Reason's only going to work if there is some objective answer that you can use reason to uncover - you can't use reason to show why someone else should align with your unjustified preference. 

So, when we persuade people with reason, rather than demagoguery, we settle on ideas that - because we got there through reason - make for moral progress. Some ideas are objectively good ideas, e.g. civil rights. These ideas tend to stick. To pick another one from the left, Attlee and the NHS - this is an objectively good idea, and it's stuck. But also, capitalism has delivered better results than the alternatives, has contributed to moral progress, and has stuck. Shame about poor old Mao, isn't it?

> An individual would make the choice based on their individual preference. That's a different concept from a society-wide aggregate of a "wellbeing" metric.

No, it's the same concept. You agree with me and Pinker on the case for reason and progress. Pinker uses proxy measures of wellbeing to justify that progress, and you're trying to agree with him while using a justification based on preference. What justifies the claim of progress is not just the preference of a randomly picked person (given the chance to experience two alternative societies, say Wahhabist and social democratic). You'd have to repeat the preference experiment over and over until you had the statistical result that, apart from a few outliers, people generally prefer living in the social democracy. What you've done here is precisely the same as aggregating wellbeing.

You cannot have your moral progress cake and eat it off your relativist plate.
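As a side sketch (my toy numbers and names, purely illustrative, not a real dataset): repeating the one-person preference experiment and tallying the verdicts is computationally the same operation as aggregating a wellbeing score across the population.

import random

def preferred_society(wellbeing_pair):
    # One run of the thought experiment: a person samples life under both
    # arrangements and reports which they prefer.
    social_democracy, wahhabism = wellbeing_pair
    return "social democracy" if social_democracy > wahhabism else "Wahhabism"

# Hypothetical population: each person's wellbeing under each society, with
# noise so that a few outliers genuinely prefer the other arrangement.
population = [(random.gauss(7, 1), random.gauss(3, 1)) for _ in range(10_000)]

# Route 1: repeat the preference experiment and tally the verdicts.
share = sum(preferred_society(p) == "social democracy" for p in population) / len(population)

# Route 2: aggregate wellbeing directly.
total_sd = sum(a for a, _ in population)
total_w = sum(b for _, b in population)

print(share)               # close to 1: apart from a few outliers...
print(total_sd > total_w)  # True: same data, same verdict

Both routes run over exactly the same per-person numbers; the tally is just a coarser way of summing them.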

Post edited at 11:16
 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

> Analysing language in the way Wittgenstein does is a method of scrutinising the coherence or otherwise of philosophical positions, not of scientific positions, and this is a question of logic.

OK, but the true nature of morality is a scientific position, not a "philosophical one".  Morality is about biology, it's about our evolved human nature.  It's a branch of evolutionary psychology.  Thus, being a scientific position, it should be assessed on scientific grounds.

And if, as a result, we conclude that human language is misleading (as is the term "sunset") then ok.

> Hume was a philosopher, not a scientist. It may well be that Hume's philosophy appeals to many scientists, but to scrutinise the coherence or otherwise of Hume's philosophy we need to treat it as philosophy, not as science.

I'm only pointing to Hume as a nod of respect to one of the first people to get the answer right.  But I'm not arriving at that conclusion from an assessment of Hume's arguments -- I arrive at the position essentially from a consideration of evolutionary biology, post Darwin. 

Thus, it doesn't actually matter to me whether Hume's account is fully watertight in terms of a philosophical analysis using philosophical tools ("the logical analysis of language"), what matters is whether the account of morality is the truth about the way reality is -- and science and its tools are the best way of assessing that.

Of course I'm willing to allow that philosophy can help with that process, but we need to allow for the possibility that human language (a cobbled together mess) is highly misleading on the topic. 

 Tom Walkington 16 Apr 2020
In reply to Jon Stewart:

I am enjoying this thread. It is helping me get through the 'lockdown'. I find some of it hard going, but I get the general gist.

 Coel Hellier 16 Apr 2020
In reply to Jon Stewart:

> So how about those values that don't reduce to wellbeing?

Again, you're pointing to tautologies.  Yes, the only things that people care about are the things they care about. And you can take all the things that people care about and wrap them up in the term "wellbeing".  That's a tautology.

As I've said, that's not the axiom at the root of your system (tautologies cannot be). Your system requires other axioms, such as "we should treat everybody's wellbeing as equally important".

That does not follow logically from anything else (certainly not from anything that I've given assent to), it's an axiom.

> no theory will, because evolution did not give us consistent moral instincts.

Agreed.  Why is that a problem? 

> What we're trying to do with an ethical theory is to construct something that when we step back and analyse rationally, we say, "yes, that works".

It's what *you* are trying to do.  You are trying to construct a rational, coherent, self-consistent moral system. That's ok if that's what you want to do. 

I don't want to do that.  Since, as you've just stated, "evolution did not give us consistent moral instincts", your scheme will *not* align very well with what I want.  And I'd prefer to stick with what I want, rather than adopt your moral scheme (whose only standing is your advocacy).

> In other words, compatibilists don't actually believe in moral responsibility or free will, they just redefine them.

The compatibilist account of those terms has a pretty long history, so I don't necessarily accept that the dualist account of those terms is the primary one.

> Well yes, you can persuade people by appealing to their emotions, or you could bribe them, threaten them or trick them. All are ways of persuading people. I'm arguing that a superior way of persuading people is to use reason.

Better in the sense of being more effective?  Hmm, not so sure.  Anyhow, the most effective way is to use all of those, including reason. Why deny yourself a tool?

> Reason's only going to work if there is some objective answer that you can use reason to uncover

You can use reason to lever on people's values and feelings.  That is, de facto, how it works.  No-one reasons from factual moral axioms, however much utilitarians wish they would. 

> - you can't use reason to show why someone else should align with your unjustified preference. 

Which is why you use reason in combination with appeals to value and to feelings.

OP Jon Stewart 16 Apr 2020
In reply to Tom Walkington:

I can't follow anything once Wittgenstein is involved. I quite recently sat through a couple of lectures by a really eloquent, engaging philosophy professor, aimed at the enthusiastic layperson just like myself.  Couldn't understand a word.

 Coel Hellier 16 Apr 2020
In reply to Jon Stewart:

> I can't follow anything once Wittgenstein is involved.

You're not missing anything -- unless it is the case that deep truths about the nature of reality can be discerned from an in-depth analysis of language. 

Followers of Wittgenstein just assume that that's the case (because that's a basic presumption of the philosophical method).

OP Jon Stewart 16 Apr 2020
In reply to Coel Hellier:

> Again, you're pointing to tautologies. 

I'm really not. I'm asking you to back up your claims:

"It is therefore not one that people will subscribe to, since we all have lots of values other than your axiom."

"Just because I agree that "reducing suffering would be good" does not mean that I agree that it is the only consideration when determining behaviour I would regard as "moral""

What are these additional considerations?

> As I've said, that's not the axiom at the root of your system (tautologies cannot be). Your system requires other axioms than that such as "we should treat everybody's wellbeing as equally important".  That does not follow logically from anything else (certainly not from anything that I've given assent to), it's an axiom.

I think that in the absence of good reasons why one person's wellbeing is more important than another's, it's just the case that each should be treated equally - to do otherwise would be unreasonable. You just don't think being unreasonable is a problem, since there are no reason-based ethics. That's your anything-goes relativism showing.

> Agreed.  Why is that a problem? 

It's a demonstration that we should look for evidence for objective morality at the policy, not the individual level.

> I don't want to do that.  Because since, as you've just stated, "evolution did not give us consistent moral instincts", that means that your scheme will *not* align very well with what I want.  And I'd prefer to stick with what I want, rather than adopt your moral scheme (whose only standing is your advocacy).

And that's absolutely fine if you're happy to be consistent and embrace the conclusions moral relativism entails. But you're not, you want to have your cake and eat it.

> Better in the sense of being more effective?  Hmm, not so sure.  Anyhow, the most effective way is to use all of those including reason. Why deny yourself a tool?

Better in the sense of identifying the objectively good ideas that when we uncover them and incorporate them into our society as policies and institutions, they result in moral progress (that is, they generate higher levels of wellbeing).

> You can use reason to lever on people's values and feelings.  That is, de facto, how it works.  No-one reasons from factual moral axioms, however much utilitarians wish they would. 

That's true - if you want to win an election, for god's sake don't stick to reason, you've got to get in there and stir up people's emotions. That's exactly why moral progress is so f*cking slow! That's why we get Cameron and Trump and Brexit. And conversely why at the moment, when the politicians with their emotion-based raison d'etre are taking a backseat to the technocrats, the government is actually making a pretty good job of dealing with a crisis. 

I don't think it's illogical or impossible to be a moral relativist. I just think that if that's your choice, then you've got to accept that every time you advocate for a policy all you're doing is telling me about a preference and attempting an appeal to emotion. That's probably not going to be very persuasive to anyone who believes in secular humanist values rooted in reason. If you don't want to persuade us, and you're content with demagoguery, then that's your choice. You can talk without any hope of persuading us if you want to.

But you can't hold a belief in moral progress that's consistent with this relativism - it's a contradiction.

Post edited at 12:37
 Coel Hellier 16 Apr 2020
In reply to Jon Stewart:

> What are these additional considerations?

One example would be: "I care more about my family than I do about unknown and unrelated strangers".

> I think that in absence of good reasons why one person's wellbeing is more important than another, it's just the case that each should be treated equally

No problem, if you think that, based on your values.   My point, however, is that you can't arrive at that from reason alone. 

> It's a demonstration that we should look for evidence for objective morality at the policy, not the individual level.

Why would there be objective morality at any level? What warrant do we have for supposing that there is?

Anyhow, societal policy is essentially the aggregate of all our individual influences.  You can't argue for an objective morality unless it also applies at the level of an individual. 

> And that's absolutely fine if you're happy to be consistent and embrace the conclusions moral relativism entails.

Can you define "moral relativism", in order for me to decide whether to "embrace" it?

> ... then you've got to accept that every time you advocate for a policy all you're doing is telling me about a preference and attempting an appeal to emotion.

Accepted!  

> That's probably not going to be very persuasive to anyone who believes in secular humanist values rooted in reason.

But then I don't: values are never (and can't be) rooted in reason.   (Humean is/ought divide.)

> If you don't want to persuade us, and you're content with demagoguery, then that's your choice. You can talk without any hope of persuading us if you want to.

Just above you accepted that it *was* possible to persuade people!  

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> As a hold-over from Cartesian dualism, I would guess?

No, because monists see the need for them too (Donald Davidson's anomalous monism is a token-token mind-brain identity theory). Berkeley and Hume are holdovers from Cartesian Dualism. Logical Positivists retain basic Humean assumptions, but just reject Hume's deductivism.

> So one state the physical brain could be in is with sufficient alcohol in the blood stream that its physical functioning is affected. 

> Would you agree that a typical human would "know" whether that is the case, simply through their awareness of their body? 

> In the same way, a human would "know" aspects of the state of their stomach (hunger, bellyache, etc) through that awareness of the body.

> This is not really any different from how we "know" anything else, through awareness of one's sensory data.     The reason I "know" there's a bird sitting on a tree singing, just outside my window, is from awareness of the state of my sense organs (eyes and ears).

Yes, all that is true. We can know which parts of our body are being affected by something through our sensations, but that is knowing something about parts of bodies outside our brains; the object of knowledge isn't our own brain. Similarly, our visual and auditory sensations furnish us with knowledge, because the knowledge is of something outside the brain; the sensations are not our objects of knowledge. Wittgenstein gives the analogy of the blind old man with the stick. He uses the stick to find out what objects are in his path. What is he concentrating on, the sensations in his hand, or the objects in his path? If he concentrated on the "inner" sensations he wouldn't know how to build up a picture of what's actually in front of him, but by concentrating "outwardly" he can use his stick as an exploratory tool.

> If you're going to discount this stuff as "knowledge" then we're not capable of knowing anything.

Quite the reverse. The sensations are not our objects of knowledge, they are our means of knowing about the world outside our brains. Descartes' idea that such sensations are the only certain objects of knowledge we can have leads to idealism. Hence Berkeley's dictum that nothing exists independently of what we perceive. For Berkeley, of course, God comes to the rescue, so that there's an "external world" after all. Hume is simply an atheist version of that. We can't really know if there's an external world or not, but we can't help behaving as if there is. 

OP Jon Stewart 16 Apr 2020
In reply to Coel Hellier:

> One example would be: "I care more about my family than I do about unknown and unrelated strangers".

That's reverting from the policy level to the individual level. We're looking for values that will motivate policy preferences that don't reduce down to wellbeing.

> No problem, if you think that, based on your values.   My point, however, is that you can't arrive at that from reason alone. 

And I've granted you that the axiom is a value judgement. 

> Why would there be objective morality at any level? What warrant do we have for supposing that there is?

Because we seem to see moral progress. And moral progress requires objective morality.

> Anyhow, societal policy is essentially the aggregate of all our individual influences.  You can't argue for an objective morality unless it also applies at the level of an individual. 

I agree, and covered this point:

Does the theory we've used at the policy level work well at the individual level, so that we'd all sign up to living our lives that way? No, not really. But does the theory work in principle at all levels, even if we can't work out what it tells us to do, and we might not like the answers? Yes, I think it does.

> Can you define "moral relativism", in order for me to decide whether to "embrace" it?

That no moral view is any better than any other; all are merely preferences of equal standing that cannot be compared by the use of reason. So, if you accept moral relativism (I think "subjectivism" is a more accurate term?), then social democracy and Wahhabism are just different preferences people have about how society should be run, and there is no rational basis for attempting to persuade anyone that Wahhabist ideas about how to organise society are shite.

> But then I don't, values are never (and can't be) rooted in reason.   (Humean is/ought divide.)

"Rooted in reason" is shorthand for "accepting the axiom that a world with less suffering is preferable to the same world with more, and using reason".

> Just above you accepted that it *was* possible to persuade people!  

I accepted that it was possible to persuade the electorate with demagoguery. I don't accept that it's possible to persuade a person who's committed to reason. As Steven Pinker says, you're already committed to reason by the fact that you're engaging in this discussion and trying to give me reasons for your view.

 Pyreneenemec 16 Apr 2020
In reply to Tom Walkington:

> I am enjoying this thread. It is helping me get through the 'lockdown'. I find some of it hard going, but I get the general gist.

Not only is it helping to get through these long days of confinement, but it has also encouraged further reading in textbooks that haven't seen the light of day for over 40 years!

I guess there aren't many of us following the thread, but no matter!

Have a like!

 Coel Hellier 16 Apr 2020
In reply to Jon Stewart:

> That's reverting from the policy level to the individual level.

But are you proposing a universal foundation for morality, or not? If you are, then it needs to apply always and everywhere and at all levels.  

If you're not doing that, and are instead discussing what policies we want societies to adopt, then we're effectively discussing politics.  Which is fine, but it's a different topic from the ultimate status of morality. 

> Because we seem to see moral progress. And moral progress requires objective morality.

I don't agree. "Progress" is a value judgement (it depends on an aim or desire being specified), and thus we can make "progress" according to someone's subjective judgement. 

In saying that society has made progress (on which I agree with Pinker), all we are doing is declaring that we like the changes that have occurred. 

> [defining moral relativism] ... That no moral view is any better than any other, ...

Well then I'm not a moral relativist.  "Better" is a value judgment.  People make value judgements. And no, people do not rank all moral views as equal.  For example, I like mine more than Stalin's.

> (I think "subjectivism" is a more accurate term?),...

Yep!

> then social democracy and Wahhabism are just different preferences people have about how society should be run ...

Which is a true statement ...

> ... and there is no rational basis for attempting to persuade anyone that Wahhabist ideas about how to organise society are shite.

There's no *rational* basis, but there is however a values-based basis.   I would seek to persuade based on my values.

Moral realists seem to be people who are unhappy that there isn't a Sky Daddy with a big booming voice, and a finger reaching down from heaven, pointing at someone and saying: "You, you are the one with the right opinion". 

 Coel Hellier 16 Apr 2020
In reply to Jon Stewart:

> "Rooted in reason" is shorthand for "accepting the axiom that a world with less suffering is preferable to the same world with more, and using reason".

Then you're using a wrong shorthand, since your axiom (as you've accepted) is not itself derivable from reason.

So your shorthand should be "rooted in my axiom, where my axiom is declared based on my values".

> I don't accept that it's possible to persuade a person who's committed to reason.

I'll treat that as a problem if I ever meet such a person!

> As Steven Pinker says, you're already committed to reason by the fact that you're engaging in this discussion and trying to give me reasons for your view.

I'm giving reasons for my factual statements about how the world is (and thus about meta-ethics).   But moral prescriptions cannot ultimately derive from reason, they must derive from values.  Of course reasons can both influence values, and be one of the tools we use to persuade. 

 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

> We can know which parts of our body are being affected by something through our sensations, but that is knowing something about parts of bodies outside our brains; the object of knowledge isn't our own brain.

Why is it any different when the object of knowledge *is* our brain?  By accepting "yes, all that is true" you've accepted that self-awareness can give us "knowledge" of whether our brain function is being affected by alcohol. 

(Another thing that I've learned about philosophers is that they are very good at inventing distinctions and thence problems when there simply is no problem!)

> Wittgenstein gives the analogy of the blind old man with the stick. He uses the stick to find out what objects are in his path. What is he concentrating on, the sensations in his hand, or the objects in his path? If he concentrated on the "inner" sensations he wouldn't know how to build up a picture of what's actually in front of him, but by concentrating "outwardly" he can use his stick as an exploratory tool.

Why can't he be aware of all of those, of both the sensations in his hand AND of what it tells him about objects on the path?

Everything we know comes from a chain: e.g.: object -> photons ->  microscope -> photons -> eye -> brain. 

It seems perverse to say that this chain gives "knowledge" of the object (and perhaps of the microscope and perhaps of the photons?) but not the eye and not the brain. 

 TobyA 16 Apr 2020
In reply to Jon Stewart:

I'm very glad that Wittgenstein does not get onto the A level spec, because what I remember of reading him and reading about him at university wasn't a happy experience! Trying to get your head around Kant enough to teach him to a 16-year-old is hard work already! 

 TobyA 16 Apr 2020
In reply to Coel Hellier:

It seems you need to write off most of 20th century moral philosophy if you're not interested in language though. 

 Coel Hellier 16 Apr 2020
In reply to TobyA:

> It seems you need to write off most of 20th century moral philosophy if you're not interested in language though. 

Language as a topic in its own right is interesting and worth studying.  But we should realise what it is: a mechanism that evolution has cobbled together to allow us to influence each other by communicating with each other.  Thus studying it is akin to studying bird song.   

And yes, I would indeed write off swathes of 20th century philosophy (and not just the moral stuff). Sorry! 

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> Why is it any different when the object of knowledge *is* our brain?  By accepting "yes, all that is true" you've accepted that self-awareness can give us "knowledge" of whether our brain function is being affected by alcohol.

I only agreed that all those cases give us knowledge of something. The object of knowledge in experience is never our own brain; how could it be? In the case of alcohol, the object of knowledge is how one is being affected by the alcohol - stumbling, slurred speech, losing balance, slower reactions (though often we're unaware of that and it takes someone else to spot it). Sensations like dizziness and nausea are not in themselves objects of knowledge to the person who feels that way; we're not aware of feeling dizzy, we just feel dizzy (the feeling is the state of consciousness).

> (Another thing that I've learned about philosophers is that they are very good at inventing distinctions and thence problems when there simply is no problem!).

I had to do a double take on this. Are you seriously suggesting that the distinction between minds and brains was invented by philosophers? In ordinary (non-philosophical and non-scientific) language, when we talk about our mental states we're not talking about our brains. We could talk about and express our mental states even if we didn't know we had brains. If we're going to talk about our brains, we need to talk in physical terms about the physical brain. Even monists understand that (I'm a monist). You don't need to be a Cartesian dualist to understand it. Mind-brain identity theories play an important role in Logical Positivism, which is usually neutral-monist. Donald Davidson's monism recognises that mental events (beliefs, desires, sensations, etc.) have two levels of description: the ordinary 'mental' level, where we use the usual terms "belief", "desire" etc., and the physical brain level, where scientists use neuro-physiological terminology.  Description on one level operates entirely differently from description on the other. He proposes that only a token-token identity theory could link the two. You seem permanently fixated on a confusion between the two levels of description.

> Why can't he be aware of all of those, of both the sensations in his hand AND of what it tells him about objects on the path?

Well in a sense he is, but the objects on the path it tells him about are the only objects of knowledge it gives him.

> Everything we know comes from a chain: e.g.: object -> photons ->  microscope -> photons -> eye -> brain. 

> It seems perverse to say that this chain gives "knowledge" of the object (and perhaps of the microscope and perhaps of the photons?) but not the eye and not the brain.

Does it? I think it seems perverse to say the opposite. How can looking through a microscope at a sample of bacteria give knowledge of the human brain? That seems perverse.

Post edited at 17:18
 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

> The object of knowledge in experience is never our own brain; how could it be?

Straightforwardly!  That's how it could be!

> In the case of alcohol, the object of knowledge is how one is being affected by the alcohol - stumbling, slurred speech, losing balance, slower reactions ...

Well yes, and so we know that we are being affected by alcohol, and we know that the brain is being affected because we can feel it. 

> Sensations like dizziness and nausea are not in themselves objects of knowledge to the person who feels that way; we're not aware of feeling dizzy, we just feel dizzy (the feeling is the state of consciousness).

We're not aware of feeling dizzy, we just feel dizzy??  Surely, when we feel dizzy, a modicum of self reflection makes us aware that we feel dizzy.

> I had to do a double take on this. Are you seriously suggesting that the distinction between minds and brains was invented by philosophers?

Well no, I was more suggesting that the distinction between "knowledge of" things outside the body and "knowledge of" things inside the body (eyes, stomach, brain) is an artificial distinction that just invents a problem when there is none.

> We could talk about and express our mental states even if we didn't know we had brains.

And we could talk about the state of our stomachs even if we didn't know we had stomachs.  But we would know that the feelings such as hunger pertained to that part of the body.  Similarly, we know that a headache pertains to the brain (or to the "inside of the head" if we have no idea what's in the head).

> If we're going to talk about our brains, we need to talk in physical terms about the physical brain.

Is saying that "I have bellyache" talking in physical terms about the physical stomach?  Or are you going to add the stomach to the class of body parts that we can't know about?

> Well in a sense he is, but the objects on the path it tells him about are the only objects of knowledge it gives him.

So you're now saying that poking things with a stick cannot give us knowledge of our hands -- even though we can see our hands (which seemed to be a sticking point regarding the brain)? 

So are our hands added to the class of body parts that we can't know about?   (Really?) 

How about if we amputated someone's hand, and gave him an artificial one, complete with sensors on the fingers connected to electrodes in the brain. Would he then -- magically -- be able to "know" about the artificial hand, in a way that he can't "know about" his biological hands?

> How can looking through a microscope at a sample of bacteria give knowledge of the human brain? That seems perverse.

Well, let's state the chain again: "object -> photons ->  microscope -> photons -> eye -> brain."

If I see a rather blurry image, I learn "the microscope is out of focus", so I learn about the microscope, not the object.

If everything looks dark it could be I've not switched the light on, or left the microscope cap on, so I learn about the (lack of) photons passing through, not about the object.

If I see only a dull and blurry image with my left eye, but a clear image with my right eye, I learn that there is something wrong with my left eye, such as a cataract.

If I see the image swirling around, coming in and out of focus, then I learn that I've had too much alcohol which is affecting my visual system.  Thus I learn about my brain, not about the object. 
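Summarising those cases as a lookup (my restatement of the examples above, not a new claim): each failure signature localises the new knowledge to a different link of the same chain.

# Chain: object -> photons -> microscope -> photons -> eye -> brain
diagnosis = {
    "image uniformly blurry":        "microscope (out of focus)",
    "field completely dark":         "photons (lamp off, or cap left on)",
    "blurry in the left eye only":   "eye (e.g. a cataract)",
    "image swirling in and out":     "brain (alcohol affecting the visual system)",
}

def what_i_learn_from(observation):
    # Same sensory chain in every case; the interpretation picks out the link.
    return diagnosis.get(observation, "the object itself (the default case)")

print(what_i_learn_from("image swirling in and out"))  # -> brain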

Post edited at 19:52
 Sir Chasm 16 Apr 2020
In reply to TobyA:

> It seems you need to write off most of 20th century moral philosophy if you're not interested in language though. 

What insights into the human condition do you think we would have missed? 

 Ratfeeder 16 Apr 2020
In reply to Coel Hellier:

> OK, but the true nature of morality is a scientific position, not a "philosophical one".  Morality is about biology, it's about our evolved human nature.  It's a branch of evolutionary psychology.  Thus, being a scientific position, it should be assessed on scientific grounds.

Even Richard Dawkins doesn't think that. What makes you think morality (as opposed to human behaviour in general) has a "true nature"? The domain of "morality" can only be determined by how people use the word "morality", otherwise you're talking about something else. From what I can gather so far, your view of the word "moral" is that it refers to an unspecified brain state. What does that tell us about the "true nature of morality"? Absolutely nothing. Morality is not "about" biology or our evolved human nature. It's about what reasons we might have for acting, how they are generated, and how they operate in our decision making as we try to deal with the complexities of our human lives. Evolutionary biology may well explain (and I'm sure it does - I'm a fan of Darwin) why we tend to act in the ways that we do, including altruistically, and the capacities we have to make decisions, but it doesn't give an account which suggests what factors are involved in deciding what is best or better for us to do. I think your answer basically is that we can't, because what we do is always determined by our evolved nature. In effect, there is no morality, just the subjective illusion of it. But this simply assumes that evolutionary processes haven't endowed us with autonomy.

> And if, as a result, we conclude that human language is misleading (as is the term "sunset") then ok.

Wittgenstein, to someone who defends Earth-centrists of olden days by saying "But it looks as if the Sun goes round the Earth.": "How would it have looked, if it had looked as if the Earth orbited the Sun?"

I wouldn't deny that human language can be misleading, or that we need science to correct our naïve beliefs. But morality doesn't concern the truth about the natural world, it concerns how we should behave towards each other as human beings. It concerns the practical considerations that affect each and every one of us in our everyday lives. So your whole enterprise is just a huge category mistake.

> I'm only pointing to Hume as a nod of respect to one of the first people to get the answer right.  But I'm not arriving at that conclusion from an assessment of Hume's arguments -- I arrive at the position essentially from a consideration of evolutionary biology, post Darwin.

However you arrived at that answer, it is precisely your formulation of it that is vulnerable to Wittgenstein's arguments, because it is Humean in the relevant respects. I can only go by what you actually say, in language. 

> Thus, it doesn't actually matter to me whether Hume's account is fully watertight in terms of a philosophical analysis using philosophical tools ("the logical analysis of language"), what matters is whether the account of morality is the truth about the way reality is -- and science and its tools are the best way of assessing that.

But whatever conclusion science and its tools reach about the way reality is, how does that help us to think about the ethical choices we are forced to make in our everyday lives? How does it help me to make a decision in a moral dilemma? So, I know I'm just the product of evolutionary processes. Well, I knew that anyway. How does that help me to be a better person?

> Of course I'm willing to allow that philosophy can help with that process, but we need to allow for the possibility that human language (a cobbled together mess) is highly misleading on the topic. 

But philosophy is very good at revealing how misleading language, and the senses, can be. From Descartes to Wittgenstein and beyond, that's exactly what philosophers have been doing. We make all kinds of category mistakes and other logical errors, as well as having misconceptions about the natural world. Various distinctions between appearance and reality go back at least to Plato.

 Gordon Stainforth 16 Apr 2020
In reply to Coel Hellier:

Interestingly, one of the profounder comments Wittgenstein made – that I think does give us an insight into the human condition, or at least as a starting point to a weird aspect of it – is a passing remark he made in a 'Lecture on Ethics' in 1930:

'Is speech essential for religion? I can well imagine a religion in which there are no doctrines, and hence nothing is said. Obviously the essence of religion can have nothing to do with the fact that speech occurs - or rather: if speech does occur, this itself is a component of religious behaviour and not a theory. Therefore nothing turns on whether the words are true, false, or nonsensical.'

Another translation:

'Is talking essential to religion? I can well imagine a religion in which there are no doctrinal propositions, in which there is thus no talking. Obviously the essence of religion cannot have anything to do with the fact that there is talking, or rather: when people talk, then this itself is part of a religious act and not a theory. Thus it also does not matter at all if the words used are true or false or nonsense.'

And another translation of just the central bit. 

'I can well imagine a religion in which there are no doctrines, so that nothing is spoken. Clearly, then, the essence of religion can have nothing to do with what is sayable.'

What I really like about this is that it's not a theory so much as an observation of a human phenomenon: the need for human beings to worship something, believe in something, even if it makes no sense. Also, the fact that he doesn't pass judgement, he just throws it out at us as an extraordinary idea/ thought. (I actually take it as quite a scathing comment, but he doesn't let on.) He's certainly not playing any 'language game', in fact he's saying that language is totally impotent to explain the phenomenon.

I'm certainly not going to discuss it here, because it's something that needs to be mulled over at some length - there's just so much implied.

I see that some religiously minded person has immediately taken offence at this comment by W, with a Dislike, which makes me feel he got something right here.

I've added another smiley now because it's bloody funny and I'm in a good mood.

Post edited at 21:16
 Coel Hellier 16 Apr 2020
In reply to Ratfeeder:

>  What makes you think morality (as opposed to human behaviour in general) has a "true nature"?

There must be facts about what morality is, whether it is objective or subjective, et cetera.

> From what I can gather so far, your view of the word "moral" is that it refers to an unspecified brain state.

Yes, except that I've specified the brain state as "liking/approving".  (Which is indeed a brain state; just because it's a high-level description of the state of the brain doesn't stop it being a description of the state of the brain.)

> What does that tell us about the "true nature of morality"? Absolutely nothing.

Lots!  It tells us that morality is subjective, not objective. It tells us that morality is about what humans want or don't want.  And it tells us that there is nothing external to humans that tells humans what "morally" they "ought" to do.

> Evolutionary biology [...]  doesn't give an account which suggests what factors are involved in deciding what is best or better for us to do.

Evolutionary biology sorts the meta-ethics.  It does not give a detailed account of our day-to-day life, agreed.  An account of "applied ethics" is rather more complicated than an account of meta-ethics.

> I think your answer basically is that we can't, because what we do is always determined by our evolved nature.

... plus a lot of environmental influences.

> In effect, there is no morality, just the subjective illusion of it.

There certainly is morality, we humans are moral agents routinely uttering moral proclamations.  That's not an "illusion", we really do care about things.

> But this simply assumes that evolutionary processes haven't endowed us with autonomy.

But of course we have a high degree of autonomy.  The whole point of genes programming a recipe for a brain is that the brain acts as an autonomous decision maker.  (Thus being able to react to circumstances and make real-time decisions that our genes could not.)

> But morality doesn't concern the truth about the natural world, ...

Humans are part of the natural world!

> ... it concerns how we should behave towards each other as human beings. It concerns the practical considerations that affect each and every one of us in our everyday lives. So your whole enterprise is just a huge category mistake.

Not at all. What you've just said (morality concerns the practical considerations that affect each and every one of us) is entirely in line with my view on this.  It is precisely as part of our decision-making toolkit that we have "moral values" built into us.

> ... how does that help us to think about the ethical choices we are forced to make in our everyday lives?

It tells us that there is nothing external to humans that mandates what we "should" do.  That excises a large number of ways you might think about morality, and that helps think about it more clearly.

> How does it help me to make a decision in a moral dilemma? So, I know I'm just the product of evolutionary processes. Well I knew that anyway. How does that help me to be a better person?

Even if the answer were: "This account of meta-ethics does not help one tiny bit in being a better person", that does not mean there is anything defective or lacking about that account of meta-ethics.  It is not attempting to be a prescriptive moral manual for daily life.  It is attempting to answer the questions of meta-ethics (is morality objective?, are there moral facts?, et cetera).

Post edited at 22:18
 Sir Chasm 16 Apr 2020
In reply to Gordon Stainforth:

> I see that some religiously minded person has immediately taken offence at this comment by W, with a Dislike, which makes me feel he got something right here. 

You're wrong, it was my dislike and I'm not religiously minded. It was merely a reaction to you posting something and saying "I'm not going to discuss this".

But I can see why you wouldn't want to discuss this: "I can well imagine a religion in which there are no doctrines, so that nothing is spoken. Clearly, then, the essence of religion can have nothing to do with what is sayable." Can you imagine that religion? 

 Coel Hellier 17 Apr 2020
In reply to Gordon Stainforth:

This is just so typical of Wittgenstein!  Provokes thought, yes, but whether it is even halfway sensible is another matter:

> 'Is talking essential to religion?

Well that would depend almost entirely on what one means by "religion", wouldn't it?

> I can well imagine a religion in which there are no doctrinal propositions, in which there is thus no talking.

W might be able to imagine it, but I must admit that I'm struggling to do so.  Something that *I* would consider to be a religion would indeed need to have doctrinal propositions, but then one can reasonably conceive of religion in a variety of ways. 

> Obviously the essence of religion cannot have anything to do with the fact that there is talking, ...

Obviously???? Stating a wild claim supported only by an "obviously"?

> ... or rather: when people talk, then this itself is part of a religious act and not a theory.

Hmmmm ....

> Thus it also does not matter at all if the words used are true or false or nonsense.'

Well that's rather a big leap from the previous remarks, even if we accept them.

 TobyA 17 Apr 2020
In reply to Sir Chasm:

It's not something I would claim any great understanding of, but despite the presumed sarcasm of your question, I think we would have missed significant advances in the study of logic and the logic of language.

 TobyA 17 Apr 2020
In reply to Coel Hellier:

I asked the political philosopher on another thread why he hasn't joined in on this thread, his response was quite amusing: https://www.ukhillwalking.com/forums/off_belay/lockdown_achievements-718272?v=...

I'm not sure, but I think he was accusing you of being "absurdly reductive" and not Humean at all! Or perhaps it was me at fault for describing you as Humean. I am trying to encourage Paul to come and explain to us why we are all wrong. I mean, what else is there to do once you've done your daily exercise?

 Ratfeeder 17 Apr 2020
In reply to Coel Hellier:

> Yes, except that I've specified the brain state as "liking/approving".  (Which is indeed a brain state; just because it's a high-level description of the state of the brain doesn't stop it being a description of the state of the brain.)

> There certainly is morality, we humans are moral agents routinely uttering moral proclamations.  That's not an "illusion", we really do care about things.

> Not at all. What you've just said (morality concerns the practical considerations that affect each and every one of us) is entirely in line with my view on this.  It is precisely as part of our decision-making toolkit that we have "moral values" built into us.

Ok, but here's your problem as I see it (and it is a philosophical (epistemic) one):

I can see there's a straightforward sense in which human values are subjective - it's we humans who do the valuing. But the meta-ethical question is, are these subjective valuings just likings/approvals, or are they cognitive recognitions of qualities in the world - of the things we value? 

We agree that the valuings are subjective states, which you say are brain states. But then so are beliefs and sensory experiences. According to your view, these subjective states (brain states) are themselves objects of knowledge. So all of these states are cognitive states. The question now is, how do you distinguish the cognitive states that give us knowledge of the external world from those that give knowledge only of brain states? If the object of knowledge for a visual experience is one's own brain state, where is the "external" object that the experience is supposedly of? How would we even know there is one? And how would we know there's a physical brain behind the qualia of the state? That is basically Hume's problem, which he inherits from Descartes.

On this account, you have no more grounds for saying that subjective values are brain states, than you have for saying that the sense-data of experience provide us with knowledge of an external world. Both are just unwarranted assumptions. So it's just a matter of prejudice to pick on values and say they don't correspond to anything in the world outside our brains.

Wittgenstein's private language argument addresses exactly this empiricist problem, and so far as I'm aware, it's the only answer to it. But this answer gives us no reason at all to suppose that moral and aesthetic values are not objective - that they're not cognitive recognitions of qualities in the world - any more than it gives us reason to suppose that sensory experience is not objective. On the contrary, if true, it decides the meta-ethical question in favour of realism (because it has refuted idealism as a whole).

I'm not at all saying this is the ultimate truth of the matter, but it at least explains why someone might take a different meta-ethical view from you and feel reasonably justified in so doing.

Post edited at 09:00
 Coel Hellier 17 Apr 2020
In reply to TobyA:

> I'm not sure, but I think he has accusing you of being "absurdly reductive" and not Humean at all! Or perhaps it was me at fault for describing you as Humean.

Though (as explained a bit in one of the comments up thread), I wouldn't say that I was primarily "Humean"; primarily I'm Darwinian (and arriving at an account of morals from a biological and evolutionary perspective). 

I do think that Hume got a lot right (but again, that evaluation comes from the biological/Darwinian perspective, not from a philosophical analysis using philosophical tools).

Post edited at 09:38
 Coel Hellier 17 Apr 2020
In reply to Ratfeeder:

> I can see there's a straightforward sense in which human values are subjective - it's we humans who do the valuing.

I like "straightforward senses" of terms!  

> The question now is, how do you distinguish the cognitive states that give us knowledge of the external world from those that give knowledge only of brain states? If the object of knowledge for a visual experience is one's own brain state, where is the "external" object that the experience is supposedly of? How would we even know there is one? And how would we know there's a physical brain behind the qualia of the state?

At this point I'll invoke a Quinean Web account of our world view. 

We all have a "web" of ideas, beliefs and knowledge about the world. (Literally so, it's our brain's neural network.) Sensory data goes into the web, we interpret the data in the light of our existing web of ideas, beliefs and knowledge, and then we update our web.  That update is new beliefs/knowledge. 

So, it's not as simple as a naive: sense data ==> knowledge.   Instead it is: sense data ==> web of ideas ==> web updates ==> "knowledge".

So, in arriving at the conclusion from what I experience (= "knowing") that my brain's function is being affected by alcohol, I am factoring in prior knowledge of alcohol, of brains, of the fact that I had drunk some wine, et cetera. 

If a wizard had magicked alcohol into my brain without me knowing, I might instead conclude that I was going down with a virus. 
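To make that concrete (a toy sketch of the update, mine rather than Quine's, with invented priors and probabilities): the same sense datum produces a different web update depending on what the web already contains.

def posterior(prior_wine, prior_virus=0.05,
              p_dizzy_given_wine=0.9, p_dizzy_given_virus=0.6):
    # Crude Bayesian update over just two candidate explanations of "dizzy".
    wine = prior_wine * p_dizzy_given_wine
    virus = prior_virus * p_dizzy_given_virus
    return wine / (wine + virus), virus / (wine + virus)

# I remember drinking the wine: "dizzy" updates the web towards alcohol.
print(posterior(prior_wine=0.5))    # ~(0.94, 0.06)

# The wizard case: no memory of drinking, so the same datum implicates a virus.
print(posterior(prior_wine=0.01))   # ~(0.23, 0.77)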

All knowledge is like this, whether of external objects or whatever.  And given that, there is no reason to distinguish knowledge about things external to the body from knowledge about the inside of our own body.

So in Wittgenstein's scenario (brain, arm, hand, stick, objects in path) and in my scenario (brain, eye, photons, microscope, photons, object) all of those things are part of our web of ideas, and our prior knowledge of all of those things is part of what goes into interpreting new sense data -- and thus, that interpretation of sense data can lead to "knowledge" about any of those things. 

That's how I can interpret the image of the object as "the microscope is out of focus" and not "this object is very fuzzy".     

Of course, that new knowledge depends on the previous "web", but again, all knowledge is like that, and there's no reason to distinguish between things external versus internal to our body.  Our body is just one of the things we know about, one part of our "world model", just as anything else in our world model is. 

> On this account, you have no more grounds for saying that subjective values are brain states, than you have for saying that the sense-data of experience provide us with knowledge of an external world. Both are just unwarranted assumptions.

No, they are both conclusions from our overall "world model" of how things work, interpreted in terms of a vast number of things that we know about, including our post-Darwinian understanding of ourselves and our evolutionary heritage. 

> But this answer gives us no reason at all to suppose that moral and aesthetic values are not objective - that they're not cognitive recognitions of qualities in the world ...

A neat reversal of the burden of proof!  One might think that the burden was on the moral realist to state explicitly what these objective qualities are, and what "moral facts" are, and how they relate to those qualities, and how humans know about them, and all such similar questions.

If a subjective account explains everything that we actually need to explain, and is fully in line with expectations from a Darwinian understanding of ourselves as social animals, then at that point what warrant do we have for supposing that there are any such external and objective "moral qualities"?

"Well you can't prove there aren't", is as bad as the last resort of the theist: "Well you can't prove that God doesn't exist!".   Ditto Russell's orbiting teapot and any number of other things.   Didn't Occam invent a neat little way of dealing with such claims? 

OP Jon Stewart 17 Apr 2020
In reply to Coel Hellier:

I'm pretty convinced there's nothing new I can add, and that I've answered all of your objections multiple times already. All I can do is summarise so we can see what's at the bottom of the disagreements.

> But are you proposing a universal foundation for morality, or not?

Yes. Not in a metaphysical sense, in a practical sense: because we implicitly all agree with its base value judgement, we can come to a consensus about right answers, just like science. It's confusing to see how it works at the individual level, but it's clear at the policy level. The policy level reveals that there seems to be moral progress and thus objective morality.

> I don't agree. "Progress" is a value judgement (it depends on an aim or desire being specified), and thus we can make "progress" according to someone's subjective judgement. 

Pinker and I believe that there is such a thing as moral progress, and it's objective because we can measure it by wellbeing, which is subject-independent. You disagree with us and won't go further than saying that you personally happen to like the changes that have happened in the world since the Enlightenment.

> There's no *rational* basis, but there is however a values-based basis.   I would seek to persuade based on my values.

I cannot be persuaded by unjustified value statements (bar the axiom, because it's universal); and taking this approach results in moral progress. Using non-rational persuasion, such as appeal to emotion, bribe, threat and dishonesty militates against moral progress.

> Moral realists seem to be people who are unhappy that there isn't a Sky Daddy

I'm not a moral realist, but I can see a way of making common-sense objective moral judgements that works for policy, and that's the reason I'm going to use it.

Post edited at 12:13
 Coel Hellier 17 Apr 2020
In reply to Jon Stewart:

> Yes. Not in a metaphysical sense, in a practical sense: because we implicitly all agree with its base value judgement, we can come to a consensus about right answers, just like science. It's confusing to see how it works at the individual level, but it's clear at the policy level.

Right, so you are indeed proposing a universal moral system, in a practical sense. Which means it should give us all practical guidance on the "What should I do?" question. 

So, should an individual care more for their family than they do about a distant and unrelated stranger, or should they care about every person on Earth equally?

So, for example, "should I take my kids on skiing holiday in Switzerland, at a cost of £5000 for the whole family, or is the moral thing to do to go camping in North Wales and send the £5000 to Save the Children?"

Practical guidance please, from your practical and universal system of morals (for bonus points, show how your answer derives from your axiom).

Or, let's say that my kid has a rare cancer.  By throwing the best treatment and the latest drugs at him, costing £500,000, the doctors say there is a 50% chance that he lives and recovers fully.   Alternatively, that amount of money would save and radically improve the lives of 500 children currently in a third-world slum.  Is the moral thing to do to try to save my child, or am I morally obligated to let my kid die and instead send the money to a slum charity in India?

Practical guidance please, from your practical and universal system of morals (for bonus points, show how your answer derives from your axiom).

NB: "it's confusing" is not an answer. Also NB: "its about societal-level policy, but doesn't apply on the individual level" relinquishes any claim to the system being "universal".

> ... we can measure it by wellbeing, which is subject-independent.

No it is not!   For example, my wellbeing would be enhanced by blasting Metallica's Master of Puppets out of the stereo; but that would reduce my mother's wellbeing.

 Ratfeeder 17 Apr 2020
In reply to Coel Hellier:

> I like "straightforward senses" of terms!  

Me too!

> At this point I'll invoke a Quinean Web account of our world view. 

> We all have a "web" of ideas, beliefs and knowledge about the world. (Literally so, it's our brain's neural network.) Sensory data goes into the web, we interpret the data in the light of our existing web of ideas, beliefs and knowledge, and then we update our web.  That update is new beliefs/knowledge. 

> So, it's not as simple as a naive: sense data ==> knowledge.   Instead it is: sense data ==> web of ideas ==> web updates ==> "knowledge".

> So, in arriving at the conclusion from what I experience (= "knowing") that my brain's function is being affected by alcohol, I am factoring in prior knowledge of alcohol, of brains, of the fact that I had drunk some wine, et cetera. 

> If a wizard had magicked alcohol into my brain without me knowing, I might instead conclude that I was going down with a virus. 

> All knowledge is like this, whether of external objects or whatever.  And from there, there is no reason to distinguish knowledge about things external to the body from knowledge about our internal body.

Nice invocation of Quine (a philosopher I'm not so familiar with, and should be). But as far as that goes I wouldn't have thought otherwise. This accords very well with Jonathan Dancy's particularist version of moral realism (he's a coherentist in the theory of truth). Dancy would say that it's the coherence of the "web" of all the relevant reasons we might have for acting in a given situation, that gives a moral "shape" to it, suggesting the morally appropriate action in that spatio-temporal location. I suppose this could be given an anti-realist interpretation! I think the only reason Dancy takes the realist position is that it simplifies things for the agent. We don't need to keep reminding ourselves that our moral reasons are just subjective feelings, so the subjective feelings become redundant. We can just take them to be reasons that give us a sort of moral knowledge.

So it now occurs to me that one realist answer to the question of "what is a moral fact" is "it's a shape in the web of moral reasons". (At this point I'm not really bothered if the anti-realist insists that this must be given an anti-realist interpretation.)

Even so, Quine's account, being coherentist rather than semantic, doesn't really solve the fundamental empiricist problem.

> That's how I can interpret the image of the object as "the microscope is out of focus" and not "this object is very fuzzy".

Yes, that works fine in the analogous scenario; but at the level of the primary object viewed (whatever is on the microscope slide), we don't have an independently intelligible object to serve as the primary that we can compare the sense-data with. As Kant would say, such an object would be "noumenal" (beyond the reach of sense and knowledge).

Again, the coherentist account doesn't solve the fundamental empiricist problem.

> Of course, that new knowledge depends on the previous "web", but again, all knowledge is like that, and there's no reason to distinguish between things external versus internal to our body.  Our body is just one of the things we know about, one part of our "world model", just as anything else in our world model is. 

> A neat reversal of the burden of proof!  One might think that the burden was on the moral realist to state explicitly what these objective qualities are, and what "moral facts" are, and how they relate to those qualities, and how humans know about them, and all such similar questions.

Well, ok, but... you said yourself up thread that (to paraphrase) the original, "naïve", intuitive "common sense" conception of morals is a realist one. So surely the burden of proof is on the anti-realist? That's certainly Jonathan Dancy's view.

> If a subjective account explains everything that we actually need to explain, and is fully in line with expectations from a Darwinian understanding of ourselves as social animals, then at that point what warrant do we have for supposing that there are any such external and objective "moral qualities"?

We agree completely about the science, and about the human behaviour. We agree that moral valuings or evaluations are subjective states, just as sensory experiences are. No arguments. But the science doesn't establish the meta-ethics, because to assert a meta-ethical position is to adopt a philosophical position, which in the case of anti-realism is Humean. You made a good stab at a philosophical way out of Humean metaphysics by invoking Quine. Excellent, if not quite successful. But the tools of science miss the target.

> "Well you can't prove there aren't", is as bad as the last resort of the theist: "Well you can't prove that God doesn't exist!".   Ditto Russell's orbiting teapot and any number of other things.   Didn't Occam invent a neat little way of dealing with such claims? 

A little unfair, I think. As mentioned above, Occam's razor could equally be applied to subjectivism, to simplify moral reasoning. Really, I reckon we're pretty close in our thinking. I'm really glad you mentioned Quine!

OP Jon Stewart 17 Apr 2020
In reply to Coel Hellier:

> Right, so you are indeed proposing a universal moral system, in a practical sense. Which means it should give us all practical guidance on the "What should I do?" question. 

It will do, but we won't like the answers because our evolved moral instincts don't give a hoot about consistency - or indeed right and wrong! They're just impulses that guide our behaviour towards reproducing more copies of our genes.

> So, should an individual care more for their family than they do about a distant and unrelated stranger, or should they care about every person on Earth equally?

They should try to decrease overall suffering. So they've got to take their own suffering into account: if they let their own children starve while feeding a stranger's, that suffering counts too. The best way to decrease overall suffering here is to look after your own kids.

> So, for example, "should I take my kids on a skiing holiday in Switzerland, at a cost of £5000 for the whole family, or is the moral thing to do to go camping in North Wales and send the £5000 to Save the Children?"

The moral thing to do is give the money to Save The Children. This most likely decreases suffering much more than your skiing holiday. 

> Or, let's say that my kid has a rare cancer.  By throwing the best treatment and the latest drugs at him, costing £500,000, the doctors say there is a 50% chance that he lives and recovers fully.   Alternatively, that amount of money would save and radically improve the lives of 500 children currently in a third-world slum.  Is the moral thing to do to try to save my child, or am I morally obligated to let my kid die and instead send the money to a slum charity in India?

Saving the 500 children in the slum is the better option. It's very hard to override your evolved instinct to spend the money on your own kid though; I wouldn't expect to see it happen.
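To put rough numbers on it, here's a toy expected-value sketch in Python (my own illustration; it assumes every life counts equally and ignores every other consideration):

# Toy expected-value comparison of the two uses of the £500,000.
p_recovery = 0.5                            # the doctors' 50% estimate for my own child
expected_lives_own_child = p_recovery * 1   # 0.5 expected lives saved
lives_saved_slum_charity = 500              # 500 children saved outright

print(expected_lives_own_child)   # 0.5
print(lives_saved_slum_charity)   # 500
# Counting everyone's suffering equally, the charity saves about a
# thousand times the expected number of lives for the same money.

That's the whole calculation, and it's why the aggregate answer comes out the way it does.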

Now you don't want to sign up to this, because it doesn't chime with your evolved instincts. But rather than claim "well the system's bollocks then, because I'm always right", I'm just going to admit that I'm a moral hypocrite, and most of what I do is immoral. I *should* be out doing whatever I can to save lives and reduce suffering every day, but I'm not because I can't be arsed. I am a moral hypocrite.

But what I can, realistically, do to contribute to moral progress is to pay attention to objective morality where it's feasible, where it's in tune with human nature: in policy.

> No it is not!   For example, my wellbeing would be enhanced by blasting Metallica's Master of Puppets out of the stereo; but that would reduce my mother's wellbeing.

Any given person is experiencing any given level of wellbeing at any given time. That is subject-independent. As for whether you *should* blast out the Metallica, ask yourself firstly, "is the pleasure I'm going to get greater than the pain my mother is going to feel?". But then, try going to the next level of considering consequences: "is knowing that my mother is experiencing pain going to lessen my pleasure?". And the third level: "if I show flagrant disregard for my mother's feelings now, how is she likely to behave in future?". And so on. Keep considering the consequences and work out which of the possible worlds you would most like to live in. My bet is that it's the world with the least suffering.
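If it helps, here's that level-by-level iteration as a toy Python sketch (my own illustration; the numbers are invented, only the structure of summing the wellbeing changes matters):

# Toy sketch of weighing consequences level by level.
# Positive numbers are gains in wellbeing, negative numbers are suffering.
consequences = {
    "level 1: my pleasure at the music": +5.0,
    "level 1: my mother's pain at the noise": -6.0,
    "level 2: my pleasure dims, knowing she's suffering": -2.0,
    "level 3: she trusts me less, so future interactions go worse": -3.0,
}
net_change = sum(consequences.values())
print(net_change)  # -6.0: the world with the stereo off has less suffering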

 Ratfeeder 17 Apr 2020
In reply to Coel Hellier:

> Straightforwardly!  That's how it could be!

The sense in which my pain is my subjective state of consciousness at time t is straightforward. But it's not "straightforward" that my pain is my awareness of the state of my brain at time t, because my state of awareness is directed to the source of the pain - I've hit my thumb with a hammer, my thumb hurts.

> Well yes, and so we know that we are being affected by alcohol, and we know that the brain is being affected because we can feel it.

The knowledge isn't the experienced effect though, is it? 

> We're not aware of feeling dizzy, we just feel dizzy??  Surely, when we feel dizzy, a modicum of self reflection makes us aware that we feel dizzy.

The modicum of self-reflection isn't the sensation though, is it?

> Well no, I was more suggesting that the distinction between "knowledge of" things outside the body and "knowledge of" things inside the body (eyes, stomach, brain) is an artificial distinction that just invents a problem when there is none.

Ok, but no significant distinction is being made here. The distinction is between knowledge of things outside the brain and "knowledge" of things inside it; you take the latter to be genuine first-personal objects of knowledge, Wittgenstein doesn't, and rightly so.

> And we could talk about the state of our stomachs even if we didn't know we had stomachs.  But we would know that the feelings such as hunger pertained to that part of the body.  Similarly, we know that a headache pertains to the brain (or to the "inside of the head" if we have no idea what's in the head).

We only know these things from learning about biology. Once again, the knowledge is not the sensation. 

> Is saying that "I have bellyache" talking in physical terms about the physical stomach?  Or are you going to add the stomach to the class of body parts that we can't know about?

The stomach is a part of the body that's outside the brain. A brain state isn't.

> So you're now saying that poking things with a stick cannot give us knowledge of our hands -- even though we can see our hands (which seemed to be a sticking point regarding the brain)?

But the blind old man can't see his hand. All he has is the sensation in it produced by poking with the stick. The point of the analogy (and it is only an analogy) is that his object of knowledge is not a brain state.

> So are our hands added to the class of body parts that we can't know about?   (Really?) 

Obviously not, because our hands are outside of our brains. It's only our own brain states that are not our objects of knowledge. 

> How about if we amputated someone's hand, and gave him an artificial one, complete with sensors on the fingers connected to electrodes in the brain. Would he then -- magically -- be able to "know" about the artificial hand, in a way that he can't "know about" his biological hands?

He does know about his biological hand, because it's outside of his brain.

I wonder if all scientists find the distinction between the first- and third-personal perspectives so difficult to grasp? Scientists always think from an "impersonal", objectified perspective.

> Well, let's state the chain again: "object -> photons ->  microscope -> photons -> eye -> brain."

> If I see a rather blurry image, I learn "the microscope is out of focus", so I learn about the microscope, not the object.

The microscope is the object because it's outside your brain.

> If everything looks dark it could be I've not switched the light on, or left the microscope cap on, so I learn about the (lack of) photons passing through, not about the object.

You'd only learn about the passage or absence of photons if you knew that light consisted of photons. The relevant thing you want to know is whether you've not switched the light on or not removed the microscope cap.

> If I see only a dull and blurry image with my left eye, but a clear image with my right eye, I learn that there is something wrong with my left eye, such as a cataract.

Your eye is outside your brain, so you can know there's something wrong with it.

> If I see the image swirling around, coming in and out of focus, then I learn that I've had too much alcohol which is affecting my visual system.  Thus I learn about my brain, not about the object.

You learn that you've had too much to drink, and how that affects you.

Post edited at 15:29
 Coel Hellier 17 Apr 2020
In reply to Jon Stewart:

> Now you don't want to sign up to this, because it doesn't chime with your evolved instincts. But rather than claim "well the system's bollocks then, because I'm always right", I'm just going to admit that I'm a moral hypocrite, and most of what I do is immoral.

Sorry about this, but at this point I'm going to suggest that your whole argument is horribly inconsistent.

First you insist that your axiom is obvious and universal, as obvious as retracting one's hand from a hot stove, so much so that no-one could possibly dissent.

Then, you point out a couple of implications of your axiom and admit that no-one wants to abide by it! 

Aren't those two a teeny bit contradictory?

Do you want to live by your axiom or not?  If "not" then what standing does your axiom have?  You've accepted that the only standing your axiom has is the advocacy of those who want to advocate it as a way of life.  And now you're admitting that even you don't actually want to live by it. 

> But what I can, realistically, do to contribute to moral progress is to pay attention to objective morality where it's feasible, where it's in tune with human nature: in policy.

So now you're taking your supposedly "objective morality" and then cherry picking bits of it, and ignoring the rest, depending on whether you want to adopt it or not! 

 Coel Hellier 18 Apr 2020
In reply to Ratfeeder:

> So it now occurs to me that one realist answer to the question of "what is a moral fact" is "it's a shape in the web of moral reasons".

Which is another complete non-answer -- unless accompanied by an account of such "shapes" such that a Martian scientist could follow the recipe and arrive at a conclusion about whether the shape was "moral" or "immoral".  

And, of course, that account would have to be independent of  human feelings and values on the matter, since that's the definition of moral realism, that morals exist independently of human feelings and values.

> Yes, that works fine in the analogous scenario; but at the level of the primary object viewed (whatever is on the microscope slide), we don't have an independently intelligible object to serve as the primary that we can compare the sense-data with.

In a Quinean-web account (which, by the way, wasn't originated by Quine, but goes back to Neurath's raft and other precursors) there is no "primary" data.  The process is not "foundational", deriving from a primary source.

Instead, any and all aspects of the "web" can be evaluated and updated by consideration of the web as a whole.  And one iterates to the best "world model" that way.  Given that, there's no reason for conclusions about our bodies to be any less reliable than conclusions about anything else.

By the way, the Quinean-web view is not a purely coherentist view.  What matters is both the coherence of the web, and its correspondence to empirical reality.
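To make the iteration concrete, here's a toy Python sketch (entirely my own illustration; the scoring is a crude stand-in, not a serious epistemology):

# Toy Quinean-web updating: a "web" is a set of beliefs, scored on internal
# coherence plus correspondence with sense data; any belief may be revised.

def score(web, data, constraints):
    coherence = sum(1 for rule in constraints if rule(web))
    correspondence = sum(1 for observation in data if observation in web)
    return coherence + correspondence

def revise(web, data, constraints, candidates):
    best = web
    for belief in candidates:
        trial = web ^ {belief}  # toggle one belief (symmetric difference)
        if score(trial, data, constraints) > score(best, data, constraints):
            best = trial
    return best

web = {"drank wine", "vision swirling"}
data = {"vision swirling"}
# Crude coherence rule: believe alcohol is affecting the brain exactly when
# the web also contains "drank wine" and "vision swirling".
constraints = [lambda w: ("alcohol affecting brain" in w) == ({"drank wine", "vision swirling"} <= w)]
candidates = {"alcohol affecting brain", "have a virus"}
print(sorted(revise(web, data, constraints, candidates)))
# ['alcohol affecting brain', 'drank wine', 'vision swirling']

Adding "alcohol affecting brain" scores higher than adding "have a virus", which is the wine example above in miniature: the update depends on the whole web, not on any one privileged datum.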

> Well, ok, but... you said yourself up thread that (to paraphrase) the original, "naïve", intuitive "common sense" conception of morals is a realist one. So surely the burden of proof is on the anti-realist?

The realist view is much less parsimonious. It posits a new and distinct category of ontology, namely "moral facts", which are very hard to make sense of (as you are neatly illustrating!).   Anti-realist views just scrap all that and solve lots of problems instantly.  Yes, it might be counter-intuitive, but intuition never was that reliable in the first place. 

> But the science doesn't establish the meta-ethics, because to assert a meta-ethical position is to adopt a philosophical position, which in the case of anti-realism is Humean.

I don't accept these demarcation lines between philosophy and science.  As I see it, knowledge about the world is a unified whole, because the world itself is a unified whole. Meta-ethics is to me a scientific topic, since it's about the real world, about the status of morals as they actually are in the real biological world.  As I see it, Darwin solved meta-ethics in Descent of Man.

(I'm quite happy with it being both a philosophical topic and a scientific one.)

> As mentioned above, Occam's razor could equally be applied to subjectivism, to simplify moral reasoning. Really, I reckon we're pretty close in our thinking. I'm really glad you mentioned Quine!

How would applying Occam to subjectivism simplify things?  We know that there are subjective moral opinions.  Lots of people have opinions about whether something is moral or immoral.

The subjectivist account says that that's all there is to it. 

The objectivist account says, yes, there are indeed people and their moral opinions, but, in addition there is a whole other domain, a domain of objectively correct or incorrect morals, that is in addition to and independent of human subjective feelings. 

How is that favoured by Occam?

 Coel Hellier 18 Apr 2020
In reply to Ratfeeder:

> Obviously not, because our hands are outside of our brains. It's only our own brain states that are not our objects of knowledge. 

So the state of my stomach is something I can know about, through awareness of my body?  Why, then, the distinction between the brain and the stomach?  Why the distinction between headache and bellyache?

Post edited at 10:30
 Ratfeeder 18 Apr 2020
In reply to Coel Hellier:

Thanks for continuing the discussion. If nothing else, it's interesting (to me at least) and is helping me to clarify my own views.

> Which is another complete non-answer -- unless accompanied by an account of such "shapes" such that a Martian scientist could follow the recipe and arrive at a conclusion about whether the shape was "moral" or "immoral".

Well, I do think we can give such accounts that humans could follow, at least, by describing all the relevant properties of the "shape" in a given set of circumstances. I'm not sure we need to insist on a level of objectivity such that a Martian scientist could follow the recipe. Dancy talks about relative levels of objectivity. The "facts" would be facts for humans, if not for Martians. If a Martian scientist lacked autonomy, for example, we might find it impossible to explain to him what it is. The crucial point for the particularist is that any such account can only be given recursively (as in Tarski's definition of "truth"). Properties of actions (e.g. helpfulness, pleasurableness), which count as reasons (justifications) for the action in one case, may count as reasons against the action in another. That's because the web of reasons in the one case is different from that in the other. The agent needs to take all the reasons that present themselves in the circumstances into account in order to discern the "shape" suggested in each case, which gives the overall justification. So, if the Martian scientist had sufficiently human-like characteristics, he might start to get a grip on what sorts of actions can be described by the human word "moral", if we take a sufficient number of properties of actions and show how they operate in a sufficient number of different cases. It would be a recursive process relying on induction (like Hume's account of causation).
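If it helps to see the structure, here's a toy Python sketch (entirely my own illustration, not Dancy's formalism) - the point is just that the valence of a property is a function of the whole context, not of the property alone:

# Toy particularism: the same property can count for an action in one
# context and against it in another; the moral "shape" is read off all
# the reasons taken together.

def valence(prop, context):
    # Hypothetical context-sensitivity: pleasure counts against the act
    # when it is pleasure taken in someone else's suffering.
    if prop == "pleasurable" and "pleasure taken in cruelty" in context:
        return -1
    return +1 if prop in {"pleasurable", "helpful"} else -1

def moral_shape(properties, context):
    return sum(valence(p, context) for p in properties)

print(moral_shape({"pleasurable", "helpful"}, set()))               # 2: favoured
print(moral_shape({"pleasurable"}, {"pleasure taken in cruelty"}))  # -1: disfavoured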

> And, of course, that account would have to be independent of  human feelings and values on the matter, since that's the definition of moral realism, that morals exist independently of human feelings and values.

The overall moral shape would be independent of human feelings, but the shape itself would take human feelings into account (along with other reasons) which is why the Martian would need to have sufficiently human characteristics to get a handle on it. As for values, it's the sharp Humean distinction between facts and values that's being called into question here.

> In a Quinean-web account (which, by the way, wasn't originated by Quine, but goes back to Neurath's raft and other precursors) there is no "primary" data.  The process is not "foundational", deriving from a primary source.

Yes, I totally go along with this and so does Jonathan Dancy - I remember how he so often argued against foundationalism. It's all about holism. I only used the word "primary" in response to your example for the sake of simplicity. The criticism still holds that Quine's system fails to address scepticism directly. Wittgenstein's Investigations does address it directly. I think we need both.

> Instead, any and all aspects of the "web" can be evaluated and updated by consideration of the web as a whole.  And one iterates to the best "world model" that way.  Given that, there's no reason for conclusions about our bodies to be any less reliable than conclusions about anything else.

Agreed. See above.

> By the way, the Quinean-web view is not a purely coherentist view.  What matters is both the coherence of the web, and its correspondence to empirical reality.

I don't think that's quite right. Don't forget the "indeterminacy of translation" thesis. Quine's system is essentially empiricist (hence Humean), not Kantian.

> The realist view is much less parsimonious. It posits a new and distinct category of ontology, namely "moral facts", which are very hard to make sense of (as you are neatly illustrating!).   Anti-realist views just scrap all that and solve lots of problems instantly.  Yes, it might be counter-intuitive, but intuition never was that reliable in the first place.

The ontology of "moral facts" is no different from the ontology of "facts" as such. What is the ontology of a "fact"? A fact isn't a material object. Can you tell me what a "fact" is? You say yourself "there must be facts about what morality is", but to what ontological category do these "facts" belong? Don't forget Quine's answer to the question "What exists?" - "Everything"!

> I don't accept these demarcation lines between philosophy and science.  As I see it, knowledge about the world is a unified whole, because the world itself is a unified whole. Meta-ethics is to me a scientific topic, since it's about the real world, about the status of morals as they actually are in the real biological world.    As I see it, Darwin solved meta-ethics in Descent of Man.

Actually I am prepared to revise that demarcation I made. I think the two disciplines work together and need each other, but as you'll admit, they are separate disciplines. Any scientific discovery needs to be put into theoretical terms, and it's the logical coherence of the theoretical terms that philosophers are concerned with. The scientist is more concerned with the method of scientific discovery itself - observation and experiment. Darwin's discoveries may seem to suggest an answer to the meta-ethical question, but it would be too reductivist to reduce the terms of the meta-ethical question purely to the terms of the discovery.

> (I'm quite happy with it being both a philosophical topic and a scientific one.)

Me too, I think we can agree on that.

> How would applying Occam to subjectivism simplify things?  We know that there are subjective moral opinions.  Lots of people have opinions about whether something is moral or immoral.

But the leading question is, are they subjective? Of course people have opinions about whether a human action is moral, but the question is, are those opinions objectively justified? In trying to decide this question, Occam's razor could work in favour of dispensing with the subjective feelings, and just talking about the objective reasons.

> The subjectivist account says that that's all there is to it.

But does that give us an adequate account of what morality is? In a further post I'll explain why I don't think it does.

> The objectivist account says, yes, there are indeed people and their moral opinions, but, in addition there is a whole other domain, a domain of objectively correct or incorrect morals, that is in addition to and independent of human subjective feelings. 

> How is that favoured by Occam?

Any moral opinion has to be justified, and it can only be justified by an objective assessment of the reasons for holding it. We can dispense with the subjective feelings in order to do that. We don't need to say that the result of the objective assessment is just a "feeling". Occam's razor.

 Ratfeeder 18 Apr 2020
In reply to Coel Hellier:

> So the state of my stomach is something I can know about, through awareness of my body?  Why, then, the distinction between the brain and the stomach?  Why the distinction between headache and bellyache?

There isn't a distinction between "headache" and "bellyache"! If I have a headache I am aware that my head aches, not that my brain aches. I don't feel pain in my brain. If I felt pain in my brain it wouldn't tell me which part of my body hurts. So the one organ in my body I can never feel pain in is my brain! In scientific terms, a person's brain states are his or her means of knowing what's going on outside of the brain, either in other parts of the body or outside the body. So the object of knowledge for the person in question is always, necessarily outside the brain.

 Ratfeeder 18 Apr 2020
In reply to Coel Hellier:

Here's why I think Hume's account of ethics (Humean subjectivism) is inadequate. The leading question here is, "Are moral requirements hypothetical imperatives?". Hume's account says they are, because human action (according to Hume) is always motivated by a desire (liking or want for something), where these are strictly non-cognitive ("approval" = "liking" or "wanting"). Hence, moral requirements are appropriately formulated like this: "If Joe wants to play golf with Dan then Joe should accept Dan's invitation."

To illustrate the inadequacy of this assertion, here's a little scenario:

Joe, who lives in London and owns a car, dislikes his uncle Fred, who lives in Manchester and doesn't own a car. Fred's wife, Mabel, who Joe doesn't like either, has asked Joe if he would kindly give Fred a lift, in 4 days' time, from Milton Keynes back home to Manchester, as Fred has no other means of getting there (public transport strike). She will of course pay Joe's expenses. Joe is quietly furious at being placed under such an obligation, but reluctantly agrees, since Fred would otherwise be unable to get home. 3 days later Joe receives a phone call from his best mate Dan, who invites him to play a round of golf with him the next day. Golf is Joe's favourite game, it's a no-brainer. But then he suddenly remembers the promise he made to his aunt Mabel, and his heart sinks. He really does not want to have to keep that promise. What should Joe do? In the end, and with heavy heart, he turns down Dan's invitation. He puts Fred's need for a lift home before the satisfaction of his very strong desire to play golf with his best mate. Why would he do that?

According to Hume, he's acted immorally, since he turned down the invitation to do something he wanted to do. He had a good reason to do something, and didn't do it. He disobeyed the hypothetical imperative which, given his desires, bound him to it. Instead, he did something he didn't want to do.

How can the Humean account make sense of this? It can't. And yet such decisions are a common feature of our daily lives. I suspect that most people would agree with Kant that, far from being immoral, Joe's decision had moral value, precisely because he set his own desires, likes and preferences aside. Instead, he acted purely on the understanding of Fred's situation. That's what motivated him to do what he didn't want to do, and that's what gave his action moral value. It was a purely cognitive reason for acting.

Any adequate ethical theory needs to be able to take account of this sort of moral motivation, but at the same time be entirely compatible with the science of human behaviour. There's no reason why evolution by natural selection could not have endowed us with the capacity for purely cognitive motivation. Such capacities are part of us, of our evolutionary heritage. So the science remains intact. But if that's the case, it solves the meta-ethical question in favour of moral realism.

OP Jon Stewart 18 Apr 2020
In reply to Coel Hellier:

> Sorry about this, but at this point I'm going to suggest that your whole argument is horribly inconsistent.

If it's taken this long for you to realise that our evolved instincts don't chime with a rational ethical theory based on wellbeing, then you've followed a lot less than I thought.

> First you insist that your axiom is obvious and universal, as obvious as retracting one's hand from a hot stove, so much so that no-one could possibly dissent.

Yes, the axiom is obvious and universal. A world with less suffering is better than the same world with more. You've agreed with it yourself.

> Then, you point out a couple of implications of your axiom and admit that no-one wants to abide by it! 

Yes, as I said at the outset, if you try to follow this theory in your individual actions, you'll find that your moral instincts actually go against it. You are a moral hypocrite, because that's what evolution made you. 

> Aren't those two a teeny bit contradictory?

Yes. There is a contradiction between our rational analysis of the right thing to do, based on wellbeing, and our evolved instincts which serve to help reproduce our genes. This is old ground.

> Do you want to live by your axiom or not?  If "not" then what standing does your axiom have?

In an abstract sense, I "should" live by the axiom - I should spend every minute of every day out saving lives and reducing suffering. It tells me the optimally moral thing to do, but I can't live up to that. What I "want" to do is carry on as I am, living like an average human following their moral instincts, which sadly does not coincide with living the morally optimal life. What I also want to do is make good moral choices as they come in the course of my life, so when it comes to deciding how I think society should be organised, I'll use the utilitarian approach to form my views. I won't just pull meaningless "rights" and "principles" out of my arse, and claim that these "values" - be they racism, social Darwinism, a belief that the market knows best, or whatever - are just my preference and I need no rational justification for them. When it comes to policy, I will hold views that I can justify as being the best way to organise society.

> So now you're taking your supposedly "objective morality" and then cherry picking bits of it, and ignoring the rest, depending on whether you want to adopt it or not! 

I'm taking objective morality, which we can see in moral progress as we look at which societies deliver the best outcomes for their people, and I will follow it when thinking about policy. I have been crystal clear from the outset that this objective morality gives answers that don't chime with our evolved moral instincts, which is why I will fail to live up to objective morality in my individual actions.

If you didn't understand that contradiction between our rational brains and our evolved instincts from the start, why did you only mention it now?

I used to think that there simply was no such thing as morality. If we use as our evidence the moral choices that people make, we're not going to see a rich variety of values, subjective to the individual. We're going to see a total mess. People do one thing when they think they're being watched, something different if they're trying to impress a peer, something different when they're tired, etc. The "values" people say they have are every bit as much a crock of shit as the claims of anyone who says they live their life by the objective morality I promote. We're all hypocrites.

Now, rather than just saying "well morality is just a bunch of contradicting instincts - it's meaningless" I've changed my mind on the basis that some societies, and some policies really do seem to be better than others. You're happy to chuck out this idea of moral progress (well, you've tried a bit of cake-ism, but let's ignore that), so you can hang on to your subjective "values". Well I don't believe in your "values" - I think that human behaviour is just a mess of contradictions and your "values" are something you're kidding yourself about. The human being chucks out behaviour according to the immediate needs it is faced with, and then makes post-hoc rationalisations about that behaviour. That's all your "values" are. I would advise that you don't examine them too closely, because they'll evaporate into a cloud of hypocrisy.

Post edited at 18:12
 Coel Hellier 18 Apr 2020
In reply to Ratfeeder:

> Thanks for continuing the discussion. If nothing else, it's interesting (to me at least) and is helping me to clarify my own views.

It's always interesting learning about how philosophers think!  Someday I may write a sociological analysis of the differences between how scientists think versus philosophers.

> So, if the Martian scientist had sufficiently human-like characteristics, he might start to get a grip on what sorts of actions can be described by the human word "moral", if we take a sufficient number of properties of actions and show how they operate in a sufficient number of different cases.

But that assumes that, in any given case, there is a "right" answer. But, as in helping a terrorist, different humans will come to opposite answers. They are not disagreeing about the facts of the situation, so no amount of analysis of the facts of the situation will -- on its own -- be sufficient. You also need the magic property that makes the action "moral" or not.

> I think the two disciplines work together and need each other, but as you'll admit, they are separate disciplines.

I don't so much see science and philosophy as separate disciplines, they are more different styles of enquiry.   After all, they can both be addressing the same subject matter. 

> Any scientific discovery needs to be put into theoretical terms, and it's the logical coherence of the theoretical terms that philosophers are concerned with.

But the scientist is also concerned with that!  

> The scientist is more concerned with the method of scientific discovery itself - observation and experiment.

No, not really.  Science is about both. An observational/experimental physicist is primarily concerned with observation and experiment, but a theoretical physicist is concerned with the theoretical framework that explains observations and experiment, including the logical coherence of it. There is nothing that physicists leave to philosophers, they concern themselves with all aspects of it. 

This, I find, is a common misunderstanding of science by philosophers.  They think of science as being experiment and observation (that is, the stuff that they don't do), but fail to realise that, to a scientist, science is just as much about the explanations and the understanding (that is, stuff that philosophers also do).

> Darwin's discoveries may seem to suggest an answer to the meta-ethical question, but it would be too reductivist to reduce the terms of the meta-ethical question purely to the terms of the discovery.

Correct, but you can reduce the meta-ethical question to the theoretical framework of evolutionary biology that Darwin invented to explain biology. 

>> Lots of people have opinions about whether something is moral or immoral.

> But the leading question is, are they subjective?

Well their opinion is certainly subjective (it's a property of their mind). So, the question is more, is there also an objective fact to which their opinion accords. 

> Of course people have opinions about whether a human action is moral, but the question is, are those opinions objectively justified? In trying to decide this question, Occam's razor could work in favour of dispensing with the subjective feelings, and just talking about the objective reasons.

But you can't dispense with the subjective feelings! They are an observed property of the world!  People do indeed have feelings. 

> Any moral opinion has to be justified, ...

Why? Of course, an intuitive moral realist (and most humans are intuitive moral realists) will automatically presume that moral opinions need to be justified, but that's a moral-realist presumption! 

To the subjectivist, moral opinions do not need to be justified, and further, at root, they cannot be justified. They derive, at root, from values, not from reason and not from justifications.

Recall that the Darwinian perspective sees moral values as a subset of aesthetic values (re-purposed by evolution to enable us to live as social animals).  At root it makes no more sense to say that moral opinions need to be justified than it does to say that choosing coffee rather than tea needs to be justified.

Of course intuitive moral realists freak out in aghast amazement at this suggestion -- it really is counter-intuitive -- but that's just their intuitive moral realism. 

> ... and it can only be justified by an objective assessment of the reasons for holding it. We can dispense with the subjective feelings in order to do that.

But your world view, your account of humans, does need to include subjective feelings! Really, it does!  Humans do have feelings!    

Even if you don't use them in your moral-realist account of morality, you still need them in your account of humans. So your overall account is much less parsimonious, it includes a whole domain (objective moral properties and moral facts)  that mine simply doesn't.

 Coel Hellier 18 Apr 2020
In reply to Ratfeeder:

> There isn't a distinction between "headache" and "bellyache"! If I have a headache I am aware that my head aches, not that my brain aches.

I'd say that I am aware that it's my brain that hurts (and not my scalp or ears or nose or whatever). In the same way I'm aware that it is my stomach/guts that are hurting, and not my kidneys.

> I don't feel pain in my brain.

Well I do!

 Coel Hellier 18 Apr 2020
In reply to Jon Stewart:

> If it's taken this long for you to realise that our evolved instincts don't chime with a rational ethical theory based on wellbeing, then you've followed a lot less than I thought.

There is no such thing as a "rational ethical theory based on wellbeing". Ethical theories are not based on reason, they are based on values.  You have admitted this.  You have admitted that you need an axiom, which comes from your advocacy. 

The fact that the ethical theory based on your axiom does not correspond to people's actual values is what I've been trying to explain to you! 

You are saying: here is an axiom so universal and obvious that everyone would agree with it.  Then, after pointing out consequences of the axiom, you admit that it is wildly different from what people would agree to.   That is contradictory!

> Yes, the axiom is obvious and universal. A world with less suffering is better than the same world with more. You've agreed with it yourself.

You're acting like a stage magician, substituting something in when people are not looking!

So you point to an axiom worded in a vague and woolly way: "a world with less suffering is better", and ask people "do you agree?".

Then, when they say yes, you pounce and declare: "A ha, you've admitted that it is morally wrong to give treats to your own children if there is any disadvantaged child anywhere!"  

To which the reply is, "no, I didn't sign up to that".

And it does not follow logically!   From "a world with less suffering is better" it does not follow that "we all ought to treat everyone's suffering as counting equally".  

That simply does not follow.  For one thing, agreeing that "a world with less suffering is better" does not entail "and this is the only consideration for moral purposes".    People can think many things at once, of which "a world with less suffering is better" is only one. 

So, if what you really mean by your axiom is:

"The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate", then why don't you word your axiom that way?

The only reason for not wording your axiom that way is so that you can do your stage-magician's trick and sneak it in instead. 

 Coel Hellier 18 Apr 2020
In reply to Jon Stewart:

> I've changed my mind on the basis that some societies, and some policies really do seem to be better than others. You're happy to chuck out this idea of moral progress (well, you've tried a bit of cake-ism, but let's ignore that),

Of course one can indeed have moral progress as judged by someone!   If I like one thing more than another, I can regard movement towards the former as "progress".

Someone else, of course, may have a different opinion.  An ISIS supporter might, for example, disagree with Pinker on what constitutes "progress".

> Well I don't believe in your "values" -

You're not expected to! They're my values, and you, as another human being, may have different ones. Though they likely overlap in areas.

> The human being chucks out behaviour according to the immediate needs it is faced with, and then makes post-hoc rationalisations about that behaviour. That's all your "values" are.

Yep!   Were you expecting better?

 Coel Hellier 18 Apr 2020
In reply to Ratfeeder:

> Here's why I think Hume's account of ethics (Humean subjectivism) is inadequate. The leading question here is, "Are moral requirements hypothetical imperatives?". Hume's account says they are, because human action (according to Hume) is always motivated by a desire (liking or want for something), ...

Yes, but realise that human desires often pull in different directions. We are hugely tensioned creatures. So we want a slice of chocolate cake but we also don't want to get fat. We want another beer but we don't want to get done for drunk driving, and in the long term we don't want to drink too much beer because it's unhealthy. 

We want to think of ourselves as a reliable person who keeps his promises, and we certainly want other people to think about us that way, but we also might dislike having to keep a promise in changed circumstances. So, different desires pull against each other.

> According to Hume, he's acted immorally, since he turned down the invitation to do something he wanted to do

Noooooo!!!   The problem with discussing such things with intuitive moral realists is that they just can't help trying to map a subjectivist account onto an objectivist account.  

In a Humean account, there is no such thing as "he acted immorally".   There is no objective standard of what is moral, so there is no fact of the matter. 

Thus, the only statements a Humean would make would be of the form: "Tom considers that Joe acted immorally" (= he disapproves) or "Sue considers that Joe acted morally" (= she approves).

> He had a good reason to do something, and didn't do it. He disobeyed the hypothetical imperative which, given his desires, bound him to it.

That desire was simply over-ridden by other desires (and he was acting in accordance with the hypothetical imperative arising from *those* desires).  These included the desire to think of himself as the sort of person who keeps his promise; and his desire for other people to regard him that way.  A person who acquires a reputation for letting others down is going to suffer from it in the long term.  Just as a person who drinks too much will suffer from it in the long term. Social reputation is really a major driver for morals.

> Instead, he did something he didn't want to do.

He both wanted and didn't want to do it.  Human nature is complex and tensioned like that. 

(This is one reason why philosophical accounts in terms of coherent logical structures are bad models of human morality, because they are bad models of complex human psychology.)

> How can the Humean account make sense of this?

As above!

> And yet such decisions are a common feature of our daily lives.

Yep!

> Any adequate ethical theory needs to be able to take account of this sort of moral motivation, but at the same time be entirely compatible with the science of human behaviour.

Yep! 

> There's no reason why evolution by natural selection could not have endowed us with the capacity for purely cognitive motivation.

There is no such thing as a purely cognitive motivation! I've given an account above of Joe's values-based motivation for keeping to his word. 

Imagine yourself in Joe's situation: would you feel that you ought to keep your word (in addition to feeling that you don't want to)?  Would that aspect of it (keeping your word) really have no emotional content for you?

Traditionally one would ascribe the word-keeping to "conscience" not to "reason", but "conscience" is an emotion-laden feeling. 

Post edited at 20:17
OP Jon Stewart 18 Apr 2020
In reply to Coel Hellier:

> The fact that the ethical theory based on your axiom does not correspond to people's actual values is what I've been trying to explain to you! 

There has been no need to explain it to me, because I have understood from the start that an ethical theory that derives from a single universal axiom using reason is never going to chime with our evolved instincts. It'll tell us what the best possible action is for humanity, and in our day to day lives, that's just not what motivates us. When we think about policy however, we do want the best possible outcome for society, so it is useful here.

> You are saying: here is an axiom so universal and obvious that everyone would agree with it.  Then, after pointing out consequences of the axiom, you admit that it is wildly different from what people would agree to.   That is contradictory!

That's the contradiction between reason and evolved instincts. It's a contradiction that makes human beings human. It's a contradiction that is baked into our nature by evolution. We have both reason and instinct.

Have you ever experienced a contradiction between what you can reason is the right thing to do, but some evolved urge has pulled you in the other direction? It's the same contradiction.

> So you point to an axiom worded in a vague and woolly way: "a world with less suffering is better", and ask people "do you agree?".

> Then, when they say yes, you pounce and declare: "A ha, you've admitted that it is morally wrong to give treats to your own children if there is any disadvantaged child anywhere!"  

So what? All it takes is a minimal amount of humility to understand that you're not acting in the most morally perfect way. None of us are - we weren't built to.

Your only argument seems to be "your theory doesn't chime with what we feel we should do at the individual level". And all I can say is "yes, I know."

> And it does not follow logically!   From "a world with less suffering is better" it does not follow that "we all ought to treat everyone's suffering as counting equally".  

I can't see the reasons one person's suffering matters more than another, at the aggregate level. Maybe this is a blind spot?

> So, if what you really mean by your axiom is:

> "The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate", then why don't you word your axiom that way?

Fine. It's just not adding anything.

> Someone else, of course, may have a different opinion.  An ISIS supporter might, for example, disagree with Pinker on what constitutes "progress".

They might - so you've given up the cake-ism and don't believe in moral progress, which is now consistent with your subjectivism. Now ISIS don't care about wellbeing, and don't value reason. So it's no surprise that when, as secular humanists who believe in objective moral progress, we assess their ideas, we see that they result in huge amounts of suffering and are therefore objectively shite. That means they're not just shite to Pinker and me, they're shite to all those who care about wellbeing. It's not objective in the metaphysical sense, but to care about wellbeing is so widespread as to make it common-sense objective, like science. That is more than a preference, because it derives from a common value that all rational people can agree on.

> You're not expected to! They're my values

I meant it literally. It isn't just that I don't hold your values. I don't believe in them. They don't exist, they're not a thing. Your behaviour is what it is, and it's inconsistent - it depends on whether you think you're being watched, whether you're trying to impress a peer, whether you're tired, etc. There's only one "value" that really guides your behaviour: "I do what I just did in that situation". I don't think that this "value" is worthy of the lengthy defence you've devoted to it.

 Coel Hellier 18 Apr 2020
In reply to Jon Stewart:

> I have understood from the start that an ethical theory that derives from a single universal axiom using reason is never going to chime with our evolved instincts.

Then why do you keep insisting that the axiom is so obvious and universal that everyone will sign up to it?

> When we think about policy however, we do want the best possible outcome for society, so it is useful here.

If you're suggesting that the axiom only really "works" at the level of society, then ok, but then it's not a universal morality, merely a policy prescription for societal level.

> That's the contradiction between reason and evolved instincts.

No, it's more a difference between what people want at the societal level and what they want at the personal level.  (We want and expect parents to favour their own children; we don't want societal-level policy makers to do that.)

And again, it's not the case that your scheme is one of "reason", it derives from your axiom. 

> Have you ever experienced a contradiction between what you can reason is the right thing to do, but some evolved urge has pulled you in the other direction?

Nope, I haven't.  But I have often experienced one evolved urge pulling in one direction, and another evolved urge pulling in the opposite direction (with "reason" being of some influence in which prevails).

> Your only argument seems to be "your theory doesn't chime with what we feel we should do at the individual level". And all I can say is "yes, I know."

Then why do you keep insisting that the axiom is so obvious and universal that everyone will sign up to it?

> I can't see the reasons one person's suffering matters more than another, at the aggregate level.

Well how about -- on the issue of whether to spend £250,000 saving one's child from cancer, as opposed to saving the lives of 100 kids in an Indian slum -- the reason why one kid counts more in the aggregate is "because he's my child and I care about him much more than those 100 kids I've never met put together".

> Now ISIS don't care about wellbeing, and don't value reason.

They do care about wellbeing, it's just that their evaluation of their wellbeing is very different from ours.

> It isn't just that I don't hold your values. I don't believe in them. They don't exist, they're not a thing.

Want a discussion on free speech and the right to utter unpopular views on social media?

> Your behaviour is what it is, and it's inconsistent - it depends on whether you think you're being watched, whether you're trying to impress a peer, whether you're tired, etc.

Well yes, that's true about every human.  (And I think I am just about human.)  

Post edited at 22:07
OP Jon Stewart 18 Apr 2020
In reply to Coel Hellier:

I thought we were just going round in circles, but there is something we understand differently which I can try to clarify:

> Then why do you keep insisting that the axiom is so obvious and universal that everyone will sign up to it?...it's more a difference between what people want at the societal level and what they want at the personal level.

What I'm saying is that when you ask someone if they agree with the axiom that, in shorthand, "suffering is bad" - which can be fleshed out into its implications as "The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate" - then you're asking someone to think abstractly. They can't fall back on instinct, they have to engage reason and think about an abstract value judgement, "do I care about suffering?". This is why those who don't have some other value system (e.g. divine command) will agree. However, when you follow through the implications, you get to a point where they're no longer just engaging reason, their evolved instincts come into play.

So yes, the axiom is obvious and universal. The reason we agree with it but not with its personal implications is that it's abstract - we can't rely on our instincts when we're considering it.

Now if it was just an axiom that made sense in the abstract but not in real life, I wouldn't bother banging on and on about it. The reason it's worth thinking about and agreeing with is that it is useful in policy, where parochial concerns don't apply.

> the reason why one kid counts more in the aggregate is "because he's my child and I care about him much more than those 100 kids I've never met put together".

Old ground. You're saying what you care more about, not what is actually more important in the aggregate. You're just demonstrating for the umpteenth time that our evolved instincts tell us what is right for the propagation of genes, not for the whole of humanity. I honestly do already know this. I also know that you reject the idea that anything is actually more important, but what you're trying to do here is show that I'm inconsistent by my own rules. You fail to do so.

> Want a discussion on free speech and the right to utter unpopular views on social media?

We can have a meta-discussion about it.

I'll give you a consequentialist account of when it's great to have free speech on twitter, and when it's better to have some sanction or other. You will pull some "principle" out of your arse, that "it's a violation of the right to free speech", give no justification, and think that's just brilliant. Or, you could speak my language and give a consequentialist account of why sanctions for abuse of minorities on twitter actually lead to lower wellbeing once you give a sophisticated account of the consequences. If you did that (and in fairness, sometimes you do), then we'd be having a rational discussion, and for all intents and purposes you'd have signed up to my moral theory and admitted that your so-called "values" were actually just a bad heuristic for wellbeing.

In reply to Coel Hellier and Jon Stewart:

To throw my own value-laden opinion in, I think most people recognise a good deed as one that has been beneficial to others rather than to oneself, and this is held in even higher esteem if it has involved some sort of sacrifice. If it is just something that a person wanted to do anyway then it wouldn't be seen as a particularly good moral choice; it is when you put aside your evolved desires in order to do something good for someone else that you are seen to have acted morally.

> Well how about -- on the issue of whether to spend £250,000 saving one's child from cancer, as opposed to saving the lives of 100 kids in an Indian slum -- the reason why one kid counts more in the aggregate is "because he's my child and I care about him much more than those 100 kids I've never met put together".

In this example you value your own child as worth more than 100 Indian slum children (the £250,000 that saves one child would save 100 of them, at £2,500 per life). As harsh as it sounds in those terms, of course you would; we would probably all make that same choice, and in such a crisis we would all move mountains to save those we love. That doesn't make it a good moral choice though.

Looking after one's own family is not a moral choice; it is the minimum responsibility of every human being. But a responsibility to look after one's own family does not mean that the moral choices we make outside of a crisis should be to hoard wealth within one's family so that such huge inequalities arise. In my opinion, the moral lifestyle would be one where we constantly try to suppress our ego (recognising our family as an extension of our self) and strive to create a society that minimises overall suffering, counting each human equally in the aggregate, starting with those close enough for us to have the most effect (our neighbour > our fellow citizen > wider humanity).

 Ratfeeder 19 Apr 2020
In reply to Coel Hellier:

> But that assumes that, in any given case, there is a "right" answer. But, as in helping a terrorist, different humans will come to opposite answers. They are not disagreeing about the facts of the situation, so no amount of analysis of the facts of the situation will -- on its own -- be sufficient. You also need the magic property that makes the action "moral" or not.

Different humans are taking different facts and reasons into consideration, being sensitive to some and insensitive to others. The "right" answer comes to a person when they are sensitive to all the facts and reasons in the context. The terrorist is completely insensitive to the lives of innocent people and thinks his own particular grievances and convictions are more important than anything else in the world. That is a total failure of moral sensitivity, and many terrorists eventually come to realise that when they are forced to confront the families of victims. So yes, there very often (not necessarily always) is a "right" answer, and it's not a "magic property".

> I don't so much see science and philosophy as separate disciplines, they are more different styles of enquiry.   After all, they can both be addressing the same subject matter. 

Possibly so, but philosophy usually concerns itself with apparent contradictions and other logical implications that emerge in the consideration of the subject, as with Kant's antinomies. Most fields of philosophy concentrate on key areas of conceptual difficulty, like "knowledge", "truth" and "mind", while metaphysics concerns what on earth we mean by "reality". Most of philosophy, including Logical Positivism, concerns itself with meaning.

> But the scientist is also concerned with that!  

Of course.

> No, not really.  Science is about both. An observational/experimental physicist is primarily concerned with observation and experiment, but a theoretical physicist is concerned with the theoretical framework that explains observations and experiment, including the logical coherence of it. There is nothing that physicists leave to philosophers, they concern themselves with all aspects of it. 

Quite right. I didn't mean to suggest otherwise, though my wording did seem to. I meant to say that, while scientists try to build logically coherent theories from the observed data that have explanatory and predictive power, philosophers concern themselves with the logical implications of those theories and possible incoherences in them (some theories are better than others). Philosophers can help in the process of improving theories or replacing them.

> This, I find, is a common misunderstanding of science by philosophers.  They think of science as being experiment and observation (that is, the stuff that they don't do), but fail to realise that, to a scientist, science is just as much about the explanations and the understanding (that is, stuff that philosophers also do).

Ok I'm with you on that.

> Correct, but you can reduce the meta-ethical question to the theoretical framework of evolutionary biology that Darwin invented to explain biology. 

Actually I agree.

> >> Lots of people have opinions about whether something is moral and immoral.

What matters is which opinions are the most informed and sensitively formed.

> Well their opinion is certainly subjective (it's a property of their mind). So, the question is more, is there also an objective fact to which their opinion accords. 

An opinion about anything, including the correctness of a scientific theory, is a property of someone's mind. That doesn't make it subjective. Or if it does, then all so-called knowledge is subjective. Knowledge is a property of people's minds.

> But you can't dispense with the subjective feelings! They are an observed property of the world!  People do indeed have feelings. 

> Why? Of course, an intuitive moral realist (and most humans are intuitive moral realists) will automatically presume that moral opinions need to be justified, but that's a moral-realist presumption! 

But if that's true, then Darwinian evolution ought to be able to explain it. We do feel the need to justify our actions when they affect other people. Why? Because it's necessary for the possibility of living as social animals.

> To the subjectivist, moral opinions do not need to be justified, and further, at root, they cannot be justified. They derive, at root, from values, not from reason and not from justifications.

> Recall that the Darwinian perspective sees moral values as a subset of aesthetic values (re-purposed by evolution to enable us to live as social animals).  At root it makes no more sense to say that moral opinions need to be justified than it does to say that choosing coffee rather than tea needs to be justified.

This is the nub of it. Choosing coffee rather than tea is nothing like an aesthetic or moral judgement. Moral choices concern our treatment of other people, which requires a level of objectification. We see things from the other person's point of view and decide how to balance their needs and feelings with our own. We need the ability to do this to live as social animals, and that capacity is the result of evolution. You're simply putting a Humean interpretation on that result, which reduces it to the level of choosing coffee over tea. If it really were like that, we couldn't function as social animals at all. Your account reduces human behaviour to simple amoralism. But when you actually look at human behaviour, it's more than that. People don't just do what they feel like doing. Sometimes they are motivated purely out of consideration for others.

> Of course intuitive moral realists freak out in aghast amazement at this suggestion -- it really is counter-intuitive -- but that's just their intuitive moral realism.

Which no doubt we've inherited as a result of evolution.

> But your world view, your account of humans, does need to include subjective feelings! Really, it does!  Humans do have feelings! 

It does include human feelings. It just says that in moral reasoning the feelings of others have to be taken into account as well as our own. The feelings of others become reasons for me to act in certain ways that do not depend entirely on my own personal desires. What you can dispense with is the Humean notion that the overall judgement is just a feeling. It's a cognitive assessment which motivates action independently of any associated feeling. Hence, the associated feeling, if there is one, is irrelevant.  

> Even if you don't use them in your moral-realist account of morality, you still need them in your account of humans. So your overall account is much less parsimonious, it includes a whole domain (objective moral properties and moral facts)  that mine simply doesn't.

But your account simply dispenses with the idea of morality altogether. You are left with amorality! Mine accords both with the science concerning human beings and with practical reasoning in everyday life, doing justice to our cognitive moral intuitions.

Post edited at 01:25
 Coel Hellier 19 Apr 2020
In reply to Jon Stewart:

> What I'm saying is that when you ask someone if they agree with the axiom that, in shorthand "suffering is bad" which can be fleshed-out to its implications "The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate", then you're asking someone to think abstractly.

Yes, though firstly, the "abstract thinking" is still an appeal to values. 

The first statement "suffering is bad" is so tautological that of course people agree with it. And in that form it is not normative and does not amount to much ("The Star Wars prequels are bad").

The second statement, though, is then concrete and normative.  In the vaguer forms your axiom has an implicit "all other things being equal" (which you've explicitly added in some formulations). However, that rider is violated if someone's family is involved, since people do not regard their family as no more important to them than unrelated strangers.   When their family is involved, other things are not equal. 

So, in the concrete and normative form, people do not agree with your axiom (at least at a personal level; they might agree with it as policy for the societal level). And again, the only standing axioms such as "everyone's suffering counts equally" have is people's advocacy of them.

> They can't fall back on instinct, they have to engage reason and think about an abstract value judgement, "do I care about suffering?".

To which the truthful answer is: "Yes, but I care vastly more about my suffering or that of my family, and vastly less about that of unrelated strangers".

> This is why those who don't have some other value system (e.g. divine command) will agree.

I don't agree that they would agree! 

> So yes, the axiom is obvious and universal.

I just don't agree with that claim!  In the expanded form in which I stated it, it is a very radical claim that many would dissent from.

If you re-state it slightly as: "Morally, one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers", then most people would flat-out dissent.

You only obtain assent by your stage-magician's trick of not being up-front with what your axiom actually is, and asking them instead to agree merely with "suffering is bad".

Note that in Sam Harris's formulation ("worst possible misery for everyone"), everyone can indeed consent, since that would include worst possible misery for themselves and their family.

> Old ground. You're saying what you care more about, not what is actually more important in the aggregate.

There is no such thing as objective "importance".  Importance is again a value judgement, made by a person (who else?).

> You're just demonstrating for the umpteenth time that our evolved instincts tell us what is right for the propagation of genes, not for the whole of humanity.

The point is that you, as an intuitive moral realist, think intuitively that there is something that is objectively "right" for the "whole of humanity", and that that is what is morally primary.

Why is someone morally obligated to consider what is best for the whole of humanity, and to put that above consideration primarily for their own family?

"Because Jon wants them to"? "Because Jon has declared an axiom requiring it"?     The answer you cannot give is: "because objectively it is the right thing to do, as shown from first-principles reason", because there is no such concept. 

> I honestly do already know this. I also know that you reject the idea that anything is actually more important, but what you're trying to do here is show that I'm inconsistent by my own rules. You fail to do so.

Your account only works given moral-realist presumptions about objective "importance" that are sneaked in. 

 Coel Hellier 19 Apr 2020
In reply to Jon Stewart:

>  Or, you could speak my language and give a consequentialist account of why sanctions for abuse of minorities on twitter actually lead to lower wellbeing once you give a sophisticated account of the consequences. If you did that (and in fairness, sometimes you do), ...

Absolutely, I can readily defend free-speech "principles" in terms of the consequences for society if we don't uphold those principles. Indeed, that is exactly why we promote such principles.  Mill's On Liberty, for example, is a defence of free-speech principles in consequentialist terms (Mill, of course, also being a Utilitarian).

As I've suggested, "principles" are the sort of heuristic that we need for society to operate. So we need the "heuristic" of a law against murder. We can't have every court case being: "OK, let's consider, in full glory, the consequences for society of this particular killing".

> ... then we'd be having a rational discussion, and for all intents and purposes you'd have signed up to my moral theory and admitted that your so-called "values" were actually just a bad heuristic for wellbeing.

As I've said, stating that consequences and wellbeing are what morals are about is akin to emphasizing that the Pope is Catholic.

Where I'm disagreeing with you is your attempt to turn that into an objective and normative moral system that is actually alien to human nature.  

 Coel Hellier 19 Apr 2020
In reply to cumbria mammoth:

> Looking after one's own family is not a moral choice; it is the minimum responsibility of every human being.

Wouldn't shirking one's responsibilities be immoral? If so, looking after one's family would then be the moral thing to do, surely?

> In my opinion, the moral lifestyle would be one where we constantly try to suppress our ego (recognising our family as an extension of our self) and strive to create a society where we should minimise overall suffering, counting each human equally in the aggregate, starting with those close enough for us to have the most effect (our neighbour > our fellow citizen > wider humanity).

I have no disagreement with you, so long as you're advocating that as "in my opinion", proposing that as what you think we should do.

I'm only disagreeing with Jon where he claims one can, using reason, get to an objective requirement to adopt that policy. 

OP Jon Stewart 19 Apr 2020
In reply to Coel Hellier:

> So, in the concrete and normative form, people do not agree with your axiom (at least at a personal level; they might agree with it as policy for the societal level). And again, the only standing axioms such as "everyone's suffering counts equally" have is people's advocacy of them.

Your understanding of the axiom is all over the shop. On the one hand you think that

> stating that consequences and wellbeing are what morals are about is akin to emphasizing that the Pope is Catholic.

Well yes. That's what the axiom says. Rather like the religious persuasion of the Pope, it's a fair assumption that we can all agree on it. 

On the other hand you say,

> If you re-state it slightly as: "Morally, one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers", then most people would flat-out dissent.

Well you can't restate it like that, because that's not what it says. I find it galling to be accused of "stage magic" when you've just performed that outrageous manoeuvre in plain sight.

What the axiom says is that the world with the least suffering is the best world. It does not follow from there that one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers. That really is just a mistake in your reasoning.

This, again, is old ground:

"They should try to decrease overall suffering. So, they've got to take their own suffering into account if they decide to let their own children starve while feeding a stranger's. To decrease overall suffering, the best way is to look after your kids."

Deciding exactly how far out this "circle of increased concern" reaches is extremely difficult, and utilitarianism isn't going to make it easy, which is why I don't think it can be used practically in day-to-day life. But as a way to think about it, the idea of the moral landscape is helpful. We are where we are, with a lot of extreme poverty and suffering in the world. Would the best move be to abandon your family to help someone in Congo? Probably not, because you're going to cause your family a lot of suffering. Would it be a good idea to donate some money or time to a charity helping victims of the conflict in DRC? Almost certainly. Should you and your family live in poverty yourselves so you can make a bigger difference to people in DRC? Well, perhaps that might be the most morally virtuous thing to do, but inflicting poverty on loved ones is a significant drawback to be considered; and who's going to do the most morally virtuous possible thing at all times anyway? That's just not what human beings are like; evolution did not make us that way.
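
To make the aggregation point concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the options, the parties, the suffering scores); it isn't anyone's actual method, just a way of showing that counting everyone's suffering equally in the aggregate is not the same as caring equally about everyone.

```python
# A toy model, with made-up numbers: score each option by aggregate suffering,
# where everyone's suffering carries equal weight in the sum.

OPTIONS = {
    # option -> the suffering each party ends up with under that option (0-100)
    "abandon your family to volunteer in DRC": {"your family": 80, "you": 60, "DRC victims": 40},
    "donate some money and time":              {"your family": 10, "you": 10, "DRC victims": 55},
    "do nothing for strangers":                {"your family": 10, "you": 5,  "DRC victims": 70},
}

def aggregate(outcome):
    """Equal weighting: no one's suffering counts for more than anyone else's."""
    return sum(outcome.values())

# Rank the options from least to most aggregate suffering.
for option, outcome in sorted(OPTIONS.items(), key=lambda kv: aggregate(kv[1])):
    print(f"{aggregate(outcome):4d}  {option}")

# Prints 75 for "donate some money and time", 85 for "do nothing", 180 for
# "abandon your family": even under strictly equal counting, abandoning your
# family scores worst, because the suffering that choice causes enters the
# aggregate like anyone else's.
```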

I think there is a useful distinction between moral duties and moral virtues. I don't think that utilitarianism prescribes what constitutes a duty and what constitutes a virtue. All it does is tell you which of the available choices is the most virtuous. Your argument seems to rest on the idea that every conclusion of utilitarianism is a moral duty. That's a much more subtle "magic trick" than the blatant switch manoeuvre above. This one is, I think, just a mistake.

Post edited at 12:42
 Coel Hellier 19 Apr 2020
In reply to Ratfeeder:

> An opinion about anything, including the correctness of a scientific theory, is a property of someone's mind. That doesn't make it subjective

Given that the very definition of "subjective" is: "based on or influenced by personal feelings, tastes, or opinions", it's hard to argue that an "opinion" is anything other than subjective.

Of course there will be facts of the matter about the correctness of scientific theories, and it may be that someone's opinion does indeed align with those facts.

We can then define "knowledge" as a belief that does indeed align with the facts, and by that definition "knowledge" is objective. (Don't philosophers often define "knowledge" as "justified true belief"? Though that could lead to a diversion into Gettier problems.)

> But if that's true, then Darwinian evolution ought to be able to explain it. We do feel the need to justify our actions when they affect other people. Why? Because it's necessary for the possibility of living as social animals.

It's not "necessary", but it does enhance the rhetoric, and so adds to the persuasiveness. 

My young nephew declares: "Uncle, you need to push me on the swing!", or "I need you to push me on the swing".   What he means, of course, is that he would like me to.  The adoption of the "objective" language of "need" is just enhancing the rhetoric of the plea.

This generally works on humans, but is not fully necessary.   In actuality, his pleasure from being pushed on the swing and my pleasure from interacting with him is what motivates me to push him. 

> This is the nub of it. Choosing coffee rather than tea is nothing like an aesthetic or moral judgement.

You sure about that, that choosing coffee rather than tea is not an aesthetic judgement? What is it then?

> Moral choices concern our treatment of other people, which requires a level of objectification.

Yes, "moral" judgements are aesthetic judgements regarding how humans treat each other. 

> Your account reduces human behaviour to simple amoralism.

Not at all.  "Amoralism" is having no values. An "amoral" person simply doesn't care.  Reducing morality to human values is the opposite of amoralism.

> But when you actually look at human behaviour, it's more than that. People don't just do what they feel like doing. Sometimes they are motivated purely out of consideration for others.

That's because one type of value that people have, one sort of feeling and desire people have, is consideration for others.

> But your account simply dispenses with the idea of morality altogether. You are left with amorality!

Absolutely not.  I am left with human values and feelings.

 Ratfeeder 19 Apr 2020
In reply to Coel Hellier:

> Well their opinion is certainly subjective (it's a property of their mind). So, the question is more, is there also an objective fact to which their opinion accords.

> But you can't dispense with the subjective feelings! They are an observed property of the world!  People do indeed have feelings.

Straw man. I'm not trying to dispense with feelings; in fact I make it clear that taking the feelings of others into account is a necessary part of moral judgement. Incidentally, if feelings are just brain states, how would you go about observing them in the world?

> Why? Of course, an intuitive moral realist (and most humans are intuitive moral realists) will automatically presume that moral opinions need to be justified, but that's a moral-realist presumption!

Yes, indeed it is. The burden of proof lies on the subjectivist to show that they don't. The subjectivist needs to justify his opinion, which is what you seem to be trying to do.

> To the subjectivist, moral opinions do not need to be justified, and further, at root, they cannot be justified. They derive, at root, from values, not from reason and not from justifications.

Just because moral opinions derive from values does not mean the values themselves cannot be justified. It's only when you take a reductionist view of values that this is so.

> Recall that the Darwinian perspective sees moral values as a subset of aesthetic values (re-purposed by evolution to enable us to live as social animals).  At root it makes no more sense to say that moral opinions need to be justified than it does to say that choosing coffee rather than tea needs to be justified.

The whole conflict boils down to this. Yes, moral values are like aesthetic values (except that they apply to human conduct rather than artefacts and scenery). But such values are not like flavour preferences. Having a flavour preference is an entirely non-cognitive state because it doesn't require any level of objectification (you don't need to stand back and include other considerations to have the preference). But a moral judgement requires at least a first level of objectification, whereby we stand back from our own personal preferences and desires to consider someone else's, and form a judgement about how to treat the other person. That step of objectification is minimally what is required for the possibility of social life. The more considerations, and the more people, that come into the judgement, the more steps of objectification we make, until an overall judgement can be made. A bad judgement is simply one that doesn't enter enough considerations of individuals, and/or enough people, into it (it fails to make the appropriate number of levels of objectification). The cultivation and development of this process is what constitutes moral progress.

This human capacity for objectification will of course be the result of evolutionary processes, because socialisation is advantageous to human survival. So your account, which insistently reduces moral judgement to the level of flavour preference, fails to capture the very process that is necessary for socialisation, and so completely misses the point. The science does determine the meta-ethical question, but gives the opposite answer to yours, which is derived not from the science, but from your Humean interpretation of that science.

The same process of objectification is required for the sciences (it's the same capacity, as produced by evolutionary processes), only it's taken much, much further, to the point where as much subject-relativity is stripped away as is humanly possible. But it's a mistake to demand of moral and aesthetic judgement that it must have the same level of objectification as biology or physics in order to have any objectivity at all. By doing that you collapse moral judgement down to the level of purely personal flavour preference, which from an explanatory point of view is self-defeating. The evolutionary biologist has to remember that when his subject matter is human beings, he is dealing with human beings and not just material objects.

> But your world view, your account of humans, does need to include subjective feelings! Really, it does!  Humans do have feelings!

Straw man.

> Even if you don't use them in your moral-realist account of morality, you still need them in your account of humans. So your overall account is much less parsimonious, it includes a whole domain (objective moral properties and moral facts)  that mine simply doesn't.

OK, granted, but your account is too reductive to be of any use.

OP Jon Stewart 19 Apr 2020
In reply to Coel Hellier:

> Your account only works given moral-realist presumptions about objective "importance" that are sneaked in. 

They're not sneaked in. Let's see how they got there.

Let's start with a blank sheet, and ask "what do I think morality is?". Well, to start with, I'm an atheist, and a materialist (although not of the radical identity theory of consciousness type; there's something a bit like property dualism in there, which is for a different thread). So I can't go looking into the mind of God, or a platonic realm, for an answer.

Maybe morality is just entirely subjective. Perhaps a desire to see the greatest pain for the greatest number of kittens and babies is one person's morality, and another's might be to spread as much love and happiness in the world as possible, and there is no way to choose between them, no consensus that we can come to about who's right and who's wrong here. Doesn't seem right, maybe we can do better. OK, well we've evolved certain instincts that drive us towards care for our families and not stomping on kittens, so maybe our evolved instincts are really all there is. This has the problem that our evolved instincts don't look anything like what I'd call morality. They're totally inconsistent (as the trolley problem and shallow pond thought experiments demonstrate), which shouldn't be surprising, as they came about through evolution with the specific purpose of helping us reproduce our genes.

One option is to give up. There is no such thing as morality, there are just evolved instincts. You can try some flaky appeal to "values" but when you scrutinise anyone's "values" you find that they're actually just talking about their evolved instincts and they're totally inconsistent - so not really worth mentioning. Saying that you obey "values" is just posturing, trying to elevate your evolved instincts to something more, something which they are not. Can we do better?

Well yes we can, because as sure as we know the Pope is Catholic, we intuitively understand that morality is about wellbeing. Many would argue, as you did, that even ISIS' morality is about wellbeing, it's just a very different conception of it that prizes wellbeing in an afterlife above all else. So, if this idea can unite secular humanists and ISIS, then it's certainly a good contender for being a consensus - that is, something that can give us common-sense (or scientific) objectivity. We're out of the blocks. We've got a central value we can agree on, from which we can derive morality.

Can we be a bit more precise than "morality is about wellbeing"? Maybe we can. So more wellbeing is good, and less wellbeing is bad. Does it matter who's experiencing the wellbeing? Is there any moral good in transferring wellbeing from one person to another? We'd need good reasons to justify that, like say, sacrificing someone's wellbeing to protect others' (as in criminal justice). Would I say it was moral to transfer wellbeing from some random person to me, or to my family?

So what's your answer? Sure you care more about you and your family than strangers, we've been over that. But in your subjective "values"-based morality, do you think it's right to transfer some random person's wellbeing to you, just because you're you and you want it? That's what it means to say that no one person's wellbeing is more important than another.

If you want to have a debate about whether this is reason, or part of the central, universal value judgement, you'll find I'm probably agnostic. I haven't thought about it, and it makes no difference. But the point I'm trying to get across is that caring more about you and your family is fine and universal; but believing that actually we're all of equal importance is just as universal. No stage magic required. Just a little bit of thinking about morality in the abstract rather than relying solely on evolved instincts and how they relate to specific personal situations. Rocket science it is not - you of all people should have no problem.

Edited "substance" for "property". Hopefully the idea of me as a substance dualist seems a bit weird!

Post edited at 12:45
 Coel Hellier 19 Apr 2020
In reply to Jon Stewart:

> Your understanding of the axiom is all over the shop.

It would help with that if you settled on a consistent formulation of the axiom that isn't tautological, and is normative (contains a "should" or "ought" instruction), and clearly entails everything you intend it to entail!

> Well you can't restate it like that, because that's not what it says.

OK, well let's see. When I stated the axiom as:

"The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate",

You agreed that this was "fine" and stated that I had not "added anything" to your axiom. 

But, if the "sole and overriding consideration for morality" is that "everyone's suffering counts equally", it then surely follows that: "Morally, one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers".

But when I state it that way you suggest that I'm misrepresenting it. 

> What the axiom says is that the world with the least suffering is the best world.

Here you give yet another formulation. It's vague: "best" in whose evaluation? It gives no account of how to aggregate suffering (if it's a basic axiom it's good to be explicit, surely?).  It's not normative, yet it needs to be, to function as your axiom.  If you mean: "morally, we ought to head for that "best"", then you need to say so.  

As it is you merely have a descriptive statement akin to "The original Star Wars movie was the best of them".

So, can you give a proper and full statement of the axiom that you will then stick to?   (We can then refer to it as "the axiom" to avoid typing it all out each time.)

> It does not follow from there that one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers. That really is just a mistake in your reasoning.

That does indeed follow from the "sole and overriding consideration for morality" being that "everyone's suffering counts equally".    And you stated that that formulation had not "added anything" to your statement.

If everyone's suffering counts equally,  then, necessarily, it must be that, in moral terms, you should not care about any one person's suffering more than any other person's suffering. 

OP Jon Stewart 19 Apr 2020
In reply to Coel Hellier:

> When I stated the axiom as:

> "The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate",

> You agreed that this was "fine" and stated that I had not "added anything" to your axiom. 

Agreed. Let's use that formulation.

> But, if the "sole and overriding consideration for morality" is that "everyone's suffering counts equally", it then surely follows that: "Morally, one should not care about the suffering of one's own family any more than one should care about the suffering of unrelated strangers".

> But when I state it that way you suggest that I'm misrepresenting it. 

Because it doesn't actually follow. I can see that it superficially appears to follow, but when you think about it more carefully, it doesn't. When you mis-apply the axiom and stop caring for your family any more than strangers, your family suffers and you suffer, and you've failed to live by the axiom - you caused unnecessary suffering. You took a superficial interpretation and cocked it all up. That's your fault, it's not a problem with the axiom.

> If everyone's suffering counts equally,  then, necessarily, it must be that, in moral terms, you should not care about any one person's suffering more than any other person's suffering. 

Only if you fail to consider the facts of the world: that wellbeing and suffering for each person depend upon caring for their loved ones. That's your failure of reasoning, not a problem with the axiom.

Post edited at 13:03
 Coel Hellier 19 Apr 2020
In reply to Jon Stewart:

> When you mis-apply the axiom and stop caring for your family any more than strangers, your family suffers and you suffer, and you've failed to live by the axiom

So then you are saying that parents *should* care more for their children, but only in the sense that a school teacher should care more for the children in the class that has been assigned to them, than they do about another class of children on the other side of the country.

Again, I don't agree that people will assent to your axiom.

In fact, it's very much a left-wing way of thinking. Left wingers tend to think primarily at the societal level, on the basis of what is good for society, for everyone as a whole, and then work from there to policy for individuals. 

Right wingers tend not to think like that.  They think primarily about the individual level and the family level. Then they see "society" as emerging from the interactions of everyone.  So they think primarily in terms of what an individual should do, and then work from there to societal-level policy. 

These top-down and bottom-up approaches are different.  Thus a left-winger might think in terms of societal-level provision for the needy, paid for by redistributive taxation, whereas a right winger might think more in terms of individual charitable donations for the needy. 

So, the point of this is that many people (particularly those who tend to think in right-wing ways) are not going to assent to your axiom!

And again, the only standing your axiom has is the assent of those who assent to it. 

So you can't just say to those who dissent "well you're just wrong", you can only say that "in my opinion you are wrong", and "that's not how I want society to be".  

Which is fine, that's how things work, with you attempting to influence wider society based on your values.  But this does not give you a universal foundation for morals based on an axiom that everyone will assent to, because plenty of people will not assent to your axiom!

Just for example, given the axiom as stated:

"The sole and overriding consideration for morality is that we should minimise suffering, where everyone's suffering counts equally in the aggregate"

... I do not assent to your axiom! And in that one fell swoop I've destroyed your claim that it has universal assent. (And I'm not just saying that to be argumentative, though I'm good at being argumentative, I really do not assent to your axiom.)

You are doing what everyone searching for an objective moral scheme does, seeking illusory backup for your opinion.

 cumbria mammoth
In reply to Coel Hellier:

Yes, I'm just advocating my own opinion. But my observation, that most people recognise a good deed as one that has been beneficial to others rather than to oneself, and hold it in even higher esteem if it has involved some sort of sacrifice, was the key point that I wanted to make towards the debate that you and Jon are having.

If we could say all people, then we would have a universal axiom, and I don't think it is far off all people who recognise a good deed to be a selfless deed. It does seem to me that this concept is as close to universal as can be found, and the right starting point for those who would like to find a rationale to help make better ethical choices. But, as it is only most, I suppose you are probably correct that morality isn't part of the fabric of the physical universe and can't be 100% derived from reason.

 aln 20 Apr 2020
In reply to Jon Stewart:

There ain't half been some clever bastards...

 Ratfeeder 20 Apr 2020
In reply to Coel Hellier:

> Absolutely not.  I am left with human values and feelings.

Which (for you and Hume) are neither morally right nor wrong. Your problem is that you want to have your cake and eat it.

Post edited at 08:04
 Coel Hellier 20 Apr 2020
In reply to cumbria mammoth:

> Yes, I'm just advocating my own opinion. But my observation, that most people recognise a good deed as one that has been beneficial to others rather than to oneself, and hold it in even higher esteem if it has involved some sort of sacrifice, was the key point that I wanted to make towards the debate that you and Jon are having.

Indeed, most people are fairly similar to each other (we have a low level of genetic diversity compared to many species), which means that, in practice, we do agree on many of our moral judgements.  That's good, that means we can readily live in large, crowded social groupings such as cities. 

(That by the way, is fairly remarkable compared to most species, that we can continually interact with large numbers of unrelated strangers without fights continually breaking out.)

But this doesn't get you anywhere towards an "objective" morality, since the moral judgments are still coming from our values.

Even if we were a species of genetic clones, with utterly identical opinions on all moral issues, it would still not give an "objective" morality; it would still be a subjective morality (one deriving from our feelings and values), however much we all agreed.

 Coel Hellier 20 Apr 2020
In reply to Ratfeeder:

> Which (for you and Hume) are neither morally right nor wrong.

They are not "objectively right" or wrong, agreed.  There is no such concept as objective right or wrong. 

So that does amount to saying there is no objective morality.  But it is not saying that there is no subjective morality.  Subjective morality -- our feelings and values about how humans interact with each other -- is very real and very important to us, it's an entirely necessary aspect of our entire social way of life.  

A subjective morality is not a second-rate morality, not a non-existent morality, and not an unimportant morality.  Indeed it is of supreme importance to us because it's about what we want. 

An objective morality, on the other hand, if it existed, now that would be an unimportant and dispensable morality.  Suppose it really were "objectively morally wrong" (whatever that is supposed to mean) to mix cheese and meat in the same dish, or to eat pork, or to wear clothes made out of more than one type of thread.

Now *that* would be an utterly dispensable and unimportant morality, because it would not relate to anything that matters to us. Thus we could just ignore and dispense with such a morality, because nothing bad would result.

If you're tempted to respond, "but maybe bad things would happen if we ignore objective morality", then that admits that what matters is our evaluation of the rules and their consequences, and that (by the definition of the word) is subjective. 

The whole discussion of meta-ethics is always bedevilled by intuitive moral realists treating the word "subjective" as a bogey word that must be avoided at all costs, coupled with a strong desire to slap the word "objective" on morality, somehow, anyhow. But those feelings are themselves just moral-realist presumptions!

Once you cure yourself of intuitive moral realism, you realise that there is nothing at all wrong with or lacking in the subjectivist account.

OP Jon Stewart 20 Apr 2020
In reply to cumbria mammoth:

> most people recognise a good deed as one that has been beneficial to others rather than to oneself, and this is held in even higher esteem if it has involved some sort of sacrifice

Totally agree. I think it's both possible and useful to use this consensus intuition to analyse moral choices. 

> It does seem to me that this concept is as close to universal as can be found and the right starting point for those who would like to find a rationale to help make better ethical choices.  But, as it is only most, then I suppose you are probably correct that morality isn't part of the fabric of the physical universe and can't be 100% derived from reason.

Exactly. I'm making no spooky claim that morality has to be analysed in this way because it's somehow part of the fabric of the universe. It isn't - it's just a matter of how human beings feel about human behaviour. I'm just promoting it as a good idea, because it works - particularly for policy where we don't encounter the confusing issues that stem from our evolved drives. And what I mean by "works" is that adopting this view of morality accelerates moral progress, firstly by establishing a meaning for "moral progress". You can't have progress unless you develop some sort of consensus around what is meant by good and bad, i.e. have a common-sense (not philosophical) objective morality. 

I think this worries Coel because some of his ideas about right and wrong will come up looking a bit shit under this analysis - so therefore the analysis must be wrong! Haha!

OP Jon Stewart 20 Apr 2020
In reply to Coel Hellier:

> So then you are saying that parents *should* care more for their children, but only in the sense...

I don't know what you mean by this. Parents simply *do* care more for their own children. I don't understand what more can usefully be said on this point.

> Again, I don't agree that people will assent to your axiom.

I think this is a fair point to clear up. I need to be more rigorous about what I mean by "universal" and I accept that it isn't the case that everyone in the world is going to sign up to the axiom if I were to send it out by email with a voting button.

What I actually mean is that the axiom formalises moral intuitions held by nearly everyone - precisely the intuitions cumbria mammoth cited: that caring for your family is just normal behaviour, but making sacrifices to reduce the suffering of a stranger is a moral virtue. Whether or not people would "sign up" to a formalised expression of this is, on reflection, a pretty daft way to think about it, since very few people are interested in thinking about morality in the abstract - people would in general just delete the email and shrug their shoulders. 

The claim I am making about the axiom is that it does a good, useful job of formalising commonly held moral intuitions (a consensus on which we can base common-sense objectivity). I'll come back to your objection to this claim.

> In fact, it's very much a left-wing way of thinking

> These top-down and bottom-up approaches are different.

That's an interesting point, although I don't think it's as left-and-right as you make out. The way of thinking that this analysis leads to is precisely that of those dreadful leftists like Steven Pinker and Sam Harris, the very people accused by the far left of being alt-right figureheads! This analysis is the foundation of secular humanism, not leftism.

I accept your point that it makes secular humanists like me come across as smug and arrogant about the superiority of our political beliefs, in contrast to those on the right. This is because we've got reason and evidence on our side. We look at what works in the world to reduce suffering and say "we should do that, because that's what reason and evidence says is best". Someone on the right might say "well I don't like paying taxes, and that's my values", we just roll our eyes and think "dickhead". We can then ask "well show me why that's better - where does it work better? How would it work better?". And what kind of response might we receive? Something like "I'm entitled to my opinion, you lefties are all so arrogant". And then we roll our eyes again.

> And again, the only standing your axiom has is the assent of those who assent to it. 

Or, it does a good job of formalising commonly held moral intuitions.

> ... I do not assent to your axiom! And in that one fell swoop I've destroyed your claim that it has universal assent. (And I'm not just saying that to be argumentative, though I'm good at being argumentative, I really do not assent to your axiom.)

That sounds quite a lot like an "I'm entitled to my opinion" non-response to the question of whether or not the axiom does a good job of formalising your moral intuitions. We have been trying to get to the bottom of what it is you actually disagree with, and we haven't been getting very far. I'll see if I can help:

1. The sole and overriding consideration for morality is that we should minimise suffering

As I understand it, you agree it's entirely obvious that morality is about wellbeing, aka minimising suffering. But if I've understood you correctly, you believe that there's something more that isn't captured, which you call "my values". I asked you to give an example of a "value" you want to hold onto that didn't reduce to wellbeing, and I can't remember what your response was. It's a long thread to search through - could you remind me?

2. where everyone's suffering counts equally in the aggregate

You described this as the "sticking point", but above it's absolutely clear that "signing up" to this axiom does not require you to care the same about strangers as your family - that was just an error of understanding. The way I've framed it to try to clarify what it meant is: "do you think it's moral to transfer wellbeing from one person to another - say from a stranger to your child - while keeping the total amount of suffering equal?".

So you say you won't "sign up" to the axiom, which I can clarify as meaning "this axiom does not correctly formalise my moral intuitions". So show me how it fails.

Post edited at 11:18
 Coel Hellier 20 Apr 2020
In reply to Jon Stewart:

> It isn't - it's just a matter of how human beings feel about human behaviour. I'm just promoting it as a good idea,

Exactly.  You are acting as a moral agent, advocating your moral ideas and attempting to influence society to that end.

This is entirely in line with my account of morality.  I'm only disagreeing when you attempt to turn your system into a spuriously "objective" prescription.

> And what I mean by "works" is that adopting this view of morality accelerates moral progress, firstly by establishing a meaning for "moral progress".

Any moral system will advance "moral progress" as evaluated by that system! Even Nazism would advance "moral progress" where "moral progress" is defined by Nazism.

OP Jon Stewart 20 Apr 2020
In reply to Coel Hellier:

> Even Nazism would advance "moral progress" where "moral progress" is defined by Nazism.

The difference being that Nazism, unlike secular humanism, wasn't founded on consensus moral intuitions.

