In reply to Coel Hellier:
> No, not necessarily. I maintain that facts, reason, experience can indeed influence and change our feelings and values, but only by acting on a brain that already is full of feelings and values.
I wouldn't dispute that. I'm not claiming that when Tom experiences the plight of the pigs he is devoid of feelings and values. Of course he isn't. But what matters is how his feelings and values after the experience differ from those before it. What is it about the situation Tom experiences that causes the change? Whatever it is, it's a feature of the situation itself, not just a feature of Tom's brain. And that is an objective, morally relevant feature. It's a bit of a truism, isn't it, to say that there need to be feelings and values if reason and experience are to change them. But what is it about these feelings and values that makes them susceptible to change? Or does the brain have a sort of repertoire of pre-existing feelings and values that pop up when triggered? In that case they would have to be immutable, though exchangeable.
> But this does not require that some of the values be "primary" or immutable. One can conceive of:
So the brain doesn't have a pre-existing repertoire of feelings and values that pop up when triggered? A more plausible account, to my mind, would be to say that human beings have a pre-existing capacity to feel emotions and form values, and that these evolve in tandem with our cognitive states - our experience and reason. But on a Humean account, which holds that desires are strictly non-cognitive, it's difficult to see how desires can be influenced by beliefs, reason or even experience. A person either likes the taste of strawberries or she doesn't; not much in the way of reasoning or experience is going to alter that. The pertinent question is: are desires necessarily like that (i.e. strictly non-cognitive)?
> {Facts, reason, experience} acting on {value A} lead to {value B}, but then {facts, reason, experience} acting on {value B} lead to rejecting {value A} and adopting {value not-A}.
> The point is that human brains are neural networks, and no part of a neural network is primary or immutable. All parts of the network can be changed under the influence of the rest of the neural network.
Yes, but a sharp Humean distinction between facts and values, cognitive beliefs and non-cognitive desires, is not really compatible with this account. Cognitive states will affect other cognitive states, but non-cognitive states will remain unaffected by a change in cognitive states. Only if desires can themselves be cognitive states can they be influenced by other cognitive states. Pain, for example, is not influenced by believing one thing rather than another.
> But -- I assert -- the experience would have no effect if it were not operating on a human that already had feelings and values in their neural-network brain, feelings that would be triggered by the experience.
Again, that is a truism in the sense that if a human had no feelings and values, there would be no feelings and values for experience to have an effect on. The point is, though, that the feelings and values after the experience are different from those before it. Is that because one set of pre-existing feelings and values is popping down out of the way and being replaced by a different set popping up, or is it that the original set has mutated into a different set, such that the original set no longer exists?
> Thus, you could create a robot with any amount of factual knowledge and reasoning ability, and then expose that robot to any number of experiences. But, that alone would never lead to the robot experiencing revulsion at the plight of the pigs. Only if the robot already were a feeling, valuing entity, with capability to feel revulsion, would such feelings be aroused.
But I'm not denying that, as human beings, we have the innate capacity to experience emotions and to evaluate. That is part of our evolutionary heritage. Of course the robot would need the capacity to experience emotions and values in order to have them. But is giving the robot a pre-existing set of emotions and values a fair representation of how the human brain operates?
So far, I've gone along with your lumping emotions and values together as subjective mental states. The assumption has been that values are basically emotive and hence non-cognitive. But that, again, is a Humean assumption. I think there is an important distinction to be made here.

It's much more difficult to make sense of the idea of the brain being full of pre-existing values than of pre-existing emotions. That's because it makes little sense to abstract the evaluation of something from the thing or state of affairs being evaluated: the former is not intelligible independently of the latter. An emotion, such as anger, is much more conceivable in the abstract. You could see someone get angry and not even know what they're getting angry about. So you could imagine anger as a pre-existing entity in the brain's evolved repertoire of emotions. That's much more difficult in the case of values. How would you know what value I place on something without knowing what it is I'm placing value on? The value is attached to the object, person, action or state of affairs. Hence we tend to talk about things as sometimes having intrinsic or universal value, not just value for Tom or you or me. We could talk about the intrinsic value of justice, for example. When value is attached to a person or to a person's action, we're talking about moral values: the quality of a person's character, the rightness or wrongness of an action.

When Tom experiences the situation of the suffering pigs, he may well feel the emotion of revulsion. But that is an independently intelligible mental state: you could abstract the revulsion from the state of affairs that causes it. That leaves the value, or rather disvalue, that Tom attaches to the state of affairs as a separate entity. Unlike the feeling of revulsion, the disvalue that Tom perceives is a property of the state of affairs itself, and it's difficult to see it as a pre-existing disvalue in the brain's repertoire of pre-existing states.
> Thus there is no logical path from facts or reason alone to feelings and values. Hume was right on that. Us arriving at moral judgements depends on us being social animals, programmed by evolution to feel pity or revulsion or sympathy or anger or all sorts of other emotions. Though, yes, experience, facts and reason can strongly influence whether and how those emotions are triggered.
Indeed there is no logical (deductive) path between the two, but then neither is there a logical (deductive) path between cause and effect (another Humean insight). But if (contra Hume) feelings and values are intrinsic aspects of our experiences of how things are in the world, no such path is needed. Similarly, if causes and effects are not really totally separate events, but are aspects of the same event, then the need for a logical path between them becomes redundant. I don't dispute the evolutionary story. We are "programmed" to understand the world in terms of cause and effect. But that doesn't mean statements about cause and effect are not objective statements about the world. It just means, in Kantian terms, that they are objective in empirical reality, but not in reality as it is in itself.
> So, as an aside, your above sentence might be better phrased: "Someone who doesn't care about the welfare of sentient creatures that are not suffering in front of him may well start to care when witnessing the suffering in person".
I was making the rather stronger claim that someone who doesn't care about animal welfare at all may start to care when they actually witness animals suffering. An experience of reality can change a mind.
> (In fact, most humans are like that, we may not care very much about a child starving to death thousands of miles away in Africa, but would care a lot if that same child were suffering in our living room. Utilitarians don't like this aspect of our evolved natures.)
I think it's difficult to generalize; people are different. I don't think it's that most people don't care, it's just that they already have responsibilities to family and friends and can't be responsible for everyone in the world. They probably feel a bit powerless. Perhaps they'll donate to charities as compensation for that shortcoming. But I agree, we do live in an outrageously unequal world, and it's something that morality requires us collectively to address.
> Whereas I would say the "sense of the wrongness of this situation" is simply a dislike, a revulsion. Indeed, likely evolution took existing aesthetic sentiments (e.g. revulsion at putrid meat) and re-purposed the mechanism to produce revulsion about another sentient-being suffering. So moral sentiments are basically a sub-category of aesthetic sentiments.
I don't think that does any justice whatsoever to how we actually experience moral values. To borrow a phrase of Jon Stewart's, this is incredibly wishy-washy. Moral judgements sometimes have to be made in life-or-death situations; I hardly think that's analogous to aesthetic sentiments. I don't think you can reduce ethics to this level of sentimentality, and that's one of the problems with Humean ethics.
> But what does that mean? I can understand what it means if the declaration that something "is morally wrong" is a declaration made by a human, and what they are saying is equivalent to them expressing a revulsion and a desire that it stop. That's a value judgement, and value judgements require someone to make that judgement based on their feelings and values. Necessarily, such value judgements are subjective (being properties of their brain state).
The belief that people are dying from Covid-19 is also a property of people's "brain states" (it's a mental state); it requires someone to have that belief; it's a subjective state. Nevertheless, it's either true or false, because it's a cognitive mental state: either people are dying from Covid-19 or they're not.
> But, if that is not the correct interpretation, then what does "X ... is in fact morally wrong" even mean? Can you re-phrase it to elucidate what you mean by "morally wrong"?
It means it's something we shouldn't do, irrespective of what we may otherwise want. Like, for example, going out at the present time and coughing in people's faces. You shouldn't do that even if you don't care about spreading Covid-19. "If you want to help reduce the spread of coronavirus, then you shouldn't cough in people's faces" just doesn't cut it for me, I'm afraid. It's totally inadequate.
> So, could a robot, programmed with facts and reason (but having no pre-existing feelings, desires or values) also come to that apprehension?
If it were programmed to have the capacity to feel emotions and have desires, then yes.