Bayesian Brain hypothesis and climbing


I’m looking into different factors influencing decision making during climbing and came across the Bayesian Brain hypothesis. Whilst a little more based in materialism than my interests, it does seem to provide some good insights into what’s going on at a ‘computational’ level. I wondered if anyone had any deeper understanding of this, had thought about it already, or had some good ideas?

 Dr.S at work 07 Jun 2022
In reply to schrodingers_dog:

Prior to your post I had never heard of it - can you summarise your thoughts so far?

In reply to Dr.S at work:

My thoughts are probably quite basic, so I was hesitant for fear that I’d misunderstood the concept. The best I can do at the moment is that the Bayesian Brain hypothesis is a heuristic way of thinking about how we make decisions based upon predictions of outcome at a conscious and unconscious level. For example, I was climbing on Cir Mhor recently and stood under a strange crack and slab climb. It was my 3rd day on, each with a 2.5hr hike in, and I’d just spent an hour making a treacherous scramble up a gulley. The climbing style was unfamiliar to me and I was tired and a bit anxious. While the grade of E3 was within my usual capability, with all the other factors in play it was hard to predict the outcome with enough certainty to say I might not find myself in serious trouble. So my question is: would applying some sort of simplified Bayesian analysis help make the decision about setting off, and subsequent decisions on the way, increasing a sense of confidence and control? Or does this seem like a misinterpretation?
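To make that concrete, here’s roughly what I had in mind by a ‘simplified Bayesian analysis’, as a rough sketch; every probability below is invented purely for illustration:

```python
# Hypothetical sketch of a Bayesian update for the "should I set off?" decision.
# All probabilities below are invented for illustration only.

# Prior belief that the route will go without serious trouble, from past E3s.
p_ok = 0.85

# Observations at the base, with guessed likelihoods:
# (P(observation | it goes ok), P(observation | serious trouble))
evidence = [
    (0.6, 0.9),  # tired after three days on
    (0.7, 0.9),  # unfamiliar climbing style
    (0.8, 0.9),  # anxious after the gulley scramble
]

# Bayes' rule applied once per observation.
for p_obs_ok, p_obs_trouble in evidence:
    numerator = p_obs_ok * p_ok
    p_ok = numerator / (numerator + p_obs_trouble * (1 - p_ok))

print(f"Updated P(it goes ok) = {p_ok:.2f}")
```

The point isn’t the numbers (which are guesses), just that each factor nudges the prior down a bit rather than vetoing the route outright.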

 DaveHK 07 Jun 2022
In reply to schrodingers_dog:

> So my question is: would applying some sort of simplified Bayesian analysis help make the decision about setting off, and subsequent decisions on the way, increasing a sense of confidence and control? Or does this seem like a misinterpretation?

My reading of it (and I'm probably even more likely to be wrong than you) is that the subconscious brain is somehow applying a process that yields similar results to  Bayesian analysis.

In reply to DaveHK:

Yes, so what I was thinking was: could a person shine the ‘spotlight’ of their attention onto this process, making it more conscious and useful as a decision-making tool?

 profitofdoom 07 Jun 2022
In reply to schrodingers_dog:

> ............................Or does this seem like a misinterpretation. 

TO BE HONEST it seems like a lot of boll*x to me

That's just me probably

 DaveHK 07 Jun 2022
In reply to schrodingers_dog:

> Yes, so what I was thinking was: could a person shine the ‘spotlight’ of their attention onto this process, making it more conscious and useful as a decision-making tool?

I'd be worried that would make the magic disappear.  

In reply to DaveHK:

This is what drove me to thinking about it: could we science the magic out of climbing, and would it be effective at improving performance by reducing anxiety and improving decision making? There could be a set of questions you ask yourself, like a protocol, that reduces feelings of uncertainty and their consequences.

In reply to profitofdoom:

That’s a reasonable interpretation tbf 

 Marek 07 Jun 2022
In reply to schrodingers_dog:

The whole "Bayesian Brain" business seems to be an attempt to avoid stating the obvious ("We make predictions about the future based on our past experiences") by appropriating and abusing terminology from fields which have well-defined terms, such as 'Bayesian' (from maths/statistics), 'free energy' (from physics/thermodynamics) and 'predictive coding' (from computer science/machine learning). Anyway, that's the cynical view - cognitive neuroscientists will no doubt argue otherwise.

Of course, the elephant in the room is that humans are demonstrably poor at learning from their mistakes and are generally poor at intuitive statistics.

Post edited at 22:13
In reply to Marek:

I don’t think it’s cynical, but it’s not quite right. I don’t believe we make predictions about the future based solely on past experiences. At least if we did, the world might look like a different place! For example, an error signal, e.g. ‘it’s time to back off’, may be generated unnecessarily or not be generated when clearly needed.

In reply to Marek:

Haha you beat me to it in your edit 

 Marek 07 Jun 2022
In reply to schrodingers_dog:

> ... For example an error signal e.g. it’s time to back off...

Case in point: 'time to back off' is not an error signal (in predictive coding terms). The error signal is the difference between what you thought would happen and what actually did happen. But since your reaction to what you thought would happen changed the future (presumably), the whole business of predictive coding is invalidated.
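As a toy illustration of what 'error signal' means in that framework (nothing here is a claim about how neurons actually implement it):

```python
# Toy predictive-coding loop: the error signal is the mismatch between
# the model's prediction and what was actually observed.
def update(prediction, observation, learning_rate=0.3):
    error = observation - prediction     # the error signal proper
    prediction += learning_rate * error  # prediction revised towards reality
    return prediction, error

prediction = 0.9  # e.g. "I expected that hold to feel solid", on an invented 0-1 scale
for observation in (0.5, 0.5, 0.5):  # it repeatedly feels worse than expected
    prediction, error = update(prediction, observation)

print(f"prediction={prediction:.2f}, last error={error:.2f}")
```

The error is what the model didn't see coming, not a command like 'back off'; any such command would be a downstream response to the error.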

 morpcat 07 Jun 2022
In reply to schrodingers_dog:

The short answer is that BBH is bunk, that Bayesian probability is actually counter-intuitive for a lot of people, and that humans are famously bad at making these kinds of objective judgements. Our brains work using heuristics, not mathematics.

However, it doesn't matter whether our calculation of risk is heuristic or precise, our brains do this calculation based on learnings from prior experience. Therefore, building up more experience is going to help improve your judgement, regardless of whether BBH is true or not.

Post edited at 22:50
 Bob Kemp 07 Jun 2022
In reply to schrodingers_dog:

There are a couple of Scientific American blog posts by John Horgan which I think you’ll find interesting and useful. This is the first, which gives an overview and his thoughts on the subject- 

https://blogs.scientificamerican.com/cross-check/bayes-s-theorem-what-s-the...


- and he follows it up with this one where he reports on a couple of speakers, for and against, at a symposium he attended on ‘Is the Brain Bayesian?’

https://blogs.scientificamerican.com/cross-check/are-brains-bayesian/

They’re thoughtful and entertaining, well worth a read. 

 Marek 07 Jun 2022
In reply to Bob Kemp:

> They’re thoughtful and entertaining, well worth a read. 

Entertaining, perhaps. Thoughtful, less so. The author admits that "many brains, mine included, have a hard time grasping Bayes’ theorem" (a very simple equation), which hardly makes him qualified to judge or even comment on the subject matter.
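For what it's worth, the equation itself is one line; what trips people up is applying it. The standard textbook example (a 99%-accurate test for a condition with a 1% base rate) shows how counter-intuitive the result can be:

```python
def bayes(p_h, p_e_given_h, p_e):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# P(positive test) summed over both hypotheses: true positives + false positives.
p_positive = 0.99 * 0.01 + 0.01 * 0.99
posterior = bayes(0.01, 0.99, p_positive)
print(f"P(condition | positive test) = {posterior:.2f}")  # 0.50, not 0.99
```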

 morpcat 07 Jun 2022
In reply to Marek:

I think you've mistaken modesty for incapability.

In reply to morpcat:

I wonder what role the 'collective unconscious' plays? If, as in idealism, our brains and all matter are representations of consciousness, which is in some way linked to a broader collective unconscious (as opposed to just dealing with the ego, the apparent consciousness), then BBH does work as a useful heuristic to guide decision making, taking into account all information sources available.

Thanks for the heads up about the error signal, I knew what I was saying wasn't quite right....

 morpcat 07 Jun 2022
In reply to schrodingers_dog:

Recommended reading, if you haven't already, is Thinking, Fast and Slow (Kahneman). That will explain some of the differences between slow/calculated/conscious decisions and fast/heuristic/unconscious decisions, and the circumstances under which we apply each (referred to as "System 1 & 2" in the book, with System 1 being - I *think* - analogous to what you mean by collective unconscious)

 Marek 07 Jun 2022
In reply to schrodingers_dog:

> I wonder what role the 'collective unconscious' plays?

I suppose 'fear of falling' fits that description, so 'quite a lot'.

> If, as in idealism, our brains and all matter are representations of consciousness, which is in some way linked to a broader collective unconscious (as opposed to just dealing with the ego, the apparent consciousness), then BBH does work as a useful heuristic to guide decision making, taking into account all information sources available.

No idea what that means. If anything.

In reply to Marek:

Sorry I didn't explain that very well, here's a link with the basic principles of idealism 

youtube.com/watch?v=Nls4o_mR-sY&

 morpcat 07 Jun 2022
In reply to schrodingers_dog:

> Sorry I didn't explain that very well, here's a link with the basic principles of idealism 

That video (deliberately) doesn't offer any explanations. 

Further recommended reading, if you haven't already, is Being You (Seth). If you prefer you can get a very short snippet with some valuable insights in his Ted talk:  youtube.com/watch?v=lyu7v7nWzfo&

 Marek 07 Jun 2022
In reply to schrodingers_dog:

> Sorry I didn't explain that very well, here's a link with the basic principles of idealism 

Sorry, no patience for another video, but if your position rests on philosophical idealism, then discussion is pointless since idealism allows you to say anything and prove nothing.

In reply to morpcat:

Thanks, I've watched the Seth video. I'm most of the way through Mark Solms' 'The Hidden Spring', which attempts to answer the hard problem of consciousness through a materialist world view, much as Seth does there, with the conscious experience of self, others and the world apparently manifesting from various parts somewhere in the midbrain.

Seth appears to be neuroscience's own Yuval Harari; from my perspective a depressing and ultimately nihilistic world view.

 jpicksley 08 Jun 2022
In reply to schrodingers_dog:

I think you may be overthinking the Bayesian element here. In my opinion, when humans make decisions they are using past information/experience, potentially combined with other external information (in a Bayesian sense these are known as interventions). This is classic Bayesianism at the conceptual level, and our brains do this really well subconsciously. In a sense, the use of "Bayesian" is just using a word to describe what humans have done forever. It's why Bayesianism is such a compelling idea, why it works so well, and why it is increasingly popular for mathematical modelling and prediction. To try to make your thinking process a conscious Bayesian process (i.e. combining prior information, posterior information and intervention information) would just massively over-complicate matters and is unnecessary, since we already do it really well subconsciously.

On another note, for morpcat, what does "objective judgements" mean? Aren't judgements subjective by definition?

 jpicksley 08 Jun 2022
In reply to schrodingers_dog:

To answer this directly (just noticed it didn't put in the comment I was responding to - it was your second comment in the thread), you're already doing a simple conceptual Bayesian analysis. You're thinking through the pros and cons based on past experience and other external factors. That is the basic idea behind Bayesianism. Now whether you make a good decision or not is a different matter and will depend on your personal ability to honestly appraise the different factors but I don't think trying to extend the "Bayesian analysis" into a more formal analysis will help at all. To do that you'd have to start thinking about attaching probabilities to events and without any meaningful data to guide you in this you'd just be guessing and the analysis would be, to use a technical word, crap.
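To put numbers on the "you'd just be guessing" point: the posterior is hostage to whichever prior you invent. With a likelihood ratio of 3 for the warning signs (itself a guess), two equally defensible priors give very different answers:

```python
# Illustration: with no real data, the answer is dominated by whatever
# numbers you guess. All values below are invented.
def posterior(prior, likelihood_ratio):
    """P(trouble | warning signs) via the odds form of Bayes' rule."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

print(f"prior 5%:  P = {posterior(0.05, 3.0):.2f}")  # ~0.14
print(f"prior 20%: P = {posterior(0.20, 3.0):.2f}")  # ~0.43
```

Neither answer is more defensible than the other, which is why the formal analysis would be, to use that technical word, crap.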

Post edited at 10:24
 morpcat 08 Jun 2022
In reply to jpicksley:

> On another note, for morpcat, what does "objective judgements" mean? Aren't judgements subjective by definition?

Precisely. We want to make objective judgements, and we try hard to do so, but we can't. 

 jpicksley 08 Jun 2022
In reply to morpcat:

Ok, I get your point. Not sure I agree with it though but this could become a pointless debate so I'll leave it at that.

 morpcat 08 Jun 2022
In reply to jpicksley:

Wondering if you mean that the word "judgement" by definition must be subjective? In which case, meh, semantics, don't read too much into my word choice for an internet forum post I typed with my phone

 planetmarshall 08 Jun 2022
In reply to morpcat:

> Recommended reading, if you haven't already, is Thinking, Fast and Slow (Kahneman).

It's a great book, but worth keeping in mind that many of the studies Kahneman cites were subject to the "replication crisis".

Kahneman's response to the criticism is here - http://retractionwatch.com/2017/02/20/placed-much-faith-underpowered-studie...

 timparkin 08 Jun 2022
In reply to jpicksley:

> To answer this directly (just noticed it didn't put in the comment I was responding to - it was your second comment in the thread), you're already doing a simple conceptual Bayesian analysis. You're thinking through the pros and cons based on past experience and other external factors. That is the basic idea behind Bayesianism. Now whether you make a good decision or not is a different matter and will depend on your personal ability to honestly appraise the different factors but I don't think trying to extend the "Bayesian analysis" into a more formal analysis will help at all. To do that you'd have to start thinking about attaching probabilities to events and without any meaningful data to guide you in this you'd just be guessing and the analysis would be, to use a technical word, crap.

Yep - unless you have lots of formal probability samples, it's unlikely that you'll build a decent Bayesian 'engine'. What you're really doing with your brain is more like a neural network. You've taken nearly every experience you've had and overlaid them all on top of each other, reinforcing paths with hormones, repetition etc., eliminating others, fading some with time, and the end result has none of the formal structure that a Bayesian system would have; it's just a black box of neurons with inputs (what's going on) and outputs (what you think and feel about it). A good blog post about the differences can be read here

https://ehudreiter.com/2021/07/05/bayesian-vs-neural-networks/
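To give a flavour of the difference, here's the simplest possible 'black box of neurons': a single artificial neuron whose weights are shaped by repeated experience, with no explicit probabilities anywhere (the factors and data are entirely made up):

```python
# Toy single-neuron "black box": weights are shaped by experience, and the
# final behaviour contains no explicit priors or likelihoods at all.
inputs = [  # (tired, unfamiliar_style, good_gear) -> backed_off?  (made-up data)
    ([1, 1, 0], 1),
    ([0, 0, 1], 0),
    ([1, 0, 1], 0),
    ([0, 1, 0], 1),
]
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5

for _ in range(20):                       # repetition reinforces some paths...
    for x, target in inputs:
        out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        err = target - out                # ...and fades others
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

print(weights, bias)
```

The trained weights just encode which past factors mattered; there's no prior or likelihood anywhere to point at, which is the black-box point.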

 Dave Garnett 08 Jun 2022
In reply to morpcat:

> Recommended reading, if you haven't already, is Thinking, Fast and Slow (Kahneman). That will explain some of the differences between slow/calculated/conscious decisions and fast/heuristic/unconscious decisions, and the circumstances under which we apply each (referred to as "System 1 & 2" in the book, with System 1 being - I *think* - analogous to what you mean by collective unconscious)

I was a bit underwhelmed by this, to be honest.  It seemed to me a carefully researched and thoughtfully presented theory of the rather obvious, but maybe I didn't understand the question he was trying to answer. 

 jpicksley 08 Jun 2022
In reply to timparkin:

Hmmm, I think we're straying into a whole different debate now and not one that I want to get into in this thread, so I'll respectfully decline to bite...

 Marek 08 Jun 2022
In reply to morpcat:

> Wondering if you mean that the word "judgement" by definition must be subjective? In which case, meh, semantics, don't read too much into my word choice for an internet forum post I typed with my phone

I think that this issue of semantics is what ultimately makes these discussions pretty sterile (no useful output other than research grants). They're really just highlighting the fact that terms like 'consciousness' and 'self-awareness' and so on are ill-defined and therefore any further statement about them is pretty meaningless. All too often the two term above (for example) are simply taken to mean "whatever it is that only living things/humans have" - which is a bit circular.

In reply to Marek:

The sterility comes from a materialist world view in which the brain is a biological computer generating experiences, and all we need to do is understand which bit does what, how it does it and why it does it.

If a mind can tap into something outside of present notions of ego (call it the collective unconscious), then a whole new dimension of possibilities opens up. Talking about the popular divisions of consciousness and unconsciousness in decision making, generally known as the ego, id and super-ego, I like Iain McGilchrist's description of the spotlight of attention, in which the beam can be made broader or narrower.

cb294 08 Jun 2022
In reply to Marek:

/rant on

It is about time that the philosophers shut up and their departments are kicked out of universities.

They have a 3000 year long publication record amounting to nothing at all with respect to resolving the problem of how consciousness arises.

Modern neuroscience has done more to understand how consciousness arises physiologically, ontogenetically, and during evolution in a few years than philosophy has done across millennia.

/rant off

In reply to cb294:

God was sitting in Heaven one day when a scientist said to Him, "Lord, we don't need you anymore. Science has finally figured out a way to create life out of nothing. In other words, we can now do what you did in the 'beginning'."

"Oh, is that so? Tell me..." replied God.

"Well", said the scientist, "we can take dirt and form it into the likeness of you and breathe life into it, thus creating man."

"Well, that's interesting. Show me. "

So the scientist bent down to the earth and started to mold the soil.

"Oh no, no, no..." interrupted God,

"Get your own dirt."

 Marek 08 Jun 2022
In reply to schrodingers_dog:

> The sterility comes from a materialist world view in which the brain is a biological computer generating experiences, and all we need to do is understand which bit does what, how it does it and why it does it.

No, it comes from endless semantic-free verbal meanderings which purport to build deep insights on top of vacuous statements.

> If a mind can tap into something outside of present notions of ego (call it the collective unconscious), then a whole new dimension of possibilities opens up. Talking about the popular divisions of consciousness and unconsciousness in decision making, generally known as the ego, id and super-ego, I like Iain McGilchrist's description of the spotlight of attention, in which the beam can be made broader or narrower.

I rest my case.

In reply to Marek:

Not a fan of Heidegger then....

In reply to schrodingers_dog:

I’m not entirely sure what you were getting at, but Heidegger it ain’t.

 john arran 08 Jun 2022
In reply to schrodingers_dog:

... which illustrates well that, however advanced science gets and however much of 'God's work' is shown to be intelligible to and perhaps even reproducible by man, there will most likely always be a more distant horizon beyond which those asserting divinity may claim it to lie.

 Marek 08 Jun 2022
In reply to schrodingers_dog:

> Not a fan of Heidegger then....

You might find this thought provoking: https://www.elsewhere.org/pomo/

 timparkin 08 Jun 2022
In reply to jpicksley:

> Hmmm, I think we're straying into a whole different debate now and not one that I want to get into in this thread, so I'll respectfully decline to bite...

I was basically agreeing with you. I'll summarise. The brain is not Bayesian, it's a neural network. The influences of the neural network may be modelled using Bayesian approaches. 

cb294 08 Jun 2022
In reply to Marek:

Brilliant!

Deleted a longer post to avoid spoiling.

CB

In reply to Stuart Williams:

Ha. I realised it might be read like that after the post. Don’t worry I’m aware of my place, here at least. 

In reply to Marek:

Maybe you prefer a post like ‘when is e2 not e2’ when it’s 10 feet of a neo cultural reaction to hyper critical parenting and a failure to live up to expectations of self and others 

 Marek 08 Jun 2022
In reply to timparkin:

> ... The brain is not Bayesian, ...

Agreed

> ...it's a neural network. 

Not sure that's quite right, assuming you are using 'NN' in the current computer science sense (as opposed to 'a network of neurons'). There are behavioural similarities (that was the point of NNs), but I think it's a step too far to say the brain is an NN. You'd have to take the string-theoretic view, i.e., "We don't know what an NN is, but whatever the brain turns out to be, we'll say 'That's an NN!' So we'll have been right all along."

I think my main complaint is that NN behaviour is very dependent on the internal network structure of the NN, and I don't think there's an equivalent structural dependency in the brain. I could of course be quite wrong.

 Marek 08 Jun 2022
In reply to schrodingers_dog:

> Maybe you prefer a post like ‘when is e2 not e2’ when it’s 10 feet of a neo cultural reaction to hyper critical parenting and a failure to live up to expectations of self and others 

I wouldn't exactly prefer it, but I also wouldn't be surprised by it. Not in this thread, anyway.

 profitofdoom 08 Jun 2022
In reply to cb294:

> /rant on

> It is about time that the philosophers shut up and their departments are kicked out of universities.

> They have a 3000 year long publication record amounting to nothing at all with respect to resolving the problem of how consciousness arises.

> Modern neuroscience has done more to understand how consciousness arises physiologically, ontogenetically, and during evolution in a few years than philosophy has done across millennia.

> /rant off

Excellent rant! "kicked out", "nothing at all", "across millennia"

cb294 08 Jun 2022
In reply to Marek:

Quite a bit outside my field of expertise, but I always thought the difference was that the structural dependence seen in artificial NNs is, in higher biological systems, almost completely masked by redundancy.

However, like you I have to admit I may well be completely wrong.

 Marek 08 Jun 2022
In reply to cb294:

> Quite a bit outside my field of expertise, but I always thought the difference was that the structural dependence seen in artificial NNs is, in higher biological systems, almost completely masked by redundancy.

It's certainly been posited, but far from proven.

> However, like you I have to admit I may well be completely wrong.

I like to assume I'm probably wrong. That way there's always a positive take-away from any outcome!

In reply to john arran:

So God doesn't exist?  

 timparkin 08 Jun 2022
In reply to Marek:

> I think my main complaint is that NN behaviour is very dependent on the internal network structure of the NN, and I don't think there's an equivalent structural dependency in the brain. I could of course be quite wrong.

I was using mostly from a reductive biological viewpoint. As in the early experiments using actual neurons (a later version of which can be seen here - https://futurism.com/the-byte/brain-cells-play-pong)

There's obviously a lot more going on with the actual input signals (suppression circuits, reduced signals for multiple stimuli etc) but the basics are a sort of black box network of neurons

 Fredt 08 Jun 2022
In reply to schrodingers_dog:

Sounds like STBO to me, but the whole conversation reminds me of a new and unique decision making process I chanced upon at the start of my alpine career in the late 70s.

I was soloing Mont Blanc, and I was terrified, of the unknown, the conditions, the effect of altitude.

I was first out of the Gouter. I was plodding up towards the Dome, and came upon a wide crevasse. I walked along the edge, and eventually found a snow bridge. It didn’t look great, but I figured it would take my weight. 
Then I wondered if the altitude was affecting my judgment, so that judgment couldn’t be trusted. So I turned back.

But then I thought. ‘If the altitude is affecting my judgement, it wouldn’t have occurred to me that the altitude was affecting my judgment.’
So therefore my judgment was not affected, therefore the snow bridge was probably sound. 
So I walked across it.

 john arran 08 Jun 2022
In reply to schrodingers_dog:

> So God doesn't exist?  

Evidence for existence is in short supply.

In reply to schrodingers_dog:

> call it the collective unconscious

Ah, that's who you are...

In reply to john arran:

Dutifully agnostic or fundamental atheism in the Sam Harris / Dawkins vein? There are no fundamental truths and values come as a pick and mix with reciprocal altruism being all that stands between us and bloody murder. The only meaning is what we make of it and the stairway to heaven lies in Silicon Valley and involves being uploaded onto a super computer at the point of death. Glory be 😂 

Post edited at 19:09
 Dr.S at work 08 Jun 2022
In reply to Dr.S at work:

> Prior to your post I had never heard of it - can you summarise your thoughts so far?

Given my gag only got two likes, my confidence that the brains of UKcers are not Bayesian has increased.

 john arran 08 Jun 2022
In reply to schrodingers_dog:

> Dutifully agnostic or fundamental atheism in the Sam Harris / Dawkins vein?

Rather more Occamian. Trying to fit a concept of God between the cracks of currently known science seems rather more contrived than is justified for a core belief. I'm happy accepting elements of ignorance if it means I don't need to believe things for which there is no evidence.

> There are no fundamental truths and values come as a pick and mix with reciprocal altruism being all that stands between us and bloody murder. The only meaning is what we make of it and the stairway to heaven lies in Silicon Valley and involves being uploaded onto a super computer at the point of death. Glory be 😂 

You're welcome to believe that if you so choose. I don't.

In reply to Fredt:

I love this story, it reminds me of the mountains and rivers Zen path, something like- 

Before I set out on the path to enlightenment, mountains were mountains and rivers were rivers, after I set out mountains were no longer mountains and rivers were no longer rivers, when I achieved enlightenment mountains were mountains and rivers were rivers 

Like you say STBO 

In reply to john arran:

Core beliefs align with Kierkegaard's 'subjective truths', for which he considered no evidence is needed. In modern terms this represents the head-heart dilemma, in which a person can rationalise a belief (say, about themselves) to be false but feel it to be true, and regard it as true no matter how much evidence is marched out in front of them. I think Jung was asked about his belief in God in a later interview; he answered, "I don't believe God exists, I know God exists." God being the best word somebody can offer for something which is inexplicable. The chapter on divinity in the book 'The Matter with Things' is quite enlightening if you have the time and motivation.

 planetmarshall 08 Jun 2022
In reply to schrodingers_dog:

> The sterility comes from a materialist world view in which the brain in a biological computer...

Positing that the brain is a biological computer is certainly materialist, but the reverse is not necessarily true. While it is a fashionable subject in sci-fi to treat the brain as some sort of universal programmable hardware that can be loaded and unloaded with different consciousnesses as if they were copies of Microsoft Office, in reality the physical structure of the brain is important, as demonstrated by any number of traumatic brain injury studies in which the personality is significantly altered.

Many neuroscientists appear to be divided on how useful the computation metaphor is in studying the workings of the brain - at least computation as we currently understand it. I am not a neuroscientist, but it seems to me that arguments in favour of modelling the brain as a computational process require some intellectual gymnastics in defining what that term actually means.

However, whatever the underlying mechanism, it's either materialist, or it's magic.

 john arran 08 Jun 2022
In reply to schrodingers_dog:

Yes, I do acknowledge that societal conditioning can be a very powerful constraint to rational belief, and there's no reason to suspect that philosophers have immunity against that.

cb294 09 Jun 2022
In reply to schrodingers_dog:

Jung is the charlatans' charlatan. It is safe to assume that any argument referencing Jung is a priori wrong. Psychology, unlike neurosciences, has no scientific or rational basis.

However, that Kierkegaard argument is really taking the piss, it takes the rejection of science and anything that helped us overcome the dark ages of religious dogma to its theoretical maximum!

Trumpian alternative facts in more fancy words.....

CB

 Marek 09 Jun 2022
In reply to cb294:

I was desperately trying not to mention Trump in this thread, but I guess it was inevitable that someone would. A modern version of Godwin's Law?

In reply to cb294:

Brilliant! I love this discussion  

Should psychology be binned along with philosophy? 

Re the Trump thing - given the current state of politics across the Atlantic being likened to the little orange ass-clown is a positive compared to the apparent degeneracy and delusional state of things. 

 Marek 09 Jun 2022
In reply to planetmarshall:

> Positing that the brain is a biological computer is certainly materialist, but the reverse is not necessarily true. While it is a fashionable subject in sci-fi to treat the brain as some sort of universal programmable hardware that can be loaded and unloaded with different consciousnesses as if they were copies of Microsoft Office, in reality the physical structure of the brain is important, as demonstrated by any number of traumatic brain injury studies in which the personality is significantly altered.

Hmm, I'm not sure that's a defensible conclusion: if you remove part of the memory of a running PC (assumed to be 'universal programmable hardware'), its behaviour will also change. I happen to agree with you that structure *probably* is important and unique to an individual, but I'm not sure there's much hard evidence to support that contention (or for that matter its negation).

cb294 09 Jun 2022
In reply to schrodingers_dog:

Philosophy should not be binned entirely, however the futile attempts to explain consciousness while ignoring biology (primarily neurobiology and evolution) should be laughed at.

I have no issue with philosophers discussing e.g. the basis of ethics, which can clearly not be derived from biology or even physics. Quite the opposite, anyone trying to derive ethics from evolution will end up with an unpleasant and - even more importantly in this context - conceptually unjustified mess.

Same with psychology. There obviously is good empirical and experimental work in many areas, both theoretical and clinical, but at the core all too often you can still find a reverence for the entirely baseless concepts of charlatans like Freud or Jung. Until psychology stops referencing these names it simply does not deserve to be considered a science.

CB

 Darkinbad 09 Jun 2022
In reply to schrodingers_dog:

I think I have a frequentist brain. Whenever I have a decision to make, a million possibilities spin through my head. Unfortunately I don't have time to count them, so I never get anything done.

 planetmarshall 09 Jun 2022
In reply to Marek:

> Hmm, I'm not sure that's a defensible conclusion: if you remove part of the memory of a running PC (assumed to be 'universal programmable hardware'), its behaviour will also change. I happen to agree with you that structure *probably* is important and unique to an individual, but I'm not sure there's much hard evidence to support that contention (or for that matter its negation).

Because it leads to fairly thinly disguised Cartesianism, where the mind is some sort of disembodied software floating around in the brain's hardware. Everything we know about modern neuroscience suggests that that hypothesis is at best hopelessly naive, at worst a magical fantasy.

 Marek 09 Jun 2022
In reply to planetmarshall:

Not sure where Cartesianism comes into it (had to look that up), but just because I said that there was insufficient evidence to conclude A doesn't mean I think that not-A is right either. I pointed out that your example of brain damage leading to behavioural changes doesn't particularly support the mind-is-structure hypothesis since hardware damage in a PC (an example of a UTM) can be said to have the same effect. Is there any better evidence you can point to (I'm not an expert - or even very knowledgeable - on neuroscience)? So far the conclusion seems to be based on "We have a gut feeling that..." (which I tend to agree with).

cb294 09 Jun 2022
In reply to Marek:

In a brain, but largely not in a computer, the connectome is shaped by activity. Neuronal activity ("software") alters physical connections between neurons ("hardware"), reinforcing or pruning synapses or entire neuronal processes depending on the flow of information across these structures, and will continuously integrate newborn neurons and eliminate existing ones depending on activity.

This is qualitatively different from simply changing the flow of information across certain connections in silico.

Probably not enough to argue that mind is structure, but evidence IMO that distinguishing "software" from "hardware" is much less clear cut in bio brains compared with computers, and that using these technical terms as metaphors for brain function will only get you so far.
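cb294's point - that in a brain the activity itself rewrites the wiring - can be caricatured in a few lines of code. This is a toy sketch only, with invented constants and a two-neuron "network"; it is not a model of real synaptic plasticity:

```python
# Toy activity-dependent rewiring: co-activity strengthens a synapse
# (Hebbian), disuse weakens it and eventually prunes it entirely.
# All constants are invented for illustration.

# weights[i][j] = synapse strength from neuron i to neuron j (0 = no synapse)
weights = [[0.5, 0.5], [0.5, 0.5]]
LEARN, DECAY, PRUNE = 0.1, 0.02, 0.05

def step(active):
    """One round of activity over the whole (tiny) network."""
    for i in range(2):
        for j in range(2):
            if weights[i][j] == 0:
                continue  # already pruned: a structural change, not just state
            if i in active and j in active and i != j:
                weights[i][j] += LEARN   # Hebbian: fire together, wire together
            else:
                weights[i][j] -= DECAY   # disuse weakens the synapse
                if weights[i][j] < PRUNE:
                    weights[i][j] = 0.0  # pruned: the wiring itself has changed

for _ in range(30):
    step({0, 1})  # neurons 0 and 1 are repeatedly co-active

print(weights[0][1] > 3.0, weights[0][0] == 0.0)  # → True True
```

The point of the toy is the last branch: unlike overwriting a value in RAM, disuse eventually deletes the connection itself, so the "software" has edited the "hardware".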

 Marek 09 Jun 2022
In reply to cb294:

That makes sense. Thanks.

Ultimately, the hardware/software distinction in computers is really just a developer 'convenience' rather than an optimisation, so you wouldn't expect it to be the norm elsewhere. There have of course been attempts to build an adaptive function-is-structure machine, but not with any success (as far as I know).

Post edited at 14:07
In reply to cb294:

Whoah.... easy Tiger. The baseless concepts of Freud and Jung provide the basis for almost any contemporary talking therapy. Beck was a disillusioned psychoanalyst who did a job on Freud's terminology and turned it into CBT newspeak. Freud was a doctor and neurologist who found new ways to make sense of observations that were not measurable by the technology available, and to some degree still aren't. Loose similarities between terms, e.g.:

Core Beliefs = Unconscious material 

Conditional beliefs / rules for living (inner critic etc)  = Super ego 

Self Esteem = The ego ideal, or in Bayesian Brain terms the amount of error signal generated vs the capacity to establish the desired state or adjust to a new state of being 

Concepts such as authenticity, individuation, self-actualisation and life transitions are Jungian in their integration into modern therapy, and person-centred care is a Jungian ideal central to ethics in modern medicine.

Following your reasoning we'd truly be heading back into the dark ages, unless of course you're trolling, in which case that's a really good one, you're havin a larf!

4
In reply to planetmarshall:

I think this is a wonderful post, thanks!

Material or Magic? I'm gonna use that in my writing if it's ok with you?

Rock Climbing - Material or Magic?

I asked my dad and six year old daughter to look into the garden this morning and said, what do you see, material or magic? My dad, a physicist, chemical engineer and mathematician by background, laughed and said: Magic. My six year old was reluctant to reply but then said, well, both daddy. She indicated things with a metabolism - trees, birds etc - were Magic and the sky was material. 

To my mind the modern world is aligned with scientism and materialism and there is no magic; why not begin to combine organic and inorganic materials, or adjust the materials of being, for profit? We just need more data and we'll all become better climbers - except it doesn't work, does it. 

3
 wintertree 09 Jun 2022
In reply to cb294:

> Same with psychology. There obviously is good empirical and experimental work in many areas, both theoretical and clinical, but at the core all too often you can still find a reverence for the entirely baseless concepts of charlatans like Freud or Jung. 

I always enjoy - and silently endorse - your rants against psychology.  To this day, people are getting faculty posts propped up on publications built on what I consider highly methodologically questionable research about who wants to shag someone looking like their parent etc.  That's bad enough, but then the seriousness with which some people like to regurgitate findings and theories from psychology as part of some faux-intelligentsia grinds my gears even more.

But I would like to note that there are aspects of psychology that are very free of dogma and that are driven by methodologies from the hard sciences, such as vision psychology.  Given the quality of the highly quantitative science going on in that area and how much further the study of computer models vs instrumented reality can be pushed in vision psychology vs other areas of the brain, this area is very important to furthering our understanding of our wetware.

cb294 09 Jun 2022
In reply to schrodingers_dog:

Not a single concept proposed by Freud has survived scientific scrutiny. Yes talking therapy may help by chance*, but certainly not because the underlying assumptions and models of how our brain works have ANY merit at all.

Essentially Freud was a rich and therefore functional junkie bullshitting everybody with his eloquent but baseless theories. Just take his ideas about sexual development and try to square them with what we know today about hormonal control of sexual development.

Similarly, as for Jung, archetypes, seriously?

Anyway, I am not surprised that those explanations still have their adherents today, after all, religion is also still going strong despite having no evidential value.

Even that is all fine as long as you can show statistically that what you are doing helps your patients; just don't claim that this is because you have an insight into how the brain works.

And no, I am not having a laugh, even though I have to admit that I phrase my criticism as pointedly as possible, likely often overstating my case in the process.

I honestly think that psychology as a discipline is held back by a large fraction of its practitioners preferring woo over hard neuroscience, to the detriment of patients.

As an example, treatments for PTSD that are based on molecular and cell biological insights in how memories are formed and connected to emotions, and how we can delete such memories or edit their links to emotions in mice in the lab, are still pretty much experimental. I contend that they could be much further advanced already if the field had, over the last decades, embraced progress in various hard biosciences** to a similar degree as e.g. oncology has during the same period.

Unfortunately, even in regular medicine we are struggling to get evidence-based treatment universally established as the gold standard, when by now we should be moving on more widely to medicine based on functional understanding.

CB

* not in my personal experience but I accept that it can work for others

**neurobiology, developmental biology, genetics, pharmacology and pharmacogenetics, biophysics/imaging,....

 Marek 09 Jun 2022
In reply to schrodingers_dog:

If you want magic in this world, just read up about quantum mechanics. We know Spell X produces Result Y and so on in great detail, but any attempts to explain *how* that works have floundered. As Bohr used to say: "Shut up and calculate!".

cb294 09 Jun 2022
In reply to wintertree:

Thanks for the endorsement! I find the human mind/brain a fascinating subject to study, and one that is well worth tackling with all tools we can borrow from the "hard" sciences.

As I wrote in my reply to schrodingers doggy, I have my troll hat only half on, because I feel that the "good" bits of the field are actively held back by traditionalists sticking with evidence-free, just-so models merely because they come from the established greats of 100 years ago. Looks awfully cult-like to me!

cb294 09 Jun 2022
In reply to Marek:

Oh don't encourage him! The widespread abuse of "quantum" in all kinds of new age theories can make me vomit all over the internet (not that SD has posted such stuff, to my knowledge).

However, the proof is, as usual, in the pudding: QM gave us gadgets that must indeed look like magic compared to tech from only 50 or 100 years ago.

In reply to cb294:

The archetypes of the Mother, the Warrior the Sage and the Hermit, even as metaphors for a way of being in the world are redundant fantasies of a bygone era peddled by charlatans and quacks with faux insight into the workings of human nature? 

These methods and concepts can be slowly got rid of and replaced by hard material neurobiological protocols that get down to the nitty gritty of shifting traumatic memories or character flaws with e.g. a scalpel, injectable / biological agent? 

Feck me it's bleaker than I thought, we truly are in for a horror show if that's where we're at. Frontal lobotomy any one..... partial or full... take your pick. 

The limited hang out makes a bit more sense from this ideology 

5
In reply to Marek:

Thanks! Magic is what I like to hear about.....

cb294 09 Jun 2022
In reply to schrodingers_dog:

Those archetypes have no proven neural architectural or activity correlate, they are not true building blocks of our mind.

The mouse experiments you dismiss so casually actually give me great hope. We (i.e. proper neurobiologists, not me...) can now implant or delete specific memories under controlled circumstances (usually fear conditioning with some spatial marker, e.g.  mild electric shocks in a black room vs. food in a white room) and identify the neurons and within them the genes involved in memory formation. Based on what we have learned from such proper mechanistic understanding of memory formation we have now started to devise protocols for deleting links to negative reinforcement in PTSD patients using specific pharmacologic treatments in combination with specific reactivation of the troublesome memories. As far as I follow the reports on these topics (i.e, whenever they pop up in the comments section of journals such as Science) these approaches show excellent promise, e.g. in a cohort of Canadian soldiers suffering from PTSD. Also, genetics, which is closer to my expertise, has provided lots of data on the origins of diseases such as schizophrenia, again with the promise of tailored medication that will be a huge improvement in comparison to broad brush drugs currently in use.

My point is that we could be much further if psychology in general would take to heart what we know about the actual way our mind works, and would focus less on vague models such as archetypes, which appear to me to be not much more than linguistic tricks to give the illusion that the psychologist has an idea of what is going wrong in their patients' brains.

CB

edit to add: I have no idea why you consider my scientific view of the world bleak in any way. To me, understanding nature, including our brains, only adds to the wonder I feel.

To take an example away from the topic of the human mind, knowing that our sun is a hydrogen bomb consuming half a billion tons of hydrogen every second, and will do so for billions of years is much more AWESOME in the literal sense of the word than any metaphor or myth about some sun god riding their chariot across the heavens could ever be.

Post edited at 17:34
1
 Marek 09 Jun 2022
In reply to cb294:

> Oh don't encourage him! The widespread abuse of "quantum" in all kinds of new age theories can make me vomit all over the internet...

Totally agree. I just thought I'd light the blue touch-paper and see what happens. 

In reply to cb294:

Hey, I'm not dismissing it... work done by Bessel van der Kolk and his book 'The Body Keeps the Score' has my absolute attention and respect. Last year I bought myself a Mendi neurofeedback device based upon reading about his work and the treatment of trauma. It was pretty good, but beware: you can achieve false positive results by bearing down.... Somatic therapy and polyvagal theory are also very informative, and I have a great interest in dissociative states - DID, complex PTSD etc. - as well as some of the more uncanny experiences such as NDEs. Peter Levine's book 'Waking the Tiger' provides a practical set of somatic exercises to treat trauma that can be done at home and are great for reducing the disembodied feelings that so often come with PTSD, for example. My criticism, if any, is of the materialistic basis - e.g. the mind as a product of the brain (a view shared by Freud but not Jung) - which to my mind is not fulfilling. For example I'd like to think Voodoo Death contains magic beyond the nocebo effect, and that 'Logos can be appropriately balanced with Mythos'. 

Post edited at 17:39
2

Tell you what - I like the sound of MDMA assisted therapy!

 wercat 09 Jun 2022
In reply to schrodingers_dog:

look up memristors - there are electronic devices whose state alters according to use and which therefore, by remembering, alter future behaviour!

That is magic, if the research bears practical results eventually.  Imagine an electronic neural net whose interconnections are reinforced by current flowing each time they are triggered
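As an illustration only - an idealised toy, not any real device physics or a claim about actual memristor technology - a "connection reinforced by the current that flows through it" might be sketched like this:

```python
# Toy memristor-like element: its conductance depends on the history of
# charge that has flowed through it. Class name and constants are invented.

class MemristiveElement:
    def __init__(self, g=0.1, g_min=0.05, g_max=1.0, k=0.01):
        self.g, self.g_min, self.g_max, self.k = g, g_min, g_max, k

    def apply(self, voltage, dt=1.0):
        """Pass a voltage pulse; return the current, and let the charge
        that flowed nudge the conductance (the 'memory' in memristor)."""
        current = self.g * voltage
        self.g += self.k * current * dt                 # history-dependent state
        self.g = min(self.g_max, max(self.g_min, self.g))
        return current

m = MemristiveElement()
for _ in range(100):
    m.apply(1.0)       # repeated use reinforces the connection
print(m.g > 0.25)      # → True: conductance has grown with use
```

Replace `MemristiveElement` with the synapse between two artificial neurons and you have the reinforcement-by-use idea in one state variable.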

 Marek 09 Jun 2022
In reply to wercat:

> look up memristors - there are electronics which alter according to behaviour and therefore by remembering alter future behaviour!

> That is magic...

Hardly. The original memristor was a hypothetical device (based on ideas of symmetry) and it's never been shown to be physically meaningful. What people call memristors today (quite different) are just two-terminal devices which have measurable state based on previous history*. You can build a 'memristor' out of ordinary components. The reason there's interest in creating a fundamental memristor device is just that it would be smaller and consume less power (hopefully) than a synthetic one or indeed current memory technologies. Nothing magical about that at all.

* A bit like flash memory in your pen drive, SSD or smartphone.

Post edited at 19:00
In reply to cb294:

In over a decade working in mental health services I can honestly say I’ve never heard anyone talk about archetypes, Freud or Jung other than as historical curios.

cb294 09 Jun 2022
In reply to Stuart Williams:

Good!

In reply to cb294:

I’m rather perplexed by your experience of them being cited so frequently to be honest. Where is this?

 wercat 10 Jun 2022
In reply to Marek:

I wasn't saying that I personally thought the technology was magic, but that the effect of using it practically on synthetic neural networks might appear magic, as it could be transformative. I do realise that current progress is narrow, limited in scope and not yet transformative.

It could be transformative in taking us away from the binary computational limit on AI and providing computation far more akin to brain function, though the step I hope we never make is to make the jump to making circuitry capable of being sentient of an emotional state.  That would be a far greater type change from simply modelling "cold" neural functions and I think entirely unethical unless we were to afford such a creation respect as a form of life.

 Marek 10 Jun 2022
In reply to wercat:

Ahh, OK. But the 'magic' you're referring to is more to do with the NN than the memristor (which is really just a storage element). Yes, NNs can appear somewhat magical in the way they can 'learn' and also in the way we don't really understand what it is they've learned (we just see the results of their learning - their behaviour).

I'm less bothered about the whole 'sentient' and 'emotional' thing until such time as we have a usable and unambiguous definition of those terms. And that seems a long way off.

I *do* have concerns with the *use* of NNs specifically in that we have no practical methodologies to understand the limits of and constraints on their behaviour. We can get an idea of what they will do most of the time, but not in what was classically known as 'corner cases'. It has been shown repeatedly that NNs can show behaviour way outside their intended envelope (often quite the opposite of what was intended given certain inputs). Basically, we do not really understand *how* they achieve a specific result, just that if we train them enough then they seem to do it right most of the time. That's OK for some uses, but worrying in safety-critical cases.

Disclosure: I've been out of this game for a few years, but I do try to keep up.

 wintertree 10 Jun 2022
In reply to Marek:

> Yes, NN can appear somewhat magical in the way they can 'learn'  and also in the way we don't really understand what it is they've learned

When all is said and done, even the largest recurrent MLPs are mathematically equivalent to a single layer of polynomials with a lot of inputs and some outputs.  Which says if you throw enough equations at a problem, you'll find some that stick.  The structuring as an MLP and the use of recurrency aren't fundamental new capacities so much as a way of bounding the training problem to be solvable.

>  That's OK for some uses, but worrying in safety-critical cases.

Lots of work going on over the use of ANNs and other machine learning tools in analysing medical datasets (particularly radiology imagery etc).   Some of that work is about trying to regulate the development to the point that one can have scientifically measured confidence in the code.  Problem is, an ANN as we know it doesn't have a human's intuition that something is different with a particular set of inputs.
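The "throw enough equations at a problem and some will stick" intuition can be illustrated with the MLP's humbler cousin, an ordinary least-squares fit - here a polynomial fitted to a function that is *not* a polynomial. A toy sketch in plain Python; the grid, degree and error bound are all invented for illustration:

```python
# Least-squares polynomial fit via the normal equations (A^T A) c = A^T y,
# solved with Gaussian elimination. Toy-scale only: for real work the
# normal equations are badly conditioned at higher degrees.

def polyfit(xs, ys, degree):
    n = degree + 1
    # A[i][j] = xs[i]**j, so (A^T A)[i][j] = sum of x**(i+j)
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs

xs = [i / 50 - 1 for i in range(101)]      # grid on [-1, 1]
ys = [abs(x) for x in xs]                  # target: |x|, not a polynomial
c = polyfit(xs, ys, 6)
pred = lambda x: sum(ci * x ** i for i, ci in enumerate(c))
err = max(abs(pred(x) - y) for x, y in zip(xs, ys))
print(err < 0.15)  # → True: enough free parameters and some of them "stick"
```

As with an MLP, the fitted coefficients tell you almost nothing about *why* the curve has the shape it does; they are just whatever numbers minimised the error.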

cb294 10 Jun 2022
In reply to Marek:

This is a worry for me as a biologist, too. Instinctively I much prefer algorithm based mechanisms for, say, analyzing image data or predicting protein structure, as I would at least in principle have a means of following how the system did arrive at some conclusion.

Unfortunately, the machine learning, NN systems massively outperform the algorithmic ones, but the way they arrive at their conclusions might as well be magic. At face value we could even follow which node connects to which other one with which weight, but WHY these parameters are as they are is largely plucked from the air.

This is not too much of an issue when I want to predict the structure of some protein I am working on: If the prediction is wrong my wet experiments based on it will fail soon enough (or I may even be able to say that the prediction must be wrong before I even start because it does not fit with things we already know).

However, I would not like someones car driving on the same principles and no one even being able to understand why it decided to run me over.

Even worse, imagine banks running such system, and "computer says no" being the best answer they could honestly give to you when you ask why they rejected your mortgage application.

It does not take much imagination to create a complete dystopia, imagine an AI making triage decisions in case of mass medical emergencies, or controlling police deployments, and in the end possibly running our judicial system.

Unfortunately, these examples are in part already reality, and no one seems to even bother.

CB

 wercat 10 Jun 2022
In reply to cb294:

> This is a worry for me as a biologist, too. Instinctively I much prefer algorithm based mechanisms for, say, analyzing image data or predicting protein structure, as I would at least in principle have a means of following how the system did arrive at some conclusion.

 I'm afraid you won't like the work my son is doing relating to proteins then ...

cb294 10 Jun 2022
In reply to wercat:

Oh I LOVE the protein structure predictions, but this is because there it is no problem for me to accept that they are a) magic and b) probably correct.

The worst that can happen is a bunch of experiments failing because one of many predictions was wrong, and I had the bad luck to have picked that protein.

Still, this is such an improvement over having to guess the structure and base your experiment designs on your gut feeling, which lead to failure much more often than not.

It would just be nice to be able to recapitulate how the program arrived at the conclusion that amino acid A should be adjacent to B but not C, e.g. based on total energy arguments.

In practice no one would do this anyway, but it is intellectually appealing to be able, at least in principle, to have an unbroken chain of reasoning from your initial experiments through structure prediction and then to your follow-up experiments based upon these predictions.

Stick a layer of machine learning AIs in there, and we are almost back to taking some things on faith, wine into blood and all that, with the difference that this time it actually works:

I stick my data into a machine, a miracle happens, and I can take the output and continue happily ever after!

Highly useful, but at the same time naggingly unsatisfactory.

In contrast, predictive policing, anyone?

edit to add: In this case, "magic but probably correct" would simply never be good enough, but seems to be where we are heading.

Post edited at 11:34
In reply to cb294:

In a way we already have predictive policing, or at least a predictable self-policing system in the digital world. Manufactured consent and de facto limited hangouts leading to an ever-narrowing margin of acceptability and associated cancel culture. I'm currently reading William I Robinson's Global Police State, which provides a fascinating insight into how this is developing. Welcome to the digital gulag with its self-appointed guards! 

4
In reply to cb294:

> imagine an AI making triage decisions in case of mass medical emergencies

Instead we have human intelligence making those decisions... Pretty much as fallible.

cb294 10 Jun 2022
In reply to captain paranoia:

Yes, but if the human screws up you can ask him to justify his reasoning and hold them accountable. This would also be possible with an algorithm, but not with a neural network.

In the latter case you would have to make the decision that you will trust the system unconditionally without ever being able to follow or cross check its decision making process.

cb294 10 Jun 2022
In reply to schrodingers_dog:

No need to look at digital only, we already have full-on dystopian predictive policing in some areas: data mining by companies like PredPol (they have since renamed) and Palantir essentially replaced policy decisions by the LAPD in recent years.

In reply to cb294:

> ask him to justify his reasoning and hold them accountable

Like AI, they may not be able to truly 'justify their reasoning'.

I'm not criticising either approach; I 'liked' your point about not being able to understand a NN's decision. I was just pointing out that human reasoning really isn't that much more visible, for all our after-the-fact justification.

In reply to cb294:

I didn’t realise! That’s insane.... I wonder what the future holds. 
 

Nudging back on the topic of predictive reasoning in climbing, I’ve come to the conclusion that if a person can tap into some otherly numinous state of being that might help, not in terms of performance as in numbers but in terms of quality of experience.  

 Marek 10 Jun 2022
In reply to captain paranoia:

> > imagine an AI making triage decisions in case of mass medical emergencies

> Instead we have human intelligence making those decisions... Pretty much as fallible.

Fallible certainly, but it's hard to quantify - as in "pretty much" - since the characteristics of the two systems (human and AI/NN/ML) are so different. The two obvious differentiators at the moment seem to be (a) ML (pick your acronym) may be wrong but it's never malicious, and (b) humans can be held to account for their mistakes - which provides an incentive not to make them. But with ML-based systems, who is accountable (to the same level) for their mistakes? It's nice to think that whoever deployed the system would be accountable and feel the same incentive, but recent history suggests not. Was Prof. Chandra ever held accountable for what his creation did in 2001?

 jkarran 10 Jun 2022
In reply to schrodingers_dog:

> So my question would applying some sort of simplified Bayesian analysis help make the decision about setting off and subsequent decisions on the way increasing a sense of confidence and control? Or does this seem like a misinterpretation. 

Seems like spectacularly overthinking not biting off more than you can chew while tired and in a serious position. We used to call it judgement or experience.

jk
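For what it's worth, the "simplified Bayesian analysis" the original question asks about can be sketched as a toy belief update. Every probability below is invented for illustration - which is rather the practical problem with the approach, and arguably what "judgement" estimates for free:

```python
# Toy Bayesian go/no-go update for the Cir Mhor scenario.
# All numbers are made up; H = "I climb this route safely".

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

p = 0.90  # base rate: E3 is "within my usual capability"
# Each factor as (P(factor | success), P(factor | failure)):
evidence = [
    (0.5, 0.8),   # third day on, tired
    (0.6, 0.9),   # unfamiliar climbing style
    (0.7, 0.9),   # anxious after the treacherous gully scramble
]
for e_h, e_not_h in evidence:
    p = posterior(p, e_h, e_not_h)

print(round(p, 2))  # → 0.74
```

Each unfavourable factor drags the 90% prior down; whether 0.74 is enough to set off is still a judgement call, so the sums mostly make explicit what the gut was already doing.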

In reply to Marek:

> ML (pick your acronym) may be wrong but it's never malicious

The technique may not be. The teaching may be, either deliberately, or unintentionally.

'Malicious' implies intent, and a conscious mind. I don't think we're at the stage of ANN being conscious yet... So no non-conscious system can be said to be malicious in itself. Those who develop or deploy the system? Yeah, they can be malicious.

> Who is accountable (to the same level) for their mistakes?

Self-driving cars are probably going to be the first big test of that problem, though use of ANN/ML for diagnostic purposes (which fail) might already be happening. Again, if those automatic diagnostics can be shown to have an accuracy better than humans, we're 'winning', surely? Just that we might have trouble making work for 'Lawyers4U' in medical malpractice cases, if it's hard to identify who is 'at fault'. I've never been a big fan of suing people trying to do their best, but failing; no fault compensation seems a better idea.

In reply to jkarran:

So one of the things that sets you apart from many other climbers (99.9999%) is a high degree of skill in risk analysis based on experience? 

I’d say there was something more to it, call it innate ability. 

 Marek 10 Jun 2022
In reply to captain paranoia:

> The technique may not be. The teaching may be, either deliberately, or unintentionally.

Indeed.

> Self-driving cars are probably going to be the first big test of that problem... ... I've never been a big fan of suing people trying to do their best, but failing...

That depends on the context: 'trying to do their best, but failing' is not good enough in safety-critical systems (like autonomous transport). If you're not sure you can get it right, then you shouldn't be doing it*. Not even if you 'do your best'.

* OK, it depends on what the alternatives are and what risks they carry. But in the case of self-driving cars, they're not exactly something the world really needs, so the responsibility lies with the developers/deployers to ensure that they are significantly safer than human-driven cars. If they can't do that then the designs should stay in the labs.

 wintertree 10 Jun 2022
In reply to captain paranoia:

> Self-driving cars are probably going to be the first big test of that problem

Medical image analysis would be my guess - outside of military stuff.

In reply to Marek:

> 'trying to do their best, but failing' is not good enough in a safety critical systems (like autonomous transport)

Autonomous systems aren't people; my point was people doing their best, in the context of medical diagnosis.

> If you're not sure you can get it right, then you shouldn't be doing it*

How 'sure' do you want to be?

("that, detective, is the right question"...)

You can never be 100% sure; things are too complex. That's why we have concepts like ALARP for safety systems, especially in transport.

Humans are fallible, but everyone seems far more relaxed about that, for some reason; it seems we're happy to forgive humans when they fail, but automatic systems are never allowed to fail. It's not logical.

Humans are even forgiven when they get pissed, drive into cyclists, then say they hate cyclists.

https://www.cyclingweekly.com/news/drink-driver-who-crashed-into-a-cyclist-...

Post edited at 18:57

With all the emerging AI technologies what are we going to do when the redundant peasants and useless eaters rise up from the grey zones of the mega city slums? 

2
 wbo2 10 Jun 2022
In reply to wintertree:

> Medical image analysis would be my guess - outside of military stuff.

That's surely already done.  We're using trained AI for interpretation of seismic data, same thing really.  I prefer the results as they're more data-based than human interpretations, which are usually simplifications to match a user's concept of the truth

Marek: Define significant. And compared to whom - young, old, all drivers?

>'With all the emerging AI technologies what are we going to do when the redundant peasants and useless eaters rise up from the grey zones of the mega city slums? '

That is probably the best question you've asked in the entire thread, 

Post edited at 20:06
 Marek 10 Jun 2022
In reply to captain paranoia:

> Autonomous systems aren't people; my point was people doing their best, in the context of medical diagnosis.

I was referring to the designers/deployers of the AI system, not to the system itself. They are the ones with responsibility for any 'unintended consequences'.

> How 'sure' do you want to be?

That's a human judgement, i.e., today the roads are reasonably safe (for argument's sake) because most people driving are aware of their responsibility and liability when mistakes happen, and that balances the tendency for things to go wrong. So when you have AI-driven cars, where does that responsibility, liability and the 'if I'm not careful I'll go to jail' control lie? Certainly not with the AI. Is it a 'corporate liability'? That seems like just dodging the issue, since corporations are no more sentient than AI. It has to lie with the designers and deployers. But does it?

> Humans are fallible, but everyone seems far more relaxed about that, for some reason; it seems we're happy to forgive humans when they fail, but automatic systems are never allowed to fail. It's not logical.

That may be just an extreme case of them-and-us.

Look at it this way. If someone knowingly throws rocks off a cliff with climbers on it and 'does his best to miss them', but makes a mistake and kills a climber, is the thrower responsible and liable?

So what's the difference between a rock thrown (with care) down the cliff and an AI-controlled car sent out onto the roads?

Post edited at 20:11
 wintertree 10 Jun 2022
In reply to wbo2:

> That's surely already done.  We're using trained AI for interpretation of seismic data, same thing really. 

That sounds like a person who has not written some code to try and extract quantitative data from medical imagery...

There's a lot of work - institutional and commercial - going on into AI/ML for analysing medical images and also into creating accountability for the training of such systems, but AFAIK it hasn't yet got to the point where the courts are apportioning liability for mistakes (which - reading up the thread - is what my comment was in relation to).

In reply to wbo2:

> That's surely already done.  We're using trained AI for interpretation of seismic data, same thing really.  I prefer the results as they're more data based than human interpretations which are usually simplifications to match a users concept of the truth

> Marek: Define significant.  And compared to who - young, old , all drivers?

> >'With all the emerging AI technologies what are we going to do when the redundant peasants and useless eaters rise up from the grey zones of the mega city slums? '

> That is probably the best question you've asked in the entire thread, 

Can I take it back? 

1
 timparkin 10 Jun 2022
In reply to cb294:

> Yes, but if the human screws up you can ask him to justify his reasoning and hold them accountable. This would also be possible with an algorithm, but not with a neural network.

But can't we just make the neural network (AI) that comes up with post hoc rationalisations? They'd probably do a better job than us

 wbo2 10 Jun 2022
In reply to wintertree: So you don't analyse the images for similarity, image coherence and so on and so forth (attributes), look for similarity clusters, boundaries etc?  

To schrodingers dog: No, it's interesting as it raises questions of personal liberty, inequality and how information control supports totalitarianism, as opposed to how you're attempting to force human behaviour into analysis via crude metaphors with statistical methods

cb294 10 Jun 2022
In reply to timparkin:

No, post hoc does not work, at least for what I am talking about. Of course a NN AI could give you an account of its training, and the adjustments it made to the connections and weights of each node based on input from its training set. However, none of these has any identifiable relation to something tangible; they only reference the network itself.

CB

 Morty 10 Jun 2022
In reply to schrodingers_dog:

You are smashing this. 10/10.

If you even hint at a mention of Indian Face I want my £5, DJViper.

Keep it up, mate - I'm loving your work.

I feel that you might be the hero that UKC doesn't realise it needs. 

In reply to wbo2:

My analysis is we're on target for Neal Stephenson's Snow Crash by 2029 and Iain M. Banks' Culture - forking never 

1
 wintertree 10 Jun 2022
In reply to wbo2:

Sure - you’re often looking to segment features in 2D or 3D based on boundaries in image structure etc, or to recognise features of a given characteristic etc.

But it’s far from trivial on most datasets with all sorts of problems like poor signal to noise, “shadow rays” in some modalities, motion from the patient and from the separate rhythms of the heart/vasculature and the lungs.

AI people are working on it, but solutions are not simple.  If they were, the training period for a graduate medical student to become a radiologist wouldn’t be 5-6 years. 

In reply to wbo2:

> That is probably the best question you've asked in the entire thread, 

Not really; it's the elephant in the room. As opposed to all those monkeys typing away furiously.

 jkarran 10 Jun 2022
In reply to schrodingers_dog:

> So one of the things that sets you apart from many other climbers (99.9999%) is a high degree of skill in risk analysis based on experience?

Nothing sets me apart from other climbers except no longer being one, careful mediocrity was my thing though.

> I’d say there was something more to it, call it innate ability. 

Absolutely, we're all hard wired to do this: remove youth, add experience and you do it better. Yes, it's significantly subconscious, but a conscious, informed layer to your planning adds a lot. I fly now, and it's part of our checks to think through the threats and possible errors just before we start rolling. It sometimes picks up and highlights stuff 'the willies' doesn't. 

Jk

In reply to jkarran:

I can understand why you might say careful mediocrity, if that translates to a calculated and restrained decision making process based upon all available data with regard to your desired ‘goal’. Careful mediocrity as opposed to loose cannon. Reminds me of Henry Barber’s approach described in his biography. 
 

By flying do you mean sky diving etc? I looked into it but it seemed pretty time intensive and unpredictable in the U.K. so I thought maybe when I’m older. I did a solo static line jump about 20 years ago and remember thinking it’s all about the preparation and associated conditioning. Fear really didn’t come into it. 

In reply to Morty:

Erm.... thanks? 

 jkarran 11 Jun 2022
In reply to schrodingers_dog:

Gliders not skydiving. Still pretty time consuming but a good mental challenge for a knackered body.

Jk

In reply to jkarran:

Cheers! In keeping with the thread, the proposed biotech utopia could help put an end to our knackered bodies, maybe through complete disembodiment. My personal opinion is that we risk a manufactured hell. As R.D. Laing, who is a hero of mine, said:

‘Modern scientific laboratory is the torture chamber of nature’ 

3
 timparkin 11 Jun 2022
In reply to cb294:

> No, post hoc does not work, at least for what I am talking about. Of course a NN AI could give you an account of its training, and the adjustments it made to the connections and weight of each node based on input from its training set. However, none of these has any identifiable relation to anything tangible; it only references the network itself.

I think you miss my point - of course it doesn't work but it's probably just as convincing as our own significantly flawed post hoc rationalisations of 'intuition' etc.

cb294 11 Jun 2022
In reply to schrodingers_dog:

What the fukc does a psychiatrist know about science?

CB

1
In reply to cb294:

Is that science or The Science? 

3
In reply to cb294:

Especially one who is keen on existentialism.

In reply to captain paranoia:

What’s wrong with existentialism? I’d have thought a world devoid of meaning would be the philosophy of choice for the revenge of the nerds crowd? Unless scientism is the new nihilism of choice  

2
 wintertree 13 Jun 2022
In reply to timparkin:

> I think you miss my point - of course it doesn't work but it's probably just as convincing as our own significantly flawed post hoc rationalisations of 'intuition' etc.

Humans can examine and communicate their decision making process and - if so minded - can strive to do so with honesty and a rejection of bias.

No AI yet made can introspect.

Sure, humans can do a bad job of introspection, but there are many safety/life critical roles that select and train for people who can introspect.  No AI made to date can do that. 

1
 Marek 13 Jun 2022
In reply to wintertree:

> No AI yet made can introspect.

I think we need to be a bit more precise here. For a start, what do you mean by AI (it covers a lot of sins, historically)? If you mean Neural Networks, then you have a point, but that's only a subset of AI. With other AI methodologies, introspection - as in 'explain the decision making process' - is quite feasible.

Part of the general problem, of course, is that you need to provide a clear, unambiguous and non-circular definition of introspection...

In reply to schrodingers_dog:

> What’s wrong with existentialism?

In the context of what a psychiatrist knows about science?

Existentialism is subjective.

Science is objective.

I'll leave you to try to join the dots. 

1
In reply to captain paranoia:

If the primary tool of observation is ‘mind’ then all things sit on a spectrum of subjectivity? 

1
 wintertree 13 Jun 2022
In reply to Marek:

> I think we need to be a bit more precise here.

Fair point.

> For a start, what do you mean by AI (it covers a lot of sins, historically?) If you mean Neural Networks, then you have a point, but that's only a subset of AI. With other AI methodologies, introspection - as in 'explain the decision making process' is quite feasible.

Anything I've yet read about, frankly - expert systems, MLP neural networks (up to and including recurrent convolutional ones), statistical classifiers etc. The creator of any of these systems can take the data embodied in its configuration and analyse it for insight into the decision making process - I love how visual this can be with convolutional networks, for example - but the human has to write the code that produces this insight; in no case can the implementation be asked to explain its learned decision making process.

> Part of the general problem, of course, is that you need to provide a clear, unambiguous and non-circular definition of introspection...

A very deep point that I think cuts to the nature of self awareness.  No artificial neural network can introspect.  Why?  Fundamentally any MLP - including convolutional ones and recurrent ones bounded to a finite number of cycles - can be reduced to a set of polynomials mapping the inputs to the outputs, just one layer deep.

  • Data pours in, mathematically transformed data falls out. 
  • Introspection is a process that results in learning - transformation of the network (polynomials for an MLP, wetware for us).

Perhaps this is where the MLP paradigm falls down so badly; all modification occurs during training and then the network is locked during operation.  Development is specifically forbidden.   All MLPs suffer from anterograde amnesia from the moment of their birth.  
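The "data pours in, transformed data falls out" point can be made concrete with a toy sketch (the weights below are invented purely for illustration, not taken from any real trained model):

```python
# A trained MLP at inference time is just a fixed function: the weights were
# set during training and are never modified afterwards.
# Toy 2-2-1 network with made-up, frozen weights and a tanh hidden layer.
import math

W1 = [[0.5, -0.3], [0.8, 0.1]]  # input -> hidden weights (frozen)
b1 = [0.1, -0.2]                # hidden biases (frozen)
W2 = [0.7, -0.4]                # hidden -> output weights (frozen)
b2 = 0.05                       # output bias (frozen)

def forward(x):
    """Pure function: data pours in, mathematically transformed data falls out."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# Running the network changes nothing inside it, so the same input always
# yields the same output: no learning, and nothing like introspection,
# happens during operation.
y1 = forward([1.0, 2.0])
y2 = forward([1.0, 2.0])
assert y1 == y2
```

The contrast with a brain, where operating and being modified are the same process, is exactly the anterograde-amnesia point above.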

 Andy Hardy 14 Jun 2022
In reply to schrodingers_dog:

> If the primary tool of observation is ‘mind’ then all things sit on a spectrum of subjectivity? 

*If*

cb294 14 Jun 2022
In reply to wintertree:

I don't think that the key to consciousness is introspection as in the ability to explain decision making processes. If it were, human consciousness would generally fail. There are plenty of experiments to demonstrate that the system can trigger responses to stimuli more quickly than we become aware of them, and the conscious decision making process is merely rationalising after the fact.

I think the requirements are simpler. At the very minimum, consciousness should comprise internal feedback within some data processing system, integrated to form some system-wide information about the state of the system (in mammalian brains encoded e.g. in serotonin levels); filtering and processing of competing internal and external information; and responding to this filtered information in such a way that the general system-wide activation state approaches some set value (reward system).

In principle such a system could be constructed in a simple ANN, or a worm brain, or maybe even a cnidarian or sponge neural net, and we would not conventionally call it conscious.

However, what we call consciousness when talking about humans is IMO the same thing, just incredibly more complex, probably scaling nonlinearly with the number of nodes / neurons (not fully exponentially, as there are structural constraints on the possible connections).

CB
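The minimal loop described here can be caricatured in a few lines; everything below (names, numbers, the blending rule) is invented purely to illustrate the shape of the idea, not to model any real system:

```python
# Crude sketch of the minimal feedback system described above: a single
# system-wide state variable (think "serotonin level") summarises the system,
# and the system responds to filtered input so that this state approaches a
# set value (the "reward system" setpoint).
SETPOINT = 1.0
GAIN = 0.5  # how strongly the system corrects towards the setpoint

def step(state, external_input):
    # Filter competing internal/external information: here, just a
    # weighted blend of the current state and the new input.
    integrated = 0.8 * state + 0.2 * external_input
    # Respond so the system-wide state moves towards the setpoint.
    return integrated + GAIN * (SETPOINT - integrated)

state = 0.0
for inp in [0.3, -0.1, 0.6, 0.2, 0.4]:
    state = step(state, inp)

# After a few steps the state has been pulled close to the setpoint,
# regardless of the noisy inputs.
assert abs(state - SETPOINT) < 0.5
```

As the post says, nothing here would conventionally be called conscious; the claim is only that human consciousness is this kind of loop at vastly greater scale.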

In reply to Andy Hardy:

Is there an alternative? 

1
 Andy Hardy 14 Jun 2022
In reply to schrodingers_dog:

Kitchen scales & the large hadron collider both do a job of observation.

In reply to Andy Hardy:

I see what you mean, but the human is observing that though, right? Even if an observational tool has strong internal and external reliability and validity (if that's the right terminology) I think it still sits on the spectrum of subjectivity?

Post edited at 09:25
cb294 14 Jun 2022
In reply to schrodingers_dog:

Yes, but the assumption must be that, quantum weirdness aside, there is an observer independent RWOT (real world out there) that can in principle be measured by multiple observers, whose individual, subjective measurements agree precisely because there is an RWOT.

This is the essence of scientific materialism, which has been proved* to be the correct description e.g. by its application in technology: Show me one faith based machine, or a therapy based on reincarnation....

Anything else ultimately leads to esoteric woo or solipsism, no idea which is worse!

CB

* in a scientific, pragmatic sense, i.e. until a system with larger explanatory power and better internal consistency comes along

 Andy Hardy 14 Jun 2022
In reply to schrodingers_dog:

The human is reading the weight on the scales, there is no subjectivity - if the scale reads 454g then I have 454g on the scale. Any human who can read numbers will read 454g

In reply to cb294:

I think it's in scientific materialism where our beliefs diverge; that's not to say I don't think science can be wonderful and awe inspiring. If we're talking about healing or therapy based on faith then I'd say, considering how poor the research is, it's all faith based. For example, I'd argue that a condition such as generalised anxiety may be more successfully treated with a protocol involving a tarot reading than with a standard CBT protocol. There seems to be an explosion in the search for divine or numinous experiences to deal with the 'pathology of normalcy' - as Fromm puts it in this short excerpt from an interview in the '70s.  youtube.com/watch?v=6m84CG5iRf4&

2

As it’s now verboten! for me to view or post in off-belay re the Russell Brand thread: whoever linked the Bohm video, thanks, it was wonderful. I’ve started reading On Dialogue (the irony isn’t lost on me); what a fantastic man he appears to be. 
I was up at Craig Arthur again today, it was like climbing in a fan assisted oven. But I have to say Digitron is particularly numinous. 

cb294 18 Jun 2022
In reply to schrodingers_dog:

Scientific materialism has one thing going for it, and that is the entire technological progress since the Renaissance. Anything before was tinkering, but putting our attempts to understand the world in a proper, universal framework and freeing ourselves from the shackles of faith has brought insight that in turn led to technology. This is the key difference between alchemy and chemistry!

To be clear, with progress I do not mean that anything we have created is good in an ethical sense, but that we have built an integrated world view that has massive explanatory power.

I.e., modern medicine including psychiatry could not work if it was diverging from chemistry or physics.

I really do not get why people deliberately want to bin all that tested insight and instead opt for esoteric woo like tarot, dowsing, or astrology, or believe in rebirth*. Show me the first faith powered car (no, not a Tesla) and I will be convinced.

To claim that the human mind is inscrutable, works in some way that is not materialistic, and therefore cannot be understood in materialistic terms goes in that direction. It is the cheap way out.

CB

*every woman and her dog were some princess in some previous life, not Jenny the pox ridden prostitute from Soho. Forgot who I stole that phrase from, but anyway, if animals are allowed you WILL end up as krill, a midge, or a soil nematode next time round!

 wbo2 18 Jun 2022
In reply to schrodingers_dog:  I once saw a stranded fish being attacked at one end by a seagull, and at the other by a cormorant.  I did wonder who they'd been in a former life to merit this existence.

 Richard J 18 Jun 2022
In reply to cb294:

> Scientific materialism has one thing going for it, and that is the entire technological progress since the Renaissance. Anything before was tinkering, but putting our attempts to understand the world in a proper, universal framework and freeing ourselves from the shackles of faith has brought insight that in turn led to technology. This is the key difference between alchemy and chemistry!

The trouble with scientific materialism, at least in its naive form, is that it's difficult to reconcile with what physics tells us.  You said above "quantum weirdness aside" - but "quantum weirdness" is what the universe is made of!  If we can say anything about a "real world out there", it's that it's a state function evolving in a Hilbert space of vast dimensionality.  What we think of as the "real world" is at best some emergent phenomenon that arises from this quantum weirdness in ways we still don't entirely understand.

Modern science, through its combination of systematic experiment and mathematically based theorising, has been enormously powerful.  A philosopher, though, would rightly chide you for mixing up an instrumental justification of science - "it produces technologies that work" - with a natural-philosophical claim - "it makes true claims about the fundamental nature of reality".  There are plenty of working technologies that have been produced on the basis of theories we know to be fundamentally wrong (for example, pretty much all chemical technology before the 19th century, or even maybe before Linus Pauling explained with quantum mechanics what a chemical bond actually is).

You could argue that the relationship between science and technology goes the other way.  Often the conceptual framework for science arises from metaphors that are derived from the dominant technologies of the day - Newtonian mechanics from clockwork, for example.  

So it's not surprising that the way we think about how our brains might work, and how minds arise from that, works through a metaphorical comparison with human-made computers.  I think those metaphors are often very misleading, all the more so when we forget what underlies the metaphor - think of expressions like "hard-wired", which become completely ridiculous when you remember what brains are actually made of.  

In reply to cb294:

Why will I end up as krill or a midge? Or am I personalising that and you were talking more generically? 

In reply to wbo2:

The answer to that would be, nature doesn’t care? 

cb294 18 Jun 2022
In reply to schrodingers_dog:

Nothing personal, just a joking claim about probability: If the system reincarnates human and animal souls in animals, as many versions of Eastern religions claim, there are just not enough humans being born to have even a minimal chance that a human is reborn human...

CB

 Offwidth 18 Jun 2022
In reply to cb294:

Well, as a counterpoint to wintertree, I think your writing is illustrative of dangerous developments in the way some scientists look at faith, and at times insultingly dismissive of legitimate academic subjects outside the physical sciences. Maybe I'm completely missing some very dark humour, but what you write comes across as that of a dumbed-down Dawkins (and I strongly disagree with his much more careful analysis). I would stand right beside you in calling out actual moral abuses of any faith, or in, say, exposing social scientists whose research results can't be repeated, and in highlighting the nonsense of the world of woo. But in a world where progress to a better informed humanity is so hard, making enemies of broadly good and rational people, and disparagement of legitimate academic subjects, is terrible strategy, bad ethics and has a high risk of making things worse. The last thing we need is lazy 'populist' style attitudes in science and more hate from militant atheism.

cb294 18 Jun 2022
In reply to Richard J:

The key point materialism has going for it is that it is a coherent system with high explanatory power. That it is incomplete, and knows it is incomplete, is actually a strength not a deficit. In that, the fact that quantum mechanics and general relativity are incompatible does not bother me as a biologist, if anything I find it fascinating, and expect that the approach that was successful over the centuries will eventually find some unifying theory.

Also, even if it is true that the RWOT emerges from quantum weirdness, in almost all instances quantum effects are irrelevant. Humans simply do not operate at the quantum scale (even if some of our technology certainly does). Give me one reason why brain function needs to be explained at the quantum level. Certainly, there is no evidence that the mind is some ill defined QM system operating in a probabilistic manner independent from a "classical" brain hardware.

Unfortunately, such QM explanations are often invoked completely gratuitously to explain badly understood stuff like the mind, similar to our ancestors invoking gods or magic to fill the gaps in their understanding.

Also, "hard wired" in the context of neurobiology is jargon, a shorthand for a well defined concept, i.e. a connective map that is not (or less) patterned by activity and use but exclusively (or at least more) by developmental genetics. That this does not hold 100% even in animals with mosaic development such as nematodes is well known, but of course you cannot always mention that in a discussion.

CB

 Richard J 18 Jun 2022
In reply to cb294:

> Give me one reason why brain function needs to be explained at the quantum level. Certainly, there is no evidence that the mind is some ill defined QM system operating in a probabilistic manner independent from a "classical" brain hardware.

One example: the van der Waals force, or dispersion force, is always present in any molecular interaction, in the brain or anywhere else.  This is properly understood as a fluctuation force that arises from quantum fluctuations in the em field between the molecules, and as such it has an intrinsic randomness.  That's why strict determinism doesn't, even in principle, apply if you were trying to model at a molecular level how a brain works.

There's a current discussion as well about how important quantum tunnelling and/or coherent transport is in various biological processes.  Very much so in photosynthesis, less convincing in other areas is my sense now, but who knows.  I am not an adherent of Penrose's views of the relevance of quantum computing to theories of mind.

cb294 18 Jun 2022
In reply to Richard J:

Yes, VdW is essentially random, but so what, biomolecules anyway behave in a random manner. Proteins are not rigid Lego blocks! If you dive deep enough, sure, QM will be at work, but the vast majority of processes, specifically including a neuron firing or not, a neuronal circuit being active, or a mind becoming conscious are operating several levels above, fundamentally emerging from QM but not directly reacting to quantum unpredictability at the level of individual molecules. Thermodynamics clearly dominates over quantum mechanics almost everywhere in biology.

Photosynthesis, biofluorescence and a handful of other processes work because of quantum effects at the single molecule level, but then again, it does not really matter for the function of the cell if some cyanobacterial antenna complex captures one particular photon or the next, or whether the activated state of some individual jellyfish fluorescent protein decays after 2 or 3 ns.

Anyway, Penrose is talking out of his arse, a shame really but unfortunately part of a growing trend. Too many theoretical physicists spout off about biology without really having a clue.

CB

Post edited at 12:02
 Richard J 18 Jun 2022
In reply to cb294:

> Yes, VdW is essentially random, but so what, biomolecules anyway behave in a random manner. Proteins are not rigid Lego blocks! If you dive deep enough, sure, QM will be at work, but the vast majority of processes, specifically including a neuron firing or not, a neuronal circuit being active, or a mind becoming conscious are operating several levels above, fundamentally emerging from QM but not directly reacting to quantum unpredictability at the level of individual molecules.

It doesn't matter in practice, but it matters in principle.  Lots is still written, for example, about whether free will is compatible with a deterministic understanding of biology at the molecular level. But that's all a waste of time if, at the molecular level, things fundamentally aren't deterministic because of the importance of quantum mechanics.

>Thermodynamics clearly dominates over quantum mechanics almost everywhere in biology.

To be more accurate, statistical mechanics dominates over everything, because at the molecular level, systems are too small to reach the thermodynamic limit.  If 30 years of single molecule biophysics has taught us anything, it's that the ensemble average matters less than the fluctuations.  So, once again, it comes down to that randomness, and where that randomness comes from.

To come back to the comparison between brains and computers, I think this underlines a fundamental difference.  Computers really are deterministic (more or less, given a cosmic ray or two), because we've put a lot of effort into engineering them to be that way.  We've built them to have a clean digital abstraction layer, which just doesn't exist in the biological systems.
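The thermodynamic-limit point has a simple quantitative face: for N independent two-state molecules, the relative size of the fluctuations scales as 1/sqrt(N). A back-of-envelope sketch (pure arithmetic, no claims about any specific system):

```python
# For N independent two-state molecules, each "on" with probability p, the
# occupancy has mean N*p and standard deviation sqrt(N*p*(1-p)), so the
# *relative* noise (sd/mean) scales as 1/sqrt(N).  Tiny numbers of molecules
# never get anywhere near the thermodynamic limit.
import math

def relative_noise(n, p=0.5):
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return sd / mean

# A mole-sized system is effectively deterministic...
print(relative_noise(6e23))   # ~1.3e-12
# ...two copies of a molecule in a single cell are anything but.
print(relative_noise(2))      # ~0.71
assert relative_noise(2) > 1000 * relative_noise(6e23)
```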

cb294 18 Jun 2022
In reply to Richard J:

I was tempted to write something along these lines in my last post, because I spend forever discussing with my students why we can talk about transcription factor binding to some promoter in terms of Kds etc., even if in any given cell we have a couple of hundred transcription factor molecules binding (or not) to one of the exactly two copies of that promoter present in the genome.

In my own research I have measured the 2D affinities between receptor subunits embedded in the membrane of living cells using methods based on fluctuations, but again we talk about Kds while counting individual molecules.

My point above was that we can legitimately do this and are free to ignore the quantum uncertainty that rules at the level of events occurring at the binding interface between the protein and DNA.

Expression of genes encoding ion channels, establishment of membrane potentials via these channels, and everything else a neuron does subsequently is even further layers away from the quantum realm, and there are only very few examples where quantum mechanic properties become relevant again later in the process.

The take home message for me is that we do not even have to invoke quantum uncertainty to know that a cell does not behave in a deterministic way, thermal noise is easily sufficient!

CB
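The two-copies point above can be cartooned in a few lines; all numbers here are invented, and this is a coin-flip caricature rather than a kinetic model:

```python
# Bulk Kd language describes an average, but with exactly two promoter copies
# per cell, thermal noise makes the moment-to-moment occupancy a coin flip.
import random

random.seed(1)
P_BOUND = 0.7  # equilibrium probability that one promoter copy is TF-bound
               # (set by [TF] and Kd in the bulk description; value invented)

def occupancy_trace(steps, copies=2):
    """Number of bound promoter copies at each time step."""
    return [sum(random.random() < P_BOUND for _ in range(copies))
            for _ in range(steps)]

trace = occupancy_trace(50)
# The long-run average approaches copies * P_BOUND = 1.4, but any single
# snapshot is 0, 1 or 2 bound copies -- the cell never sees "1.4".
assert set(trace) <= {0, 1, 2}
assert len(set(trace)) > 1  # the occupancy genuinely fluctuates
```

A proper Gillespie-style simulation would add binding/unbinding kinetics, but the take-home survives even in this crude form: the average is smooth, the single cell is not.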

 Marek 18 Jun 2022
In reply to Richard J:

> ...

> To come back to the comparison between brains and computers, I think this underlines a fundamental difference.  Computers really are deterministic (more or less, given a cosmic ray or two), because we've put a lot of effort into engineering them to be that way...

Yes and no. It depends on your exact definition of 'deterministic'. Theoretically and in the absence of external stimuli: Yes. In practice: No. Also any asynchronous boundaries (present in all non-trivial CPUs) will result in loss of determinism.

 Marek 18 Jun 2022
In reply to cb294:

> Anyway, Penrose is talking out of his arse, ...

With respect to his track record, I prefer to think of these 'ideas' as 'vague questions after a few too many beers' rather than some carefully considered proposals.

> ... Too many theoretical physicists spout off about biology without really having a clue.

Too many theoretical [X]ists spout off about [Y] without really having a clue. Pretty much true for any value of X and Y. Even when X=Y.

 Marek 18 Jun 2022
In reply to cb294:

> ... Thermodynamics clearly dominates over quantum mechanics almost everywhere in biology.

I think it's dangerous (from the point of view of scientific rigour and the tendency to encourage woolly thinking*) to make such a distinction between an emergent behaviour and the underlying mechanisms. You may be right to say that thermodynamics is a 'more useful everyday abstraction' in biology, but deep down thermodynamics is wholly a consequence of QM (and other simpler dynamical processes in between).

I tend to wince whenever I read the 'quantum weirdness' phrase. There is no 'weirdness' in QM unless you choose to misunderstand it. Any weirdness arises when it's applied incorrectly in ways outside of its domain. Stick to the maths (the language of QM) and the result may be non-intuitive (but then so are a lot of classical theories), but there's no weirdness. The apparent weirdness is a consequence of trying to conceptualise QM in words rather than just predicting the outcome of experiments with the maths.

cb294 18 Jun 2022
In reply to Marek:

> With respect to his track record, I prefer to think of these 'ideas' as 'vague questions after a few too many beers' rather than some carefully considered proposals.

> Too many theoretical [X]ists spout off about [Y] without really having a clue. Pretty much true for any value of X and Y. Even when X=Y.

I totally agree, and would not cast aspersions on his physics just because he went off in a bizarre direction here (where I can detect BS whereas the physics are obviously way above my pay grade and would not dare voice an opinion).

Unfortunately, he is not alone. I recently read a popular physics book by Lee Smolin which I quite liked until in the end he went off on a tangent from quantum gravity to evolution, neurobiology, ethics and the holocaust. Just why?

Biologists can be just as bad, e.g. trying to derive ethics from evolution, which can also have some ultra nasty consequences. Better leave ethics to the philosophers. However, the brain/mind thing IMO firmly belongs in the realm of neurobiology, evolution, and also information and computational sciences.

Also, as I just wrote in response to a post that was deleted by the time I tried to reply, I do admit to having my troll hat half on, making arguments that are more extreme and less nuanced than my true opinions, as a kind of sport - "competitive internet arguing" or whatever!

cb294 18 Jun 2022
In reply to Marek:

> I tend to wince whenever I read the 'quantum weirdness' phrase. There is no 'weirdness' in QM unless you choose to misunderstand it. Any weirdness arises when it's applied incorrectly in ways outside of its domain.

That is my point: as soon as you are talking about cells or brains you are too many layers removed from the actual QM effects for them to have any direct effect. The examples where cellular level processes can only be understood taking QM into account are pretty rare. Therefore, attempts to invoke QM phenomena for explaining consciousness seem misguided.

 Marek 18 Jun 2022
In reply to cb294:

> ...

> Unfortunately, he is not alone. I recently read a popular physics book by Lee Smolin which I quite liked until in the end he went off on a tangent from quantum gravity to evolution, neurobiology, ethics and the holocaust. Just why?

I do wonder if there's some influence from editors/publisher along the lines of "... and so what?", i.e., they're looking for some 'punchline' that'll resonate with the wider public (and increase sales). You should also avoid certain books by Max Tegmark...

cb294 18 Jun 2022
In reply to Marek:

Could be, at least that explanation makes sense to me

 Richard J 18 Jun 2022
In reply to cb294:

> The take home message for me is that we do not even have to invoke quantum uncertainty to know that a cell does not behave in a deterministic way, thermal noise is easily sufficient!

Thermal noise is indeed usually sufficient, my point is simply that to understand thermal noise, you need to invoke quantum mechanics.  You are arguing that your bit of science works because it corresponds to the real world out there, you're agreeing that randomness is a central part of what you need to invoke to explain what you see, so it seems quite important to me that you understand where in the real world that randomness comes from.  My argument is that fundamentally the randomness comes from quantum mechanics, and I've suggested a specific mechanism through which that happens.

cb294 18 Jun 2022
In reply to Richard J:

Absolutely, but as soon as your system adds layers of complexity you become uncoupled from the quantum effects and can ignore them, with the few well known exceptions. You have no way of going back from the behaviour of your protein complex to some quantum event having gone this way or the other.

Anyway, I think we largely agree but look at the same issue either bottom up or top down.

CB

 Jon Stewart 18 Jun 2022
In reply to cb294:

> Unfortunately, he is not alone. I recently read a popular physics book by Lee Smolin which I quite liked until in the end he went off on a tangent from quantum gravity to evolution, neurobiology, ethics and the holocaust. Just why?

What do you think of Sean Carroll? 

> Biologists can be just as bad, e.g. trying to derive ethics from evolution, which can also have some ultra nasty consequences. Better leave ethics to the philosophers.

I think if philosophers don't understand our moral intuitions in terms of biology, they're stumbling around in the dark.

> However, the brain/mind things IMO firmly belongs in the realm of neurobiology, evolution,

Sure.

> and also information and computational sciences.

Not so sure. Trouble is, most of them have taken the approach of denying that consciousness exists, which isn't a great starting point for a satisfactory explanation.

cb294 18 Jun 2022
In reply to Jon Stewart:

> What do you think of Sean Carroll? 

Don't know enough to comment, shooting from the hip

> I think if philosophers don't understand our moral intuitions in terms of biology, they're stumbling around in the dark.

Absolutely, our cultural veneer is only skin deep, and e.g. many religious precepts governing the role of women in society are a direct leftover from the evolution of human / primate mating systems.

Even leaving all the nasty social Darwinist crap aside, if you want to understand where the whole concept of dowries, facial coverings, forced self immolation of widows, rape as a strategy in conflict, etc. comes from you need to look no further than the mate/resource guarding polygyny that is the typical mating system for hominins and gorillas (but not chimps or bonobos).

However, while we should take this biological baggage into account to understand our psychology, the whole point of ethics should IMO be that we make rules to organize our societies in a way that we see fit, not merely to codify our evolutionary history. Evolved does not mean good in an ethical sense.

 Jon Stewart 18 Jun 2022
In reply to cb294:

I'm a huge fan of Sean Carroll - The Big Picture is one of these popular science books by a theoretical physicist which goes on about everything from quantum field theory to free will to gender identity, setting out a philosophy he calls "Poetic Naturalism" based firmly in the scientific worldview. It's absolutely brilliant and a great example of why scientists should sometimes step 'out of their lane'.

> Absolutely, our cultural veneer is only skin deep

> However, while we should take this biological baggage into account to understand our psychology, the whole point of ethics should IMO be that we should make rules to organize or societies in a way that we see fit, not merely to codify our evolutionary history. Evolved does not mean good in an ethical sense.

Totally agree. In practice, 'evolved' tends to mean 'contradictory' and the philosophers don't like that much. That said, there are quite appealing arguments for trying to settle on a shared understanding of morality which is based in, and explained by, our evolved tendency to cooperate (where it's in our interests). To get this off the ground, we need biologists and philosophers to talk to each other - and to replace the roles of priests and imams. 

 Marek 18 Jun 2022
In reply to Jon Stewart:

> ... Trouble is, most of them have taken the approach of denying that consciousness exists, which isn't a great starting point for a satisfactory explanation.

Not so much that it doesn't exist, more that until we can agree on an unambiguous and non-circular definition for it, talking about it just becomes a battle of semantics.

 Jon Stewart 18 Jun 2022
In reply to Marek:

If it's like something to be x, then x is conscious. 

What's wrong with that?

 Marek 18 Jun 2022
In reply to Jon Stewart:

Not sure I understand that. Do you mean like as in 'similar to' or like as in 'get pleasure from'?

 Jon Stewart 18 Jun 2022
In reply to Marek:

As in 'similar to'. Would you agree that it's like something to be you, and it's like something to be a bat, but it's not like anything to be a tomato? 

So if it's like something to be x, then x is conscious. 

Post edited at 21:04
 Marek 18 Jun 2022
In reply to Jon Stewart:

> As in 'similar to'. Would you agree that it's like something to be you, and it's like something to be a bat, but it's not like anything to be a tomato? 

> So if it's like something to be x, then x is conscious. 

Sorry, I'm not getting it  

Please explain what you mean by: "it's like something to be a bat". Being a bat is like being a bird (both have wings and can fly)? If so, then being a tomato is like being an apricot (both are fruit and have seeds)? I assume 'like being' = 'existing and having shared properties'.

Or perhaps your definition of 'being' is predicated on something more than simply 'existing' in which case I suspect it'll end up being a circular definition. Confused.

Post edited at 21:15
 Jon Stewart 18 Jun 2022
In reply to Marek:

> Sorry, I'm not getting it  

Oh no! It's someone who fails the Turing test

> Please explain what you man by: "it's like something to be a bat". Being a bat is like being a bird (both have wings and can fly)? If so, then being a tomato is like being an apricot (both are fruit and have seeds)? I assume 'like being' = 'existing and having shared properties'.

No that's not what I meant.

David Chalmers explains in the first 2 minutes here:

youtube.com/watch?v=uKIk5AL16Bg&

> Or perhaps your definition of 'being' is predicated on something more than simply 'existing' in which case I suspect it'll end up being a circular definition. Confused.

It's not circular. If it's like something to be x, then x has subjective experience, a first person perspective, and x is conscious. If it's not like anything to be x then x has no subjective experience and x is not conscious.

I simply don't believe people when they say they don't know what consciousness is. Everyone knows through direct experience that it's different to be awake than it is to be in a dreamless sleep. It's not like anything to be in a dreamless sleep, but it is like something to get up and brush your teeth. In precisely the same way, it's (most probably) like something to be a bat, but it's (most probably) not like anything to be a tomato.

 Marek 18 Jun 2022
In reply to Jon Stewart:

But surely the idea of a 'subjective experience' is itself predicated on consciousness which would make it circular, else how do you define 'subjective'? Or 'experience' for that matter.

And just because I may experience 'consciousness' doesn't mean that I know what it is. A bat experiences 'air' but doesn't know what it is. A tomato may experience sunshine but ...

 Jon Stewart 18 Jun 2022
In reply to Marek:

> But surely the idea of a 'subjective experience' is itself predicated on consciousness which would make it circular, else how do you define 'subjective'? Or 'experience' for that matter.

This is just the nature of what a definition is; it's not a special problem for consciousness. Can you define time or space without running into the same problem? But of course these are vital concepts for doing science; you can't just ignore them because you can't define them in terms that aren't just different words for the same thing.

> And just because I may experience 'consciousness' doesn't mean that I know what it is. A bat experiences 'air' but doesn't know what it is.

It probably does. It's what it flies through. It doesn't know that air is made of nitrogen and oxygen and whatnot, just like I don't know how consciousness works. But it knows the difference between air and a wall, just like I know the difference between consciousness and unconsciousness.

> A tomato may experience sunshine but...

I doubt it does.

Post edited at 22:21
 Marek 18 Jun 2022
In reply to Jon Stewart:

> This just the nature of what a definition is, it's not a special problem for consciousness.

That's my point: You are attempting to define something by referencing itself. That's not a good (useful) definition since that sort of logic can be used to 'prove' anything.

> Can you define time or space without running into the same problem?

Yes, you can. That's the whole point of 'background independent' theories like GR.

> ... just like I know the difference between consciousness and unconsciousness.

You only know what *your* experience of consciousness is. That doesn't allow you to say whether something else, be it a bat or a tomato, has similar experiences. It's purely internal to you. You have made an arbitrary choice that a bat is 'conscious' (in the same way as you are) and that a tomato is not, but that choice is only in your head and doesn't affect the bat or the tomato one iota.

 Jon Stewart 18 Jun 2022
In reply to Marek:

> That's my point: You are attempting to define something by referencing itself. That's not a good (useful) definition since that sort of logic can be used to 'prove' anything.

The same point applies to just about every other concept. E.g. define "computer" without just using different words for the same concept. It's not a valid challenge to the claim that consciousness is a real natural phenomenon that science should at least seek to explain.

> Yes, you can. That's the whole point of 'background independent' theories like GR.

We didn't have to wait for GR in order to define space and time - that theory gave us more insight. Space and time were valid concepts essential for describing the world long before any mathematical physical theories. Newton didn't say "I haven't got a non-circular definition of time, so I can't include it in my laws of motion". He knew what time was and did science with it.

> You only know what *your* experience of consciousness is.

I can only know that for sure, true.

> That doesn't allow you to say whether something else, be it a bat or a tomato, has similar experiences. It's purely internal to you. You have made an arbitrary choice that a bat is 'conscious' (in the same way as you are) and that a tomato is not, but that choice is only in your head and doesn't affect the bat or the tomato one iota.

Nothing arbitrary about it. I infer from evidence which things are conscious. Other humans are, mammals certainly seem to be, stuff like flies it's less clear, and organisms without nervous systems it seems very unlikely. Computer programs? Nope.

I don't think you genuinely believe that my judgement that a bat is conscious but a tomato is not is arbitrary. It's clearly based on the behaviour of those things.

 Marek 19 Jun 2022
In reply to Jon Stewart:

> The same point applies to just about every other concept. E.g. define "computer" without just using different words for the same concept...

Of course you can! I could define 'computer' without referencing computers. I generally don't bother because nothing much hinges on a precise definition - we don't have arguments about whether an Arduino is a computer or not. Well, I don't.

> We didn't have to wait for GR in order to define space and time - that theory gave us more insight. Space and time were valid concepts essential for describing the world long before any mathematical physical theories. Newton didn't say "I haven't got a non-circular definition of time, so I can't include it in my laws of motion". He knew what time was and did science with it.

No, he didn't know what it was. He made an *assumption* that it exists and formulated his laws based on that assumption. And those laws are only valid in the domain in which that assumption holds. Nowhere else. Your definition of 'consciousness' is based on your experience of it *in your head* and similarly is only valid there. Outside of your head - i.e., in bats or tomatoes - that definition is useless, just as Newton's Laws are not valid near a black hole.

> Nothing arbitrary about it. I infer from evidence which things are conscious. Other humans are, mammals certainly seem to be, stuff like flies it's less clear, and organisms without nervous systems it seems very unlikely. Computer programs? Nope.

The very fact that you say 'seem to be' and 'less clear' and 'unlikely' shows that your personal definition of consciousness is not particularly useful, since it can't be used to distinguish between things that are and things that are not.

> I don't think you genuinely believe that my judgement that a bat is conscious but a tomato is not is arbitrary. It's clearly based on the behaviour of those things.

Yes, I do. Behaviour? What precise behaviour is required for something to be given the accolade of consciousness (and lack-of means no consciousness)? And no circular definitions please - e.g., you can't say (for instance) that the defining behaviour is 'thinking' when thinking is defined as something done by conscious things.

 Jon Stewart 19 Jun 2022
In reply to Marek:

> Of course you can! I could define 'computer' without referencing computers. 

Without referencing the concept of computation is what's required. Shall I take it on trust that you can do this?

> Your definition of 'consciousness' is based on your experience of it *in your head*

Correct, this is the nature of consciousness. Can't do anything about that.

> The very fact that you say 'seem to be' and 'less clear' and 'unlikely' shows that your personal definition of consciousness is not particularly useful, since it can't be used to distinguish between things that are and things that are not.

There's a problem here: you're making a circular argument.

We have a natural phenomenon, consciousness, which we'd like to explain scientifically. It's a 'hard problem' because consciousness exists subjectively for each conscious subject, its nature cannot be directly observed by a third person. We don't know how the brain generates this first-person experience, the what-it's-likeness, but we know that somehow it does. We all experience it every waking moment, so we can know with complete certainty that it's real (Descartes was right about that, but not much else).

You say, "we can't study it because we don't have a definition". I give you the philosopher's definition (I didn't make it up, it's from Thomas Nagel, taken on by David Chalmers and pretty much all other contemporary philosophers of mind who don't deny its existence) so we know what it is we're talking about.

You say "that's not good enough, I need a definition that *explains* consciousness in terms of objective third-person phenomena, so I can tell precisely which third-person phenomena have it".

Well, that's the problem we started out with. You say we can't solve it because we don't have a definition, and then you say we can't have a definition because we haven't solved the problem. It's not very helpful.

> Yes, I do. Behaviour? What precise behaviour is required for something to be given the accolade of consciousness (and lack-of means no consciousness)? And no circular definitions please - e.g., you can't say (for instance) that the defining behaviour is 'thinking' when thinking is defined as something done by conscious things.

It's a good question to get thinking about what's required for consciousness, but sadly we don't have the complete answer yet because we haven't solved the problem. But what do we know? We know with great confidence that humans, apes, mammals, birds and other animals that exhibit complex social behaviour are conscious. As creatures get simpler, we have less confidence. Is an ant conscious? My bet is, probably, a bit. A worm? A bit less. Bacteria, plants, fungi - no most probably not.

Just to clarify: are you completely agnostic about the existence of other minds? When you talk to other people, do you do so as if they were conscious just on the off-chance? What about animals? Again, do you genuinely have absolutely no idea whether, say, a horse is conscious? If you see pain being inflicted on another person or animal, how do you react? I think you're every bit as sure as I am that other people and non-human animals that have nervous systems like ours and exhibit complex social behaviour are conscious. There's nothing arbitrary about it - not knowing where it fades as we look to less similar creatures does not make it arbitrary, it makes it difficult.

 john arran 19 Jun 2022
In reply to Jon Stewart:

If an ant sees another coming and steps aside to let it pass, is that sufficient to conclude consciousness, as it's aware of its own self and the space it's occupying?

 Marek 19 Jun 2022
In reply to john arran:

> If an ant sees another coming and steps aside to let it pass, is that sufficient to conclude consciousness, as it's aware of its own self and the space it's occupying?

I could build you a robot (OK, two robots) which would exhibit exactly that behaviour. Would you conclude that they have consciousness?

 Marek 19 Jun 2022
In reply to Jon Stewart:

> Without referencing the concept of computation is what's required. Shall I take it on trust that you can do this?

For the avoidance of doubt and as an example: I would (roughly) define a computer as an object which can take human-accessible representations of integers, perform any non-null subset of addition, subtraction, multiplication and division, and return another human-accessible representation of the result. Nothing in that definition requires a concept of a computer (or computation), since integers and these operations are explicitly defined in the fundamentals of pure maths (in terms of set theory). If another person were to accept that definition then there would be no ambiguity between us as to whether something was a 'computer' or not - e.g., an Arduino would be one but an abacus would not.

> You say, "we can't study it because we don't have a definition".

No, that's not what I'm saying. We can and should certainly study it, but until we have a 'good' definition (as explained above) we have to be careful how we apply the concept of consciousness to anything other than our own experiences.

> I give you the philosopher's definition (I didn't make it up, it's from Thomas Nagel, taken on by David Chalmers and pretty much all other contemporary philosophers of mind who don't deny its existence) so we know what it is we're talking about.

Sorry, but a 'philosopher's definition' (in my book) is a just a way of saying "There is no definition". That's pretty much what philosophy is: A load of untestable waffle.

> It's a good question to get thinking about what's required for consciousness, but sadly we don't have the complete answer yet because we haven't solved the problem. But what do we know? We know with great confidence that humans, apes, mammals, birds and other animals that exhibit complex social behaviour are conscious. As creatures get simpler, we have less confidence. Is an ant conscious? My bet is, probably, a bit. A worm? A bit less. Bacteria, plants, fungi - no most probably not.

Relying on complex behaviour is tempting, but dangerous for a number of reasons: (a) Complexity is a multi-dimensional continuum so it can't really provide any binary distinction such as is-or-is-not-conscious, and (b) Very simple systems can exhibit very complex behaviour. As with John Arran's suggestions, I could build a relatively simple robot that to you would exhibit effectively infinitely complex behaviour yet you wouldn't call it conscious. I don't think 'complexity of behaviour' is going to get you anywhere.

> Just clarify: are you completely agnostic about the existence of other minds? When you talk to other people, do you do so as if they were conscious just on the off-chance? What about animals? Again, do you genuinely have absolutely no idea whether say, a horse, is conscious? If you see pain being inflicted on another person or animal, how do you react?

No, I *believe* that other entities have consciousness, but I can't say that I *know* that they have, because I don't actually know what consciousness is and therefore how to measure or detect it.

> I think you're every bit as sure as I am that other people and non-human animals that have nervous systems like ours and exhibit complex social behaviour are conscious. There's nothing arbitrary about it - not knowing where it fades as we look to less similar creatures does not make it arbitrary, it makes it difficult.

That's correct, it is difficult. The arbitrary aspect arises (and is the bit I dispute) once you start saying something (other than yourself) is or is-not conscious - i.e., introduce an 'arbitrary' boundary for which you have no evidence. Your own word 'fades' implies no hard boundary, but a continuum from 'full consciousness' (human or more?) through lesser consciousness and minimal consciousness to zero consciousness (an Arduino). That may actually be a more useful concept - who knows - but most of the prior arguments have been based on a binary is-or-is-not-conscious premise.

cb294 19 Jun 2022
In reply to Marek:

> Relying on complex behaviour is tempting, but dangerous for a number of reasons: (a) Complexity is a multi-dimensional continuum so it can't really provide any binary distinction such as is-or-is-not-conscious, and (b) Very simple systems can exhibit very complex behaviour.

As you argue below, the conclusion from (a) should be that we need to discard the idea that consciousness is either there or absent. That binary only holds true (more or less) for undoubtedly conscious organisms that you render unconscious by punching them in the head (more subtle means are available).

However, aspects of consciousness such as self-awareness, which is essential in most definitions of consciousness, can also easily be manipulated, e.g. using drugs such as ketamine.

Interestingly, these drugs also work in organisms that are not generally considered conscious, e.g. insects.

Looking across evolution shows us plenty of animals that, e.g., show evidence of a theory of mind while at the same time being unable to recognize their reflection in a mirror (various bird species, for example). Are these animals conscious? Can they feel emotions such as fear?

IMO certainly, but that does not mean that they are quantitatively as "conscious" as we are; they will not, e.g., reflect on their place in the universe or discuss whether other beings are conscious.

Any discussion of consciousness that does not take these insights from evolution into account is essentially pointless.

CB

 Marek 19 Jun 2022
In reply to cb294:

It also begs the question of whether we are confusing two things: the simple 'consciousness' that goes on-off when we wake up and fall asleep and the more subtle 'consciousness' that underlies (somehow) the idea that we are sentient beings and different from computers. 

cb294 19 Jun 2022
In reply to Marek:

Same thing.

 john arran 19 Jun 2022
In reply to Marek:

> I could build you a robot (OK, two robots) which would exhibit exactly that behaviour. Would you conclude that they have consciousness?

No I wouldn't, but mainly because you as the designer will have explicitly programmed in that behaviour; the robot would be simply an automaton following instructions.

But imagine a robot which has the mechanical ability to move but has never been programmed with any sense of what movement is or why one might want to move. Were such a robot to itself learn (maybe by observing and copying other entities) to move so as to let others pass and thereby gain some advantage for itself or for others, I'd argue that developing such awareness of its own place in the world, and choosing how and when to apply this awareness and knowledge, may well constitute the rudiments of sentience.

I'm also wondering whether consciousness may in fact be little more than a quick reference guide to our own brain, a short-cut way to assess and activate behaviours more quickly, a bit like hovering your foot over the brake when you perceive a potential road incident so as to cut down on the amount of processing and action we need to do in a hurry if such an incident ensues.

 Marek 19 Jun 2022
In reply to john arran:

> No I wouldn't, but mainly because you as the designer will have explicitly programmed in that behaviour; the robot would be simply an automaton following instructions.

Ahh, OK. Perhaps I should have said that you weren't to know that it was a designed robot? That all you could see was its behaviour? Where would that leave you?

> But imagine a robot which has the mechanical ability to move but has never been programmed with any sense of what movement is or why one might want to move. Were such a robot to itself learn (maybe by observing and copying other entities) to move...

That's not far from what supervised learning in an NN is about - you let them do things and feed back 'rewards' to reinforce 'correct' behaviour. It's commonly done in image analysis, but there's no reason (other than pragmatic ones of cost and safety) why you couldn't do the same with movement. You might get some very unexpected results. A few years back someone used a similar technique to 'teach' an FPGA to detect certain patterns in a signal. It worked, in that it got quite good at the job, but they could never figure out how it was doing it. An analysis of the 'self-designed logic' suggested nothing of relevance and it was clear that it was very different from the way a human would design an FPGA to do that job.
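(Strictly, the reward-feedback loop described here is closer to reinforcement learning than supervised learning. Stripped of the neural network, the core idea can be sketched as a two-armed bandit - payoff probabilities entirely invented:)

```python
# Toy reward-feedback learner: an epsilon-greedy agent tries two actions and
# learns, purely from reward signals, which one pays off more often.
# The payoff probabilities are made up for illustration.

import random

def run_bandit(payoffs, steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(payoffs)   # running estimate of each action's value
    counts = [0] * len(payoffs)
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore at random
            a = rng.randrange(len(payoffs))
        else:                                           # exploit best estimate
            a = max(range(len(payoffs)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < payoffs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental mean
    return values

values = run_bandit([0.2, 0.8])   # action 1 is genuinely better
print(values)                     # the estimates approach the true payoffs
```

The agent is never told which action is 'correct'; the behaviour emerges from the reward statistics alone, which is roughly what happened with the FPGA.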

 Jon Stewart 19 Jun 2022
In reply to john arran:

> If an ant sees another coming and steps aside to let it pass, is that sufficient to conclude consciousness, as it's aware of its own self and the space it's occupying?

No, as Marek says, you could easily build a robot that could do that, and I'd bet my house that it wasn't conscious (it would be a safe bet too, since we don't have a consciousness detector to prove me wrong).

My view (Anil Seth's view, which I agree with) is that consciousness is a biological phenomenon associated with the control of an organism's behaviour, where this is done by a specific type of predictive processing. Indeed, the 'Bayesian Brain' of the OP. The idea is that the brain generates best-guess models of both its internal state and of the external world, and uses incoming data from the senses (including interoception) to constantly update these models, continually minimising prediction error. It's this type of information processing that an ant might be doing but a robot isn't.
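The prediction-error loop can be sketched in a few lines of code (a toy illustration only, not Seth's actual model - the fixed weighting and the numbers are invented):

```python
# Toy predictive-processing loop: the agent holds a best-guess estimate of a
# hidden quantity and nudges it toward each sensory sample in proportion to
# the prediction error. All numbers are illustrative.

def update_estimate(estimate, sample, sensory_weight=0.3):
    prediction_error = sample - estimate           # the 'surprise'
    return estimate + sensory_weight * prediction_error

def simulate(true_value, samples, estimate=0.0, sensory_weight=0.3):
    errors = []
    for s in samples:
        errors.append(abs(true_value - estimate))  # how wrong the model is
        estimate = update_estimate(estimate, s, sensory_weight)
    return estimate, errors

final, errs = simulate(10.0, [10.0] * 20)  # noise-free stream of samples
print(round(final, 3))  # 9.992: the internal model has converged on the truth
```

The interesting part of the real theory is that the same loop is claimed to run over the organism's *internal* states too, not just the outside world.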

I think this is a bit of a start towards a theory of consciousness, I find something about it compelling. Once developed, we might be able to examine the way that an animal, or alien, or AI robot was processing information and have a good idea whether it was conscious and what its consciousness might be like. We might be able to build an AI that worked like this and have good reason to think it's conscious (although this would be a terrible idea).

 Marek 19 Jun 2022
In reply to Jon Stewart:

I don't think I'd argue much with what you've said, but...

1. I'd avoid references to 'Bayesian Brain' - that's just getting-on-the-Bayesian-bandwagon nonsense. Bayes derived a very specific algorithm for updating probabilities based on prior beliefs and new events. I'm pretty sure a learning brain does *not* use Bayes' algorithm. Yes, it 'learns' by updating its internal models of the world based on events, but to call it 'Bayesian' does both Bayes and the brain a disservice.

> ... It's this type of information processing that an ant might be doing but a robot isn't

2. But a robot could quite easily. Again, I could build a robot* which uses Bayes' algorithm to create and update an internal model of its surroundings as it trundles round. I'd be surprised if no one's done that, at least as a research project. That would fit your description 100%. There are certainly plenty of applications - as opposed to robots - which do that sort of thing all the time.

* I've just realised that I have a 'robot' already that does what you describe - albeit a pretty trivial example. I have an astrophotography tracker that has an internal model of how the earth rotates and how the atmosphere might refract light (its 'world'). When running, it takes input from a camera about the location of a star in the sky (which wobbles around due to turbulence, so is somewhat probabilistic, and is perturbed by physical deficiencies in the mechanical design) and updates its internal model of where it is pointing (in celestial coords) at that time. It then works out how to move the telescope to minimise errors in its movement in the future. Trivial perhaps, but it fits your description. And it doesn't use Bayes' algorithm (but perhaps should).
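(The trundling-robot version is essentially a discrete Bayes filter. A minimal sketch, with a hypothetical world and invented sensor/motion probabilities:)

```python
# Minimal discrete Bayes filter: a robot on a cyclic 1-D track keeps a belief
# distribution over which cell it occupies, blurring it on movement and
# sharpening it on each sensor reading. All probabilities are invented.

WORLD = ["door", "wall", "wall", "wall", "wall"]   # what a perfect sensor sees

def move_right(belief, p_move=0.8):
    """Motion update: probably shifted one cell right, maybe stayed put."""
    n = len(belief)
    return [p_move * belief[(i - 1) % n] + (1 - p_move) * belief[i]
            for i in range(n)]

def sense(belief, reading, p_hit=0.9, p_miss=0.1):
    """Measurement update: reweight cells by how well they explain the reading."""
    weighted = [b * (p_hit if WORLD[i] == reading else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

belief = [1 / 5] * 5                  # start with no idea where we are
belief = sense(belief, "door")        # saw a door: probably at cell 0
belief = move_right(belief)           # drove one cell to the right
belief = sense(belief, "wall")        # now seeing wall
print(max(range(5), key=lambda i: belief[i]))  # 1: the most probable cell
```

Which rather supports the point: the loop is easy to build, so on its own it can't be what makes consciousness special.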

So yes, the ability to 'learn & adapt' is probably *necessary* for any reasonable definition of consciousness, but on its own it doesn't really get you very far down the track of figuring out what is 'special' about consciousness.

 Jon Stewart 19 Jun 2022
In reply to Marek:

> I would (roughly) define an object which can take human accessible representations of integers, perform any non-null subset of addition, subtraction, multiplication and division and return another human accessible representation of the result.

OK sounds like a good definition - perhaps I should have asked for a non-circular definition of time. But either way, you don't want to accept a definition of consciousness that refers to what it is - subjective experience - so you're never going to get one. 

It might turn out that Integrated Information Theory (IIT) is correct, and that consciousness is identical to a precisely defined quantity labelled phi. This theory implies that while humans have very high phi, even systems that we would never consider conscious have non-zero phi, so it's a kind of sophisticated mathematical panpsychism. Note though, that in developing IIT, Tononi didn't start with any definition of consciousness in terms of third-person observable phenomena, he did exactly the opposite and asked "what is my consciousness like from inside?"

Here he is explaining it:

youtube.com/watch?v=-TP49-l46EU&

> No, that's not what I'm saying. We can and should certainly study it, but until we have a 'good' definition (as explained above) we have to be careful how we apply the concept of consciousness to anything other than our own experiences.

OK, I think we agree here, but what you call a 'definition' of consciousness is what I'd call a 'theory' of consciousness. A theory of consciousness like IIT would allow us to know what things are conscious, and how much. But you're not going to get what you call a 'definition' until we know how consciousness works.

> Sorry, but a 'philosopher's definition' (in my book) is a just a way of saying "There is no definition". That's pretty much what philosophy is: A load of untestable waffle.

Of course there's a definition, we're having a coherent conversation about it, and we agree that it's subjective experience which we're sure each of us has. I know it's like something to be me, and you know it's like something to be you. Neither of us knows if it's like anything to be an ant. If it had no definition it would just be a meaningless bunch of letters.

Philosophy is what you have to do before you have scientific theory. If it was testable, it would be science, not philosophy! I don't think philosophers are going to solve the hard problem of consciousness, I think neuroscientists might do. Computer scientists definitely won't. But without philosophy we wouldn't have a conversation about consciousness, we'd be trying to ignore it, as has been the fashion in brain science for most of its history.

> Relying on complex behaviour is tempting

Clarification: I don't think that anything that exhibits complex behaviour is conscious, I was responding to your claim that my distinctions between human, bat and tomato were arbitrary. The complex behaviour is an additional requirement to being a living thing.

> No, I *believe* that other entities have consciousness, but I can't say that I *know* that they have because I don't actually know what consciousness is and therefor how to measure or detect it. 

Now you're acting like a philosopher! We're not trying to find deductive knowledge that has 100% certainty, we're trying to get scientific knowledge, a good description that fits with the evidence. Sure, I can't know for certain if other minds exist, but that's a philosophical problem that has no solution. I want beliefs supported by good reasons, e.g. that humans are conscious but tomatoes are not. No scientific knowledge has the level of certainty you're demanding here, that's not what science is for. I don't *know* that the sun will rise tomorrow, but I've got good reasons for my belief that it will.

> That's correct, it is difficult. The arbitrary aspect arises (and is the bit I dispute) once you start saying something (other than yourself) is or is-not conscious - i.e., introduce an 'arbitrary' boundary for which your have no evidence. Your own word 'fades' implies no hard boundary, but a continuum from 'full consciousness' (human or more?) through lesser consciousness and minimal consciousness to zero consciousness (an Arduino). That may actually be a more useful concept - who knows - but most of the prior arguments have been based on a binary is-or-is-not-conscious premise.

As you can see, I think the best contenders for theories of consciousness don't provide a binary cut-off, they imply a sliding scale. Under IIT, it's a sliding scale right down to pocket calculators, but Anil Seth's predictive processing stuff does something more like setting a minimum requirement. I agree with Seth that consciousness is a biological phenomenon that has come about by the evolution of things that want to maintain their existence in the world by acting on it. But these contenders don't start out by defining consciousness other than by the philosopher's definition I gave. A definition in terms of third person observable phenomena, and which allows you to identify whether the ant is or is not conscious is the result of finding the right theory, it's not going to suddenly come out of thin air after millennia of trying.

Max Tegmark has a nice way of putting what he thinks a theory of consciousness needs to show. He says that "perhaps consciousness is the way information feels when it's being processed in certain complex ways". Untestable philosophical waffle from a physicist. IIT and Anil Seth's ideas both give some structure to what these "complex ways" might be.

Post edited at 18:51
 Jon Stewart 19 Jun 2022
In reply to Marek:

> I don't think I'd argue much with what you've said, but...

> 1. I'd avoid references to 'Bayesian Brain'

It's well-used in neuroscience...

> 2. But a robot could quite easily. Again, I could build a robot* which uses Bayes' algorithm to create and update an internal model of its surroundings as it trundles round. 

Sorry, I'm not doing a very good job of summarising Anil Seth's ideas from his book Being You. There's rather more to it than having an internal model of the world and updating it with sensory data: particularly having a model of internal states and acting on them to minimise prediction error. I wouldn't rely on me to get a proper picture of his ideas.
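To be clear about the bare Bayes step in your robot example (nothing like Seth's full model, and all the numbers below are made up for illustration): the robot holds a prior belief, gets a sensor reading, and reweights by the likelihood.

```python
# Toy Bayesian update: a robot's belief about which of two rooms it is in.
# Priors and likelihoods are invented numbers, purely for illustration.
prior = {"room_a": 0.5, "room_b": 0.5}

# Assumed sensor model: likelihood of observing "bright light" in each room.
likelihood = {"room_a": 0.9, "room_b": 0.2}

# Bayes' rule: posterior is proportional to likelihood x prior.
unnorm = {room: likelihood[room] * prior[room] for room in prior}
total = sum(unnorm.values())
posterior = {room: p / total for room, p in unnorm.items()}

print(posterior)  # belief shifts strongly towards room_a (~0.82)
```

That updating loop is all the robot needs; the interesting part of Seth's account is everything layered on top of it.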

> So yes, the ability to 'learn & adapt' is probably *necessary* for any reasonable definition of consciousness, but on it's own it doesn't really get you very far down the track of figuring out what is 'special' about consciousness.

Yes, agreed, there's a lot more to it.

 Marek 19 Jun 2022
In reply to Jon Stewart:

> OK sounds like a good definition - perhaps I should have asked for a non-circular definition of time... 

In which case I'd have wimped out and pointed you at a good introduction to the maths of GR.

> It might turn out that Integrated Information Theory (IIT) is correct...

OK, that's new to me. Thanks. From a quick scan it seems a defensible approach (in my book). I'll read on (I'd rather read than listen to videos).

> OK, I think we agree here, ... But you're not going to get what you call a 'definition' until we know how consciousness works.

It's the usual chicken-and-egg situation. You posit a definition/theory and see if that gets you anywhere in terms of a deeper understanding of what's going on. If not, try something else. The two things ('what' and 'how') typically go hand-in-hand. Like 'theory' and 'practice'. One without the other is sterile and pointless (c.f. String Theory, but don't get me started on that!).

> Of course there's a definition, we're having a coherent conversation about it, and we agree that it's subjective experience which we're sure each of us has. I know it's like something to be me, and you know it's like something to be you. Neither of us knows if it's like anything to be an ant. If it had no definition it would just be a meaningless bunch of letters.

That's what I worry about. We both have an 'intuitive' sense of what our own consciousness is, but we can't put it into words (or better, numbers). So we can't communicate and develop a shared model of what it is - when we try, we end up with "a meaningless bunch of letters". We *assume* (largely by the application of Occam's Razor) that our experiences of consciousness are at least similar (but significantly different from that of an ant), but we struggle to turn that assumption into anything more concrete.

> ... I don't think philosophers are going to solve the hard problem of consciousness, I think neuroscientists might do. Computer scientists definitely won't. But without philosophy we wouldn't have a conversation about consciousness, we'd be trying to ignore it, as has been the fashion in brain science for most of its history.

But for all those centuries of philosophising about consciousness, what have they achieved? Zilch! Neuroscience has probably done more useful work in just a few years.

> Now you're acting like a philosopher!

Oops!

> We're not trying to find deductive knowledge that has 100% certainty, we're trying to get scientific knowledge, a good description that fits with the evidence.

Agreed. Nothing is impossible, nothing is certain. But there is a pragmatic dividing line - actually very blurred - between this-is-so-woolly-I'll-call-'insufficient-data' and this-is-plausible-enough-to-build-on-it.

> Max Tegmark has a nice way of putting what he thinks a theory of consciousness needs to show. He says that "perhaps consciousness is the way information feels ...".

That's the sort of thing that makes me throw books at walls!

> ... Untestable philosophical waffle from a physicist...

They're all prone to it and I don't have a problem with that - in fact I think it's probably a requirement for theoretical physicists - but they really should exercise that vice in private. Or at most with other consenting physicists.

 john arran 19 Jun 2022
In reply to Marek:

> So yes, the ability to 'learn & adapt' is probably *necessary* for any reasonable definition of consciousness, but on it's own it doesn't really get you very far down the track of figuring out what is 'special' about consciousness.

My question would be: why are we presuming that consciousness is in any way 'special' at all? Presumably just because it 'feels' so. And given that consciousness appears to be our primary way of accessing the underlying workings of our brain, and therefore of the world around us too, it's perhaps understandable that we afford it high prominence.

But if, as I postulated earlier, consciousness turns out simply to be an evolved way to improve the efficiency of our brain's response times, perhaps computational entities will end up following a different route to an optimal survival strategy and will outperform humans while doing away with consciousness altogether.

 wintertree 19 Jun 2022
In reply to Richard J:

> There's a current discussion as well about how important quantum tunnelling and/or coherent transport is in various biological processes.  Very much so in photosynthesis, less convincing in other areas is my sense now but who knows.  I am not an adherent of Penrose's views of the relevance of quantum computing to theories of mind.

At least Penrose isn’t off measuring brain sizes and publicising spurious conclusions on those…

There are examples of “non trivial” quantum effects potentially cropping up at the cellular scale as you say.  I recall some popular science articles on potential quantum entanglement in avian vision sensing the magnetic field; I haven’t read the literature on it.

People tend to write all non-trivial quantum stuff off at room temperature; Peratech’s pressure sensitive resistance material and now PTC heaters are an example of tunnelling achieving something useful at room temperature in a non biological system.  I’d have to think a bit on the XOR gate but I suspect the PTC stuff could build a tunnelling based gate set capable of building Turing complete systems.  I mean, if dominos can do it…  

Diffusing chemicals and the membrane conditions around ion channels (*) both alter the neural network behaviour.  Even just at a classical level, there’s a lot of physics there with different spatial connectivity to the neurones that isn’t incorporated in modern ANN/MLP work.  Those auxiliary networks will add a significant degree of complexity to the ensemble system.  I see the field as a bit like trying to re-create the sounds of an orchestra by throwing millions of cheap, identical xylophones at the problem.  Aka SpiNNaker.

Some anaesthesia being reversible by putting someone under high atmospheric pressure in a hyperbaric chamber is a mad example.  Might be linked to a critical point in the phase diagram for the plasma membrane when it comes to mixing scales of different phospholipids, how that affects membrane structure around ion channels, how the structure affects the channels’ thresholds and how transmembrane pressure shifts the critical point….  

Post edited at 19:50

How does this discussion on the nature of consciousness fit in with the double slit experiment and Bohm’s quantum potential and the implicate order e.g. the alteration of the behaviour of particles when under observation? Note - I’m not a scientist, so my understanding of this is basic to say the least 

 Richard J 19 Jun 2022
In reply to schrodingers_dog:

Because of the importance of "making an observation" as the cause of the collapse of the wave function in quantum mechanics (i.e. a particle "deciding" to go through one slit or the other rather than some superposition of both paths), some people made the assumption that the "observer" had to be conscious, and thus that consciousness played a central role in qm.  This did cause a few physicists to disappear down some rabbit holes.  Bohm had a perfectly respectable theory to get round some of the paradoxes of qm, which turned out to be experimentally disproved, and he built a wider world view on the back of that.  More recently the physics consensus has moved to thinking about "making an observation" less in terms of a conscious intervention, more in terms of any interaction with a macroscopic object.  This has its own difficulties in that ultimately the macroscopic object should be describable by quantum mechanics too, and we know that any wholly quantum mechanical system evolves smoothly and deterministically without any abrupt changes.  Philip Ball's recent book "Beyond Weird" is very good on some possible resolutions.

 Richard J 19 Jun 2022
In reply to wintertree:

Indeed, neuromorphic computing may be an interesting way of trying out some radically different architectures, but it's difficult to see that it can provide a great deal of insight into how brains work, given the huge mutability and plasticity of neurons.  It should be clear that the basic unit of biological computing is something much smaller than a neuron.  In fact, there's a lot of biological computing that happens without any neurons at all, just through networks of interacting molecules.

 Richard J 19 Jun 2022
In reply to Jon Stewart:

I wonder if you've read "The ancient origins of consciousness" by Feinberg & Mallat?  If not, I think you'd find it very interesting.  It starts with the Nagel quote, but then assembles a lot of evidence from comparative neuroanatomy and evolutionary history to develop the idea of consciousness building up in stages from an organism using its sensory apparatus to build a constantly updated model of its environment.  

https://mitpress.mit.edu/books/ancient-origins-consciousness

 wintertree 19 Jun 2022
In reply to Richard J:

>  It should be clear that the basic unit of biological computing is something much smaller than a neuron.  In fact, there's a lot of biological computing that happens without any neurons at all, just through networks of interacting molecules.

Quite.  

I worked up a model of interacting IP3R clusters a few years back; 3 clusters could produce (statistical) AND, OR and XOR gates if arranged properly; I was a bit late to the "build a sub-cellular logic gate" trend, and left academia before I wrote it up.  I still want to publish the efficient computational basis of it...  I went down a rabbit hole of finding out that none of the published Markov models of IP3R receptor behaviour could replicate sub-cellular signalling behaviour; they never inhibited well enough after an opening event.

Given the complex behaviour of a single cell, they must have to do an incredible amount of computation just to continue existing, even before they carry out functions like transmitting a neuronal impulse.  The idea that these layers can be fully decoupled doesn't really reflect anything anywhere else in the way this goop works.

 Jon Stewart 19 Jun 2022
In reply to Marek:

> That's what I worry about. We both have an 'intuitive' sense of what our own consciousness is, but we can't put it into words

We really can put it into words, here's the original text from Thomas Nagel's 'What Is It Like To Be A Bat?':

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism.

I find it perfectly clear what is being said here.

> (or better, numbers). So we can't communicate and develop a shared model of what it is - when we try, we end up with "a meaningless bunch of letters". We *assume* (largely by the application of Occam's Razor) that our experiences of consciousness are at least similar (but significantly different from that of and ant), but we struugle to turn that assumption into anything more concrete.

> But for all those centuries of philosophising about consciousness, what have they acheived? Zilch! Neuroscience has probably done more useful work in just a few years.

I agree that philosophy hasn't achieved progress on achieving the shared model that tells us about what it is in terms of observable phenomena. But that isn't the point of philosophy. In philosophy of mind, the point is to explore all the whacky options, like "what if consciousness is an illusion?" (it isn't, but there's an argument); or "what if consciousness is present in the basic constituents of all matter" (it isn't, but there's an argument); or "what if the material world can be explained as existing only within consciousness?" (it can't, but there's an argument). By exploring all the ways we can think about a problem that science can't yet tackle, we get to understand what kind of thing we're talking about and how it might relate to our other ideas about how reality works.

But god, no, it doesn't get you anywhere!

Post edited at 21:33
 Jon Stewart 19 Jun 2022
In reply to Richard J:

Thanks, that sounds right up my street. I find I can only read books on consciousness if I agree with them. I nearly threw Daniel Dennett's book out of the window.

 Jon Stewart 19 Jun 2022
In reply to john arran:

> My question would be: why are we presuming that consciousness is in any way 'special' at all?

It's special because it has a different way of existing to everything else we know about in the universe. Everything else can be seen by third person observation, but consciousness only exists for the subject. For this reason, we've failed to explain it by reducing it to component bits. Even the great mystery of 'what is life?' eventually succumbed to reductive explanation, but it ain't happening for consciousness.

> Presumably just because it 'feels' so. And given that consciousness appears to be our primary way of accessing the underlying workings of our brain, and therefore of the world around us too, it's perhaps understandable that we afford it high prominence.

Exactly this. Consciousness is our only way of accessing anything at all. All of reality, for me, exists only in my consciousness. Indeed the objective reality out there is nothing like what I experience in here: my brain constructs a representation of a thin slice of reality that includes only the bits of the outside world that are useful to my survival, and represents them in an extraordinary way so I'm hearing pressure waves in what I call the air, and seeing the surface reflectance properties of what I consider to be objects as colours. Our conscious experience is the whole of our reality, and yet is nothing like the objective world out there.

It really can't be overstated just how important and special consciousness is.

> But if, as I postulated earlier, consciousness turns out simply to be an evolved way to improve the efficiency of our brain's response times, perhaps computational entities will end up following a different route to an optimal survival strategy and will outperform humans while doing away with consciousness altogether.

Sure, they could do. But we'd be pretty dumb to accidentally program robots to care about nothing except their own survival. Please no one do this.

Post edited at 22:13
 Marek 19 Jun 2022
In reply to Jon Stewart:

> We really can put it into words, here's the original text from Thomas Nagel's 'What Is It Like To Be A Bat?':

> Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism.

> I find it perfectly clear what is being said here.

I kept re-reading that quote - yes, it's clear, but a clear statement of opinion. I couldn't help thinking that most of the statements could have been more honestly prefaced with "We like to think that (but have no evidence to the effect) ...". He says himself that "... it is very difficult to say in general what provides evidence of it" - in effect acknowledging that there really isn't much (if any). Don't get me wrong, I'm not saying that he's wrong. I'm saying that he's not justified in stating evidence-free opinion (or heartfelt wishes about what the world should look like) as facts.

I guess I have an unusual dislike for people confusing opinions with verifiable facts - I spent too much of my working days getting marketing people to understand that distinction. The scars are still there.

In reply to Jon Stewart:

Thinking on the ‘what it’s like to be something’ definition of consciousness, is this distinguished from meta cognitive process in humans and other animals e.g. reflective function, ‘mind reading’, empathic processes etc, which I imagine are significantly less in small animals like* bats or hamsters  

Post edited at 22:23
In reply to Richard J:

Thanks, what I understood by the double slit experiment was that the particles behaved differently when being observed, indicating that there was a reciprocal observation and decision making process underlying this behaviour? The observer is the observed 

 top cat 19 Jun 2022
In reply to schrodingers_dog:

I really really believe that you guys should get out and climb a whole lot more and think a whole lot less.......

In reply to top cat:

If only! 😭

 Jon Stewart 19 Jun 2022
In reply to schrodingers_dog:

> Thinking on the ‘what it’s like to be something’ definition of consciousness, is this distinguished from meta cognitive process in humans and other animals e.g. reflective function, ‘mind reading’, empathic processes etc, which I imagine are significantly less in small animals like* bats or hamsters  

Yes. By this definition, which seems exactly right to me, there's no self reflection or anything like that required, they're add-ons to consciousness. All that's required is some form of raw in-the-moment experience. Often, we're trying to escape all these complex reflective, internally focused aspects of our consciousness to try to get a taste of that raw in-the-moment experience (e.g. flow state achieved sometimes in climbing). But we're not trying to knock ourselves unconscious!

 Marek 20 Jun 2022
In reply to schrodingers_dog:

> Thanks, what I understood by the double slit experiment was that the particles behaved differently when being observed...

No! That's a common mistake made by people who only read/write pop-science rather than doing the maths. QM says *nothing* about what particles do. All it does is predict the result of a specific experiment: If you do X and measure Y you'll probably get Z. Nothing more. The bit about 'observed particles behave differently' is just some handwaving attempt to add a vague conceptual (non-mathematical) layer to QM that is really nothing to do with QM. As Niels Bohr said: "Shut up and do the maths!" The fact that our intuition struggles with (a) the need to add such a conceptual layer and (b) fails to do so, is more a reflection on our own limitations than anything to do with the truth or usefulness of QM. Only mental anguish lies down that path!

In reply to Marek:

The mental anguish is what I’m interested in, does it arise due to the ‘hidden variables’? 
 

The unexplained state of things prior to measurement? 

 Marek 20 Jun 2022
In reply to schrodingers_dog:

> The mental anguish is what I’m interested in, does it arise due to the ‘hidden variables’? 

The hidden variable idea was disproven (as a possible interpretation) quite a few years back. Look up Bell's Inequality and the experiments based on it.
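If you want to see what the test actually looks like numerically: the CHSH form of Bell's inequality bounds a certain combination of correlations at 2 for any local hidden-variable theory, while the QM prediction for a spin singlet, E(a, b) = -cos(a - b), exceeds it. A toy calculation (angles are just the standard textbook choice):

```python
import math

# CHSH version of Bell's inequality: any local hidden-variable theory
# predicts |S| <= 2, but the QM singlet correlation E(a, b) = -cos(a - b)
# violates that bound at suitably chosen detector angles.
def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, beyond the classical limit of 2
```

The experiments based on this are what killed local hidden variables: measured correlations come out on the QM side of the bound.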

> The unexplained state of things prior to measurement? 

More precisely, the attempt to visualise the 'state of things' (for instance a wave function) as having real physical meaning.

Post edited at 08:23
 Richard J 20 Jun 2022
In reply to schrodingers_dog:

> Thanks, what I understood by the double slit experiment was that the particles behaved differently when being observed, indicating that there was a reciprocal observation and decision making process underlying this behaviour? The observer is the observed 

I think you're conflating two slightly different things here.  In the problem of measurement, the influence is one way - the act of measurement forces the system to take up one definite value, where before it existed in a superposition of different possibilities.  But in this problem, we assume by definition that the thing doing the measuring is a classical object, that isn't itself affected by the process.  

You might be thinking of the problem of entanglement, which arises when you make a measurement on part of a system which consists of two or more spatially separated particles that have been created in an "entangled" state.  So if you've got a pair of socks, and one got left behind in the launderette, if you look at the one you took home and see that it's a left one, you know the one in the launderette is a right sock.  For a quantum pair of socks, they'd only be forced into deciding whether they are left or right when you make the measurement, so it's the act of looking at the sock at home and finding out that it is a right sock, that would force the sock in the launderette to be left.  This looks like action at a distance.  You might be tempted to wonder whether this permits transmission of information faster than the speed of light - it doesn't, but it does allow you to transmit information with unbreakable codes, hence huge amounts of research money going in from the NSA and their Chinese counterparts.  But fundamentally it does emphasise that in quantum mechanics the world can't really be thought of as being made up of independent particles.
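The "no faster-than-light signalling" point can be checked directly from the standard textbook singlet probabilities: the joint statistics depend on both detector angles, but Bob's marginal never depends on what Alice chooses to measure. A toy check:

```python
import math

# Singlet-state outcome probabilities for spin measurements at angles a, b:
# P(same outcome) = sin^2((a-b)/2), P(opposite) = cos^2((a-b)/2),
# split evenly between the two ways each can happen.
def joint_probs(a, b):
    same = 0.5 * math.sin((a - b) / 2) ** 2   # P(++) = P(--)
    diff = 0.5 * math.cos((a - b) / 2) ** 2   # P(+-) = P(-+)
    return {(+1, +1): same, (-1, -1): same, (+1, -1): diff, (-1, +1): diff}

# Whatever angle Alice picks, Bob's marginal probability of "+" stays 50/50,
# so he learns nothing about her choice from his own statistics.
for alice_angle in (0.0, math.pi / 3, math.pi / 2):
    p = joint_probs(alice_angle, math.pi / 4)
    bob_plus = p[(+1, +1)] + p[(-1, +1)]
    print(round(bob_plus, 6))  # 0.5 every time
```

That's why the correlations, spooky as they look, can't be used to send a message.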

I think it's fair enough to wonder, for a theory that is so empirically successful as quantum mechanics, what the meaning behind the calculations is.  If quantum mechanics really is a description of the way the world fundamentally is, then it does describe a very weird universe and there is a lot of work to understand how the world as it appears to us emerges from that.  On the other hand, some people think that qm tells us not about the universe as it actually is, but what it is possible to know about it - i.e. it's about epistemology rather than ontology.  

 Marek 20 Jun 2022
In reply to Richard J:

I think you're barking up the wrong tree with the sock analogy since it'll only point people down the 'hidden variables' path which leads nowhere useful.

> ... in quantum mechanics the world can't really be thought of as being made up of independent particles.

Or indeed of particles, waves or anything else similar. So far every attempt to visualise some 'reality' that underpins QM has been shown to lack self-consistency. Perhaps someday someone will come up with something which hangs together (and encompasses the GR domain), but I don't think it'll be based on the abstractions we use today.

In reply to Marek:

What about the notion of idealism, that there is an undefinable substrate of 'consciousness' from which we emerge and return to? e.g. partially during a near death experience? 

I wonder about the idea of communion with nature (in climbing in this instance) becoming more broadly aware of the implicit which would be the substrate of being?

I don't know how this would fit in with QM - the wave that carries the particles as someone described it? 

 Marek 20 Jun 2022
In reply to schrodingers_dog:

> What about the notion of idealism, that there is an undefinable substrate of 'consciousness' from which we emerge and return to? e.g. partially during a near death experience? 

You need to ask a philosopher, not a physicist.

> I wonder about the idea of communion with nature (in climbing in this instance) becoming more broadly aware of the implicit which would be the substrate of being?

You need to ask a hippy.

> I don't know how this would fit in with QM - the wave that carries the particles as someone described it? 

Described it incorrectly. There is no 'wave' that carries 'particles'. Just because a mathematical structure in QM was unfortunately labelled a 'wave function' doesn't mean it makes any sense to think of it as like a wave on the sea. And even classical waves don't move things other than local, cyclical 'bobbing'.

