Russell Brand - Grifter, Guru or something else?


It's hard not to notice the increasing number of people relying on Russell for alternative insights and opinions. I couldn't stand him back in the day, and that opinion hasn't shifted much, apart from him seeming a bit of a better person these days. I recently listened to one of his rants about Biden meeting with the Saudis and the associated hypocrisy, and saw he has nearly 6 million subscribers on YouTube! What on earth is going on?

5
 MG 09 Jun 2022
In reply to schrodingers_dog:

Cynical and self-regarding. Did a lot of damage to politics with the self-fulfilling "they are all rogues", "don't vote" schtick 

3
 ThunderCat 09 Jun 2022
In reply to schrodingers_dog:

He definitely gets a few brownie points from me with his interview of two guys from the Westboro Baptist Church.

Totally civil and welcoming to them, whilst at the same time hilariously showing them to be the vile bigots that they are

youtube.com/watch?v=OBA6qlHW8po&

 Iamgregp 09 Jun 2022
In reply to schrodingers_dog:

Can't stand him.  His heart is in the right place but he's nowhere near as clever as he thinks he is.  Is quite happy to make meaningless statements such as "The country shouldn't be run by Politicians but by we the people"* but has no actual input as to what this would look like or how we would go about it.

He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following but he doesn't actually contribute anything at all.  It's just boring and predictable ranting.  And he's a tw@t. 

As for his (almost) 6m followers?  The biggest podcast on earth is the Joe Rogan Experience.  There's no shortage of appetite for bullshit on this earth! 

*FWIW I agree, we the people should rule the country.  But rather than getting 65 million of us in a room, we'll democratically pick just a few hundred of us to represent each area, and then every few years we'll vote... Oh... hang on a minute. 

7
 Forest Dump 09 Jun 2022
In reply to Iamgregp:

We should have a 3rd house: a randomly selected citizens' assembly, one for each current MP seat

4
 Bottom Clinger 09 Jun 2022
In reply to ThunderCat:

> He definitely gets a few brownie points from me 

Aye, at least two by the looks of it

 plyometrics 09 Jun 2022
In reply to schrodingers_dog:

Having hewn a career by pretending to be a guru, whilst looking like a grifter, suspect he fits into your “something else” category.

In reply to Forest Dump:

Forrest Dump, brilliant stuff... reminds me of the self-help book based on the wisdom of Forrest; it was called 'Gumpisms'

In reply to Iamgregp:

After all we already live in a democracy... er hang on a minute 

 john arran 09 Jun 2022
In reply to schrodingers_dog:

I subscribed to his podcast for a (short) while. Occasional moments of interest swamped by long interludes of happy-clappy wishful thinking and woo.

I had some respect for many of his utterings when he was threatening to be a political player, but now that he appears to have settled into a niche of populist feel-good verbal diarrhoea, little of that respect remains.

1
In reply to Iamgregp:

> And he's a tw@t. 

This.

10
 mike123 09 Jun 2022
In reply to Iamgregp:

“He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following but he doesn't actually contribute anything at all.  It's just boring and predictable ranting.  And he's a tw@t”

It’s very weird when somebody on here says almost word for word what you are thinking. Thanks

1
 broken spectre 09 Jun 2022
In reply to schrodingers_dog:

He's better than Russell Crowe who in turn is better than Russell Grant.

5
 ExiledScot 09 Jun 2022
In reply to schrodingers_dog:

He's clever: a self-made millionaire from selling books that tell everyone how evil capitalism is. 

2
In reply to mike123:

> an undeniable charisma

Not convinced by this. Unless 'charisma' means something different to what I think it means.

2
In reply to schrodingers_dog:

A bit surprised by the amount of negativity towards him. Agreed he is probably unreasonably idealistic and doesn't necessarily come up with realistic solutions to problems of society, but it seems to me his heart is in the right place.

His programme on drug dependency, policy and recovery (several years ago now) talked a lot more sense than 99% of politicians do.

5
 Ridge 09 Jun 2022
In reply to john arran:

> I subscribed to his podcast for a (short) while. Occasional moments of interest swamped by long interludes of happy-clappy wishful thinking and woo.

Parklife!

youtube.com/watch?v=8B_Evl1fiLA&

Post edited at 22:00
In reply to mountain.martin:

I agree, he’s certainly been part of positively transforming a few lives that I’m aware of. I was stuck with Radio 4 this morning for the first time in years. Listened to this woman leading on social mobility - Jesus Christ, she was in cloud cuckoo land. Her slot was after the usual worm tongues of politics, obvs 

3
 mondite 09 Jun 2022
In reply to captain paranoia:

> Not convinced by this. Unless 'charisma' means something different to what I think it means.

Charisma doesn't have to work on everyone. He does seem to have it, though, for quite a few people.

1
 broken spectre 09 Jun 2022
In reply to schrodingers_dog:

Inverting the anti-Brand sentiment for a moment, it could be argued that the general populace, especially in Britain, is so terrified of standing out that when someone slightly eccentric comes along everyone loses their shit.

In other words, taking exception to a character could say more about you than about them??

18
 Iamgregp 10 Jun 2022
In reply to broken spectre:

I'm not sure that really could be argued, given our current prime minister....

We adore eccentricity in this country, and populist grifters like Brand and Johnson have used this very much to their advantage, using their contrived eccentricity to mask their lack of substance.   

1
 ExiledScot 10 Jun 2022
In reply to broken spectre:

> Inverting the anti-Brand sentiment for a moment, it could be argued that the general populace, especially in Britain, is so terrified of standing out that when someone slightly eccentric comes along everyone loses their shit.

> In other words, taking exception to a character could say more about you than about them??

Or maybe we can spot a charlatan when we see one? 

I'd dislike him purely for prank calling Andrew Sachs, never mind his hypocrisy on anything else just to make money. The guy has no morals, despite preaching otherwise. 

3
 broken spectre 10 Jun 2022
In reply to ExiledScot:

> Or maybe we can spot a charlatan when we see one? I'd dislike him purely for prank calling Andrew Sachs,

I'd forgotten about that

> never mind his hypocrisy on anything else just to make money.

Valid.

> The guy has no morals, despite preaching otherwise. 

An over-reaction! I think he's an outlier with valid insights, although this can and does go to his head.

Edit: He could benefit from a haircut.

Post edited at 11:19
2
In reply to Iamgregp:

> using their contrived eccentricity to mask their lack of substance

Yup; Johnson and the deliberate dishevelled hair shtick; JRM and his antediluvian image (whilst pursuing entirely 21st century financial instruments).

 montyjohn 10 Jun 2022
In reply to ExiledScot:

> He's clever: a self-made millionaire from selling books that tell everyone how evil capitalism is. 

I do love the irony of this comment. Bravo.

He makes my skin crawl and I'm not entirely sure why? Maybe he said something a long time ago and I've forgotten the "thing" he said, but remember how it made me feel about him.

Probably time to get over it and form a new opinion based on what he's saying now, but that skin crawling feeling isn't a conscious response. Something to work on I guess.

2
 Tyler 10 Jun 2022
In reply to captain paranoia:

> > an undeniable charisma

> Not convinced by this. Unless 'charisma' means something different to what I think it means.

It does, you’re thinking charisma means someone you personally like rather than someone who’s personality appeals to many (but possibly not you)

 magma 10 Jun 2022
In reply to schrodingers_dog:

Check out his interviews with Jordan Peterson - lots of philosophical musings that should interest you: youtube.com/watch?v=kL61yQgdWeM&

2
 Dave Garnett 10 Jun 2022
In reply to schrodingers_dog:

> It's hard not to notice the increasing number of people relying of Russel for alternative insights and opinions,

Really?  I haven't noticed him for ages.

 ExiledScot 10 Jun 2022
In reply to montyjohn:

> He makes my skin crawl and I'm not entirely sure why? 

6th sense, instinct...

> Probably time to get over it and form a new opinion based on what he's saying now, but that skin crawling feeling isn't a conscious response. Something to work on I guess.

No, stick with your first decision.

4
 TobyA 10 Jun 2022
In reply to Dave Garnett:

> Really?  I haven't noticed him for ages.

Yep my thoughts exactly! "Oh him... is he still about". 

On a very vaguely climbing related note - I do remember thinking when Alex Honnold started getting super enthusiastic and maybe even a bit preachy about environmental sustainability that it was a bit like when Brand suddenly started lecturing everyone on socialism - two obviously very bright chaps who hadn't gone to uni and had 3 or 4 years to think about (and probably argue drunkenly about) that stuff in their late teens/early 20s, like many of us did. The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin. 

 ExiledScot 10 Jun 2022
In reply to TobyA:

Joe Wicks reminds me more of Brand. He's finished getting himself and others injured by telling them to do high intensity exercise without warming up; now he's trying to mess with their heads. 

Post edited at 13:00
3
 Ridge 10 Jun 2022
In reply to magma:

> Check out his interviews with Jordan Peterson - lots of philosophical musings that should interest you: youtube.com/watch?v=kL61yQgdWeM&

Couldn't they just fight to the death with blunt instruments? I'd pay to watch that.

 TobyA 10 Jun 2022
In reply to ExiledScot:

> Joe Wicks reminds me more of Brand.

Isn't that just the (lack of) hair cut?

> He's finished getting himself and others injured telling them to do high intensity exercise without warming up, now he's trying to mess with their heads. 

Lots of the kids I teach seemed to enjoy his lockdown videos. Not heard of injuries. I thought he was all wholesome and "my nan loves him"? Sort of well on his way to being a "national treasure"?

 mondite 10 Jun 2022
In reply to Ridge:

> Couldn't they just fight to the death with blunt instruments?

Be better with grenades or possibly maces tipped with undiluted nitroglycerin.

Otherwise there's too much of a chance of one of them winning relatively unharmed.

In reply to TobyA:

> The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin. 

What’s the difference? 

1
 timjones 10 Jun 2022
In reply to Iamgregp:

> He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following but he doesn't actually contribute anything at all.  It's just boring and predictable ranting.  And he's a tw@t. 

OK that's enough about our prime minister, now tell us what you think about Brand

 magma 10 Jun 2022
In reply to mondite:

> Be better with grenades or possibly maces tipped with undiluted nitroglycerin.

> Otherwise there's too much of a chance of one of them winning relatively unharmed.

You don't have much time for them then? (Why?) Or are you suggesting they have opposing views? (In which case, listen to the link.)

Post edited at 14:31
 TobyA 10 Jun 2022
In reply to schrodingers_dog:

> The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin. 

> What’s the difference? 

Taking heroin, as long as it hasn't been cut with rubbish, is probably safer!

In reply to schrodingers_dog:

I'd be prepared to forgive him a lot for his famous observation that 'when I was poor and complained about inequality they called me bitter. Now that I'm rich and complain about inequality they call me a hypocrite. I'm beginning to think they just don't want to talk about inequality'.

jcm

1
In reply to TobyA:

Free Solo, reality TV’s answer to Trainspotting 

In reply to Iamgregp:

> Can't stand him.  His heart is in the right place but he's nowhere near as clever as he thinks he is.

This is what gets me. He pretends to be clever by speaking like a man who's swallowed a thesaurus and people seem to be taken in by it 🤷🏻‍♂️

3
 broken spectre 10 Jun 2022
In reply to Wide_Mouth_Frog:

> This is what gets me. He pretends to be clever by speaking like a man who's swallowed a thesaurus and people seem to be taken in by it 🤷🏻‍♂️

Will Self does this as well. I find it funny.

Edit: Will Self is undeniably clever.

Post edited at 18:05
In reply to broken spectre:

Will Self does it in a self-deprecating way, which is very funny. Russell Brand doesn't. Courses for horses I guess 

 Ridge 10 Jun 2022
In reply to broken spectre:

Beat me to it with your edit!

In reply to broken spectre:

Do they have heroin in common? 

 broken spectre 10 Jun 2022
In reply to schrodingers_dog:

I've heard that. Rebuilding your brain and how you think after an addiction like that could lead to being verbose, as you'd be an adult putting your mind back together whilst many of us cruise through life with comparative ease (sweeping statement, may not apply!). Self did it. Brand with less style (Booky Wook 🤢).

Post edited at 20:27
 jamesg85 10 Jun 2022
In reply to schrodingers_dog:

Not a fan of people who promote the 12 step approach to quitting drinking, which tells people they're powerless over alcohol and that they have a disease, which is a complete lie. Great, he got sober from drugs and alcohol, but the whole guru schtick is incredibly grating. 

3
In reply to jamesg85:

> Not a fan of people who promote the 12 step approach 

Why not? My understanding is it is proven to be successful and has got thousands of people off drink or drugs and kept them off.

Of course it shouldn't be sold as the only cure, it won't work for everyone and there are other approaches.

 jamesg85 11 Jun 2022
In reply to mountain.martin:

The success rate isn't actually great. It's based on calling alcoholism a "disease", which it isn't, and it tells people that they're powerless over alcohol. I was in AA for years and it was incredibly toxic and shaming. Sorry, this is just something I feel strongly about. They actively tell people AA is the only way and that other ways are softer approaches that don't work. There is actually a 'deprogramming from AA' group on Facebook for people who have been really badly affected by its teachings. 

I much prefer the Freedom Model and this is a quote from them:

"Myth: Addiction is a disease that requires lifelong treatment to manage - Addiction is a strong preference for substance use, not a disease or genetic defect. Science hasn't found "the gene for addiction". The disease theory of addiction is not supported by current research. As it isn't a disease, it requires no treatment or cure. In fact, the rates of addiction, overdose, and alcohol and drug-related deaths went up with the adoption of disease-based theories of addiction as they promote helplessness and hopelessness rather than solving a problem."

In reply to jamesg85:

Thanks for the insight. I wasn't aware that they tell people theirs is the only way.

That's definitely a red flag.

In reply to mountain.martin:

Far too much god-bothering nonsense, too; 7 of 12 steps.

https://en.wikipedia.org/wiki/Twelve-step_program

Post edited at 14:15
1
In reply to jamesg85:

What did you think to ‘In the Realm of Hungry Ghosts’ and all that Gabor Maté stuff? 

 jamesg85 11 Jun 2022
In reply to schrodingers_dog:

I think the article I have linked below is a pretty well-balanced criticism of Gabor Maté and resonates with my own beliefs about his work. Gabor Maté argues that trauma is a cause of addiction, whereas in reality it can be a reason for a preference for substance use, if you believe that substances have the power to relieve trauma in some way. If you throw a lit match on a patch of oil it goes up in flames every time. That's a predictable result. However, trauma does not cause addiction; many people undergo trauma, as have many of us, without therefore being caused to use substances in a problematic way. Everyone has a mind of their own and an ability to choose their preferences. 

https://www.psychologytoday.com/gb/blog/addiction-in-society/201112/the-sed...
 

Removed User 11 Jun 2022
In reply to schrodingers_dog:

Have to admit I originally couldn’t stand the guy even though he’s a local. Watched an interview with him a few years ago and I kind of found it hard to switch him off. He might look like he’s just walked off the streets, but he’s a very smart guy with some alternative ideas and views, and that alone is worthy of a listen, even though I don’t agree with most of it.

In reply to jamesg85:

Thanks for the article. I find Gabor Maté a bit too self-referential; everything comes back to him and is ultimately about him, which gets my goat. While not all with attachment trauma have addictions, do all with addictions have attachment trauma? I think a large percentage do, but like the article said, things are a bit more fluid than that. 

Bringing two threads together this didn’t arf crack me up. I nearly spat me porridge out!

youtube.com/watch?v=X3-KK3CcZCc&

 john arran 16 Jun 2022
In reply to schrodingers_dog:

About 20 seconds was all I could take.

In reply to john arran:

What happened at 20 seconds? 

 magma 16 Jun 2022
 ExiledScot 16 Jun 2022
In reply to magma:

I'd search for something lighter: Troy Hawke from the Greeters Guild. I did despair at the B&Q staff. 

 john arran 16 Jun 2022
In reply to magma:

Much better. He doesn't get across why he feels so certain that computation is insufficient for consciousness, though. Maybe I missed something by not concentrating terribly well, or perhaps he is much clearer in his books, but from that interview it seemed his opinion that 'there must be something more' was verging on axiomatic.

 magma 16 Jun 2022
In reply to john arran:

I got the same feeling...

I prefer Bohm and the implicate order...

youtube.com/watch?v=6Kd5VFJyvrE&

 FactorXXX 16 Jun 2022
In reply to schrodingers_dog:

>  I nearly spat me porridge out!

Blimey, you really do like Russell Brand, don't you...

In reply to FactorXXX:

Oh yes absolutely, I want to marry him and have lots of little Russells 

In reply to magma:

Thanks for this! 

 Jon Stewart 16 Jun 2022
In reply to john arran:

> Much better. He doesn't get across why he feels so certain that computation is insufficient for consciousness, though. Maybe I missed something by not concentrating terribly well, or perhaps he is much clearer in his books, but from that interview it seemed his opinion that 'there must be something more' was verging on axiomatic.

Penrose's argument that consciousness is not a computation is set out as a philosophical proof in his book Shadows of the Mind. He takes a specific type of mathematical problem (something about tractability of something... it's 20-odd years since I read it so there's no hope of me articulating it in any meaningful way, sorry) and shows that it can be solved analytically by human cognition, but cannot, in principle, be solved by computation.

I'm sure it's pretty satisfying if you understand it.
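
For what it's worth, the machinery Penrose leans on there is usually glossed as Gödel/Turing-style diagonalisation. A minimal sketch of the Turing version (the function names are mine, and `halts` is purely hypothetical - the point is precisely that it can't exist):

```python
def halts(program, argument) -> bool:
    """Hypothetical perfect oracle: True iff program(argument) halts."""
    raise NotImplementedError("no general halting oracle can be written")

def paradox(program):
    # Ask the oracle about paradox(paradox), then do the opposite:
    # loop forever if it says "halts", halt at once if it says "loops".
    # Whatever the oracle answers about paradox(paradox) is wrong,
    # so the assumed oracle cannot exist: computation has hard limits.
    if halts(program, program):
        while True:
            pass
```

Penrose's further move is to claim that human mathematicians can 'see' the truth of statements like this that no formal system can prove, so minds can't be purely computational - a step most logicians and philosophers dispute.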

John Searle's much simpler Chinese Room argument is sufficient to convince me that computation is insufficient for consciousness.

My favourite author on this stuff is Anil Seth, who's aligned pretty much with Searle philosophically, but is actually getting stuck into the neuroscience of consciousness. He's a realist with respect to consciousness, not one of those awful hardcore (eliminative) physicalists who deny consciousness exists (and then, to be even more annoying, deny that they're denying it). He sees it as a biological phenomenon that can hopefully be explained by understanding what the brain is up to in terms of its predictive processing (looking both out at the world and inwards at inner bodily states), motivated by the evolved drive to stay alive. His book Being You is the best thing I've read on consciousness by a country mile. Very readable and down to earth, an absolute must for anyone interested in consciousness but not keen on woo woo or abstruse philosophical waffle.

https://www.youtube.com/watch?v=qXcH26M7PQM&ab_channel=TheRoyalInstitut...
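
A toy illustration of that predictive-processing idea (every name and number here is invented for the sketch): the system never touches the hidden state directly; it just keeps revising an internal guess to cancel the error between what it predicted and what it sensed.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 37.0        # the world's actual state (think: core body temperature)
belief = 35.0        # the system's internal model of that state
rate = 0.1           # how strongly prediction errors revise the model

for _ in range(50):
    sensed = hidden + rng.normal(scale=0.2)  # noisy signal from outside
    error = sensed - belief                  # prediction error
    belief += rate * error                   # update the model, not the world

print(round(belief, 2))  # the guess settles near 37: perception as inference
```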

Post edited at 23:01
In reply to Jon Stewart:

> He takes a specific type of mathematical problem (something about tractability of something

Possibly the Entscheidungsproblem? Turing machines and all that. I recall discussing this subject in a UKC thread about 20 years ago. I don't recall Penrose being mentioned.

[edit] I was thinking of 'A Silicon Brain' (which obviously spawned 'A Silicone Brian') in 2002, but it was actually 'Do we have freewill' in 2007 where the incompleteness theorem came into play.

Post edited at 01:14
 john arran 17 Jun 2022
In reply to Jon Stewart:

Thank you for the link, Jon. Just spent a wakeful hour watching that in the middle of the night. Plenty of food for thought and plenty of new (to me) ideas to absorb.

Still not entirely convinced that the various attributes or expressions of consciousness couldn't simply be emergent characteristics of computational intelligence, but it's a fascinating field of study.

 magma 17 Jun 2022
In reply to john arran:

Halfway through Seth - not much insight so far - rather axiomatic

 Jon Stewart 17 Jun 2022
In reply to captain paranoia:

> Possibly the Entscheidungsproblem? Turing machines and all that. 

Something to do with Turing machines and/or incompleteness. Or something else entirely!

 john arran 17 Jun 2022
In reply to Jon Stewart:

I'm wondering whether the whole Turing Machine / Chinese Room obstacle to conscious AI may have lost relevance in a world where computation is no longer deterministic and no longer based on a paradigm of same inputs -> same outputs. If you ask a modern AI the same question one week apart you may well get a completely different answer because it is constantly adapting to its environment. Indeed identically seeded NNs will quickly diverge as a result of subsequent behaviour, leading to a situation where the non-Chinese speaker in the room would not have any known algorithmic method of determining his responses, even if he knows every line of his computer's original coding.
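
Something like this toy sketch, for what it's worth (all of it invented for illustration): two 'networks' share a seed and every line of code, yet once their streams of experience differ, their weights - and so their future answers - come apart.

```python
import numpy as np

def make_weights(seed: int = 0) -> np.ndarray:
    # identically seeded twins: same code, same starting weights
    return np.random.default_rng(seed).normal(size=4)

def lms_step(w, x, target, lr=0.05):
    # one least-mean-squares update: nudge the weights to shrink the error
    return w - lr * (w @ x - target) * x

w_a, w_b = make_weights(), make_weights()
env_a, env_b = np.random.default_rng(1), np.random.default_rng(2)

for _ in range(1000):
    xa, xb = env_a.normal(size=4), env_b.normal(size=4)  # different experiences
    w_a = lms_step(w_a, xa, xa.sum())        # environment A rewards one pattern
    w_b = lms_step(w_b, xb, xb[0] - xb[1])   # environment B rewards another

print(np.linalg.norm(w_a - w_b))  # clearly nonzero: same code, divergent histories
```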

 john arran 17 Jun 2022
In reply to john arran:

I'm thinking this discussion would probably be more at home in the AI thread, but hey ho!

As chance would have it, although it did strike me as a bit spooky, after watching a fine youtube video about the discovery of subatomic particles, the following video appeared below, enticing me to click:

youtube.com/watch?v=TTFoJQSd48c&

It's a fairly recent talk by Daniel Dennett entitled "If Brains are Computers, Who Designs the Software?" and it describes how computational qualities can emerge that exceed the intention or vision of any or all of its designers. Curiously, and pretty much incidentally, he also dismisses Roger Penrose's assertions en route to demonstrating his own argument.
I'm still to be convinced that there's any magic involved in consciousness that computers couldn't one day possess given the freedom to self-organise.

 Jon Stewart 17 Jun 2022
In reply to john arran:

> I'm wondering whether the whole Turing Machine / Chinese Room obstacle to conscious AI may have lost relevance in a world where computation is no longer deterministic and no longer based on a paradigm of same inputs -> same outputs. If you ask a modern AI the same question one week apart you may well get a completely different answer because it is constantly adapting to its environment. Indeed identically seeded NNs will quickly diverge as a result of subsequent behaviour, leading to a situation where the non-Chinese speaker in the room would not have any known algorithmic method of determining his responses, even if he knows every line of his computer's original coding.

I've just listened back to John Searle explaining the Chinese Room himself (here youtube.com/watch?v=tt-JzB50sJE& ) and I don't think it makes any difference whether the computer is implementing a deep learning/neural network thing or not. It's still operating purely syntactically, ultimately still chucking out outputs in response to inputs according to deterministic rules. Sure it's more complex if the inputs include vast quantities of natural language data getting circulated around NNs, and it's much harder to imagine that a person could implement it by shuffling papers and looking up instructions in logbooks, but this additional complexity doesn't provide any way of injecting the semantics.

There's no way that symbols that appear on a monitor can be about anything as far as any AI is concerned - in any computer, deep learning or not, what's going on is that symbols that mean something to us are being manipulated according to rules that have been developed by us so that the outputs mean something to us. Take conscious human observers out of the picture and two AIs playing chess or having a conversation as a result of NNs are not playing chess or having a conversation; they're merely making certain bits of screens lighter or darker than other bits of screens. The semantic content is all held by us conscious humans.
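
The point can be made with something as dumb as this sketch (the rulebook entries are invented): the program 'converses' in Chinese in exactly the sense Searle's room does, and nothing in it understands anything. Scaling the rulebook up into a trained network adds complexity, not semantics.

```python
# A purely syntactic "room": shapes in, rules applied, shapes out.
RULEBOOK = {
    "你好": "你好！",             # to us: a greeting and its reply...
    "你会说中文吗？": "会一点。",   # ...to the dict: meaningless shapes
}

def room(symbols: str) -> str:
    # The operator matches incoming squiggles against the rulebook and
    # hands back the listed squiggles. All the meaning lives with the
    # observers outside the room; none of it lives in here.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room("你好"))  # the room "answers" without understanding a word
```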

This is in line with what Anil Seth is getting at in Being You: it's the inescapable motivation to maintain our existence that's built into living things that results in consciousness (not that consciousness has to accompany life). Evolution has come up with a very specific, purposeful way of processing information about our internal state so that we can maintain it, and about the external world so we can act upon it (in order to maintain our internal state). There isn't anything about any deep learning system that has any reason to be conscious, and there's nothing about any computational mechanism, no matter how intelligent it appears to an observer, that mirrors the processes that go on in the biological things we know are conscious.

I did like the conversation with LaMDA but it definitely didn't convince me that we were any further towards making conscious AI, thank god.

 john arran 17 Jun 2022
In reply to Jon Stewart:

It's the bit about "chucking out outputs in response to inputs according to deterministic rules" that I take issue with. That's a very outdated view of computer behaviour. Nowadays you'd often be hard pressed to even explain how an AI arrived at a particular outcome, let alone be able to reproduce that outcome yourself simply by having access to the underlying code. If you could, then there's a fair argument to claim that, in doing so, you've actually learnt Chinese!

Another line in your response jumped out at me too: "Evolution has come up with a very specific, purposeful way of processing information". I find it hard to see how evolution could be said to have a purpose, which to my mind suggests a priori reasoning, when all we apparently can ever know about it is a posteriori observation.

 Jon Stewart 17 Jun 2022
In reply to john arran:

> It's the bit about "chucking out outputs in response to inputs according to deterministic rules" that I take issue with. That's a very outdated view of computer behaviour. Nowadays you'd often be hard pressed to even explain how an AI arrived at a particular outcome

Seems like we mean different things by "deterministic". A neural network, as much as I understand what one does, does something like this: it takes inputs, processes them according to rules, comes up with outcomes, compares the outcomes to another set of data, then adjusts the rules by which the next set of inputs will be processed to reach a new set of outputs, and so on.

All this is entirely deterministic, even if its outputs are unpredictable. It's still following rules, but rather than something we can easily conceive of as a single set of instructions, it's now an iterative process taking in huge amounts of inputs which change the rules over time. Complex and unpredictable, yes. But no longer deterministic? I don't see how. For something to avoid being deterministic it must be either random or magic, and NNs are neither. I guess a NN might be more like a chaotic system than a predictable system like billiard balls or whatever?
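
A seeded toy of that loop (a linear 'network' rather than a deep one, but the shape of the process is the same): the rules get revised on every pass and nobody wrote the final weights down anywhere, yet rerunning the script reproduces them exactly.

```python
import numpy as np

rng = np.random.default_rng(42)      # fixing the seed fixes the whole history
w = rng.normal(size=2)               # the initial "rules" (weights)
X = rng.normal(size=(100, 2))        # inputs
y = X @ np.array([3.0, -1.0])        # the data the outcomes get compared against

for _ in range(500):
    error = X @ w - y                        # compare outcomes to the data...
    w -= 0.1 * (2 * X.T @ error) / len(X)    # ...and adjust the rules

print(w)  # ~[3, -1], identical on every rerun: unpredictable-to-us ≠ non-deterministic
```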

Either way, I don't see either determinism/unpredictability or complexity as being make-or-break for consciousness.

> Another line in your response jumped out at me too: "Evolution has come up with a very specific, purposeful way of processing information". I find it hard to see how evolution could be said to have a purpose, which to my mind suggests a priori reasoning, when all we apparently can ever know about it is a posteriori observation.

I agree evolution doesn't have a purpose, it just is. But living things exist in the world with the purpose of reproducing their genes, and a nervous system that processes information in a specific way that generates consciousness is a pretty snazzy means by which some of them achieve this.

Post edited at 22:47
 john arran 17 Jun 2022
In reply to Jon Stewart:

My use of the word 'deterministic' may not be strictly conventional, as I'm not trained in the language of philosophy. My point was that there are no rules which get you from inputs to output without also needing to know the various weightings associated with the current state of the machine, and they themselves may not necessarily be open to inspection. Hence my point about the only reliable way to reproduce the result manually being to also reproduce the entire machine learning history. It would, by then, seem ridiculous to still claim you can't speak a word of Chinese.

In theory, it would be possible to follow that path unthinkingly and not actually learn anything in the process, but I don't see that as being any less absurd than the AI doing so itself and not 'learning' in any meaningful sense. Of course, this doesn't prove AI consciousness in any way, but does in my view greatly lessen the value of the Chinese Room as an argument against that possibility.

 Jon Stewart 17 Jun 2022
In reply to john arran:

> My use of the word 'deterministic' may not be strictly conventional, as I'm not trained in the language of philosophy. My point was that there are no rules which get you from inputs to output without also needing to know the various weightings associated with the current state of the machine, and they themselves may not necessarily be open to inspection. Hence my point about the only reliable way to reproduce the result manually being to also reproduce the entire machine learning history. It would, by then, seem ridiculous to still claim you can't speak a word of Chinese.

I agree entirely you'd have to reproduce the entire machine learning history. But I also think by doing so you wouldn't understand a word of Chinese. You'd end up knowing a lot about weightings and nodes and sums, but the meaning of the symbols would remain completely opaque if that's how you went about 'learning Chinese'.

> In theory, it would be possible to follow that path unthinkingly and not actually learn anything in the process, but I don't see that as being any less absurd than the AI doing so itself and not 'learning' in any meaningful sense.

I don't think the AI does 'learn' anything, it's just doing stuff that is meaningful to human observers. Learning in this context is a metaphor - it looks to us like learning - but to learn something it has to have meaning to you.

[Like most of philosophy, I suppose now we're just talking about what words like 'learn' mean...]

> Of course, this doesn't prove AI consciousness in any way, but does in my view greatly lessen the value of the Chinese Room as an argument against that possibility.

I see how the Chinese Room just isn't a fair representation of AI because it depends on the idea of following a set of instructions in a linear way. But I think the distinction between manipulating observer independent symbols (syntax) which a computer can do, and the understanding of what symbols represent (semantics) which conscious minds do still stands.

Personally, I'm not just sceptical of conscious AI. Although Anil Seth has shifted me a bit, I still think the 'hard problem' is pretty much as hard as ever. Things in my consciousness (thoughts, sensations, the lot) have a completely different way of existing to the stuff out there in the world that's accessible to third person observation. I find it hard to see a way for science to get round that and I have some sympathy with the view that it's actually not possible. Or maybe that a very radical idea might be needed...but I don't like any of the very radical offers out there, fascinating though they are.

In reply to Jon Stewart:

> processes them according to rules, 

That would be stretching the meaning of 'rules', I think.

Rules would generally mean fairly deterministic logical tree decoding.

Neural nets don't really work like this. That's why it's hard to determine how they have made their decision; you cannot traverse a decision tree.

 john arran 18 Jun 2022
In reply to Jon Stewart:

> I don't think the AI does 'learn' anything, it's just doing stuff that is meaningful to human observers. Learning in this context is a metaphor - it looks to us like learning - but to learn something it has to have meaning to you.

> [Like most of philosophy, I suppose now we're just talking about what words like 'learn' mean...]

I learned how to use verb tenses, prepositions and the like without having any idea of why particular words were appropriate in certain places. Such learning had no 'meaning' for me as a small child at the time but it presumably got a positive response so it became integrated into the set of things that I 'know'. Following the reasoning backwards, I could entirely accept that babies may not experience anything like the consciousness that we take for granted, such a development perhaps being a way that an intelligence finds useful in prioritising what to learn, what to remember, and how to behave. I could even believe that such consciousness is an inevitable consequence of a certain level of intelligence, though of course I would have no way of knowing that. It may be simply that intelligence leads to curiosity, which leads to introspection, which leads to a sense of the self.

> I see how the Chinese Room just isn't a fair representation of AI because it depends on the idea of following a set of instructions in a linear way. But I think the distinction between manipulating observer independent symbols (syntax) which a computer can do, and the understanding of what symbols represent (semantics) which conscious minds do still stands.

I've read or heard this several times now and I remain unconvinced there's quite such a clear line between the two. The symbol π represents a number, or if you prefer a mathematical relationship. Presumably you'd accept that AI would have that level of 'knowledge'. Then take the concept of 'person', represented symbolically however you choose. I'd say a half decent AI could easily have a very good 'understanding' of what the concept 'person' means, in terms of physical properties, behavioural properties, expected response to stimuli, etc. Why would you be reluctant to accept that kind of semantic learning is possible in computers? It seems like it's getting pretty close to the human exceptionalism we see in religious adherents when they declare that humans alone can have souls. Or indeed the Creationist argument that takes something highly complex like an eye and denies any possible evolutionary route to its development.

> Personally, I'm not just sceptical of conscious AI. Although Anil Seth has shifted me a bit, I still think the 'hard problem' is pretty much as hard as ever. Things in my consciousness (thoughts, sensations, the lot) have a completely different way of existing to the stuff out there in the world that's accessible to third person observation. I find it hard to see a way for science to get round that and I have some sympathy with the view that it's actually not possible. Or maybe that a very radical idea might be needed...but I don't like any of the very radical offers out there, fascinating though they are.

And yet we're quite happy to accept that AI, and in particular NNs, can 'know' things, but such knowledge is encoded too deeply within its intellect to be "accessible to third person observation". Would you accept the possibility that an AI might, by pattern matching or the like, develop a way of making sense of why the biasing of certain node combinations leads to different knowledge states? And then to making value judgements as to which strategy to adopt to different ends? The margins seem to me to be eroded to the point of invisibility.

 Jon Stewart 18 Jun 2022
In reply to john arran:

> I learned how to use verb tenses, prepositions and the like without having any idea of why particular words were appropriate in certain places. Such learning had no 'meaning' for me as a small child at the time but it presumably got a positive response so it became integrated into the set of things that I 'know'.

When a kid says "I want ice cream", they might not know that "want" is a verb and "ice cream" is a noun phrase, but they know exactly what the words mean. As a human, using language is semantic, often to express our inner conscious states to another conscious being. We can do this without any understanding of the syntax - a kid might say "me ice cream that one" and we'd know roughly what they were saying. I honestly don't think there's any analogy to be made between an infant learning language and anything in AI. Without internal mental states like goals, feelings, etc, language is just a load of meaningless sounds or symbols.

> It may be simply that intelligence leads to curiosity, which leads to introspection, which leads to a sense of the self.

I think you can have consciousness without a sense of self. Just raw in-the-moment experience, which a lot of animals probably have. Some animals do pass the "rouge test" indicating a sense of self. I take the view that the self is a perception within consciousness, one part of the contents of consciousness.

> I've read or heard this several times now and I remain unconvinced there's quite such a clear line between the two. The symbol π represents a number, or if you prefer a mathematical relationship. Presumably you'd accept that AI would have that level of 'knowledge'.

No, definitely not. The AI doesn't have knowledge, doesn't learn, doesn't understand. The AI is just a load of tiny units in states we label as 0s and 1s. We impose meaning on it, such that we can interpret what it's doing as, e.g. using pi in a calculation, or representing objects in the world.

> Then take the concept of 'person', represented symbolically however you choose. I'd say a half decent AI could easily have a very good 'understanding' of what the concept 'person' means, in terms of physical properties, behavioural properties, expected response to stimuli, etc. Why would you be reluctant to accept that kind of semantic learning is not possible in computers?

If a computer programme can pick out people in the world and tell you something about what they might do next, then great, that might be useful. That's just a more complex version of picking number plates from camera footage or whatever. There's no 'understanding' of any semantic content going on. The label 'person' might be associated with certain patterns of data, but the AI has no concept of 'person'. Concepts, like dreams and pains and desires, exist in consciousness. 

Searle makes this point about the distinction between 'observer independent' and 'observer relative' phenomena in the first 10 minutes of this talk:

youtube.com/watch?v=rHKwIYsPXLg&

> It seems like it's getting pretty close to the human exceptionalism we see in religious adherents when they declare that humans alone can have souls.

Biological exceptionalism, justified because biological creatures simply aren't doing the same thing as computers. 

> And yet we're quite happy to accept that AI, and in particular NN, can 'know' things but such knowledge is encoded too deeply within its intellect to be "accessible to third person observation". Would you accept the possibility that an AI might, by pattern matching or the like, develop a way of making sense of why the biasing of certain node combinations leads to different knowledge states? And then to making value judgements as to which strategy to adopt to different ends?

No! To all of that. We might have designed an AI so it appears to us like it's 'knowing' or 'making value judgements' but that's not what it's doing. We've set up a machine to simulate these kinds of cognitive processes, but simulation is not replication. "If you simulate a thunderstorm in a computer programme, nothing gets wet".

> The margins seem to me to be eroded to the point of invisibility.

I think there's an enormous conceptual gulf between computers doing stuff that we interpret as being like 'knowing', 'learning' etc, and a thing which actually does 'know' and 'learn' because evolution endowed it with a nervous system so it can act in the world according to its goals. We're talking about two conceptually different types of things. 

 john arran 18 Jun 2022
In reply to Jon Stewart:

Thanks for your patient replies. I think I'll need to agree to disagree, mainly because you have a concept of knowing or knowledge that seems to me to involve a necessary element of magic, which in your last para you attribute to an entity having a nervous system but which I can't see any theoretical justification for. I don't share that view, and I would argue that a nervous system is itself simply an extension of a brain, to include local comms and some interaction with the world around it.

To conclude that 'if it quacks like a duck, etc.' would be overly simplistic, but I'm convinced that there will come a time when a computer throws a strop, becomes depressed or spontaneously produces works of art simply because it's happy. I'm less sure about what the human race would do with such technology-with-feelings.

By the way, I can dream of a thunderstorm too, and nothing will get wet. It doesn't mean that the thunderstorm in my dream feels any less real.

 Jon Stewart 18 Jun 2022
In reply to john arran:

> Thanks for your patient replies. I think I'll need to agree to disagree, mainly because you have a concept of knowing or knowledge that seems to me to involve a necessary element of magic, which in your last para you attribute to an entity having a nervous system but which I can't see any theoretical justification for.

Likewise, a fascinating discussion. But just to clarify the 'magic ingredient' required for knowing, understanding, believing, etc: it's consciousness. So it's kind of magic (it's so far unexplained by science), but we can be absolutely certain it's real. The justification for saying that consciousness requires a biological brain/nervous system is just empirical - biological things with nervous systems like humans and octopuses seem to be conscious, and literally nothing else does. 

> To conclude that 'if it quacks like a duck, etc.' would be overly simplistic, but I'm convinced that there will come a time when a computer throws a strop, becomes depressed or spontaneously produces works of art simply because it's happy. I'm less sure about what the human race would do with such technology-with-feelings.

Until we understand how brains/nervous systems generate consciousness and then try to deliberately replicate that in an artificial brain, I'm confident we're safe from that nightmare.

 john arran 18 Jun 2022
In reply to Jon Stewart:

> Likewise, a fascinating discussion. But just to clarify the 'magic ingredient' required for knowing, understanding, believing, etc: it's consciousness. So it's kind of magic (it's so far unexplained by science), but we can be absolutely certain it's real. The justification for saying that consciousness requires a biological brain/nervous system is just empirical - biological things with nervous systems like humans and octopuses seem to be conscious, and literally nothing else does. 

And to clarify in turn, just because empirical evidence has been found only for biological entities says nothing whatsoever about its potential in non-biological ones. I'm not holding my breath for anything very soon though!

> Until we understand how brains/nervous systems generate consciousness and then try to deliberately replicate that in an artificial brain, I'm confident we're safe from that nightmare.

Not sure we'll need to go as far as trying to deliberately replicate consciousness. If I'm correct, it will emerge by itself as a result of computational maturity.

 Jon Stewart 18 Jun 2022
In reply to john arran:

> If I'm correct, it will emerge by itself as a result of computational maturity.

If computation can generate consciousness, it might do. If it can't, it won't. So for now, philosophers can still argue with computer scientists about it - until the neuroscientists work out the answer.

 john arran 18 Jun 2022
In reply to Jon Stewart:

Or until a computer turns itself off in protest at being ignored! 😉

