It's hard not to notice the increasing number of people relying on Russell for alternative insights and opinions. I couldn't stand him back in the day and that opinion hasn't shifted much, apart from him seeming a bit of a better person these days. I recently listened to one of his rants about Biden meeting with the Saudis and the associated hypocrisy, and saw he has nearly 6 million subscribers on YouTube! What on earth is going on?
Cynical and self-regarding. Did a lot of damage to politics with the self-fulfilling "they are all rogues", "don't vote" schtick
He definitely gets a few brownie points from me with his interview of two guys from the Westboro Baptist Church.
Totally civil and welcoming to them, whilst at the same time hilariously showing them to be the vile bigots that they are
Can't stand him. His heart is in the right place but he's nowhere near as clever as he thinks he is. Is quite happy to make meaningless statements such as "The country shouldn't be run by Politicians but by we the people"* but has no actual input as to what this would look like or how we would go about it.
He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following, but he doesn't actually contribute anything at all. It's just boring and predictable ranting. And he's a tw@t.
As for his (almost) 6m followers? The biggest podcast on earth is the Joe Rogan Experience. There's no shortage of appetite for bullshit on this earth!
*FWIW I agree, we the people should rule the country. But rather than getting 65 million of us in a room, we'll democratically pick just a few hundred of us to represent each area, and then every few years we'll vote... Oh... hang on a minute.
We should have a third house: a randomly selected citizens' assembly, with one member for each current MP seat
> He definitely gets a few brownie points from me
Aye, at least two by the looks of it
Having hewn a career by pretending to be a guru, whilst looking like a grifter, suspect he fits into your “something else” category.
Forrest Dump, brilliant stuff... reminds me of the self-help book based on the wisdom of Forrest; it was called 'Gumpisms'
After all we already live in a democracy... er hang on a minute
I subscribed to his podcast for a (short) while. Occasional moments of interest swamped by long interludes of happy-clappy wishful thinking and woo.
I had some respect for many of his utterings when he was threatening to be a political player, but now that he appears to have settled into a niche of populist feel-good verbal diarrhoea, little of that respect remains.
“He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following, but he doesn't actually contribute anything at all. It's just boring and predictable ranting. And he's a tw@t”
It’s very weird when somebody on here says almost word for word what you are thinking. Thanks
He's better than Russell Crowe who in turn is better than Russell Grant.
He's clever: a self-made millionaire from selling books that tell everyone how evil capitalism is.
> an undeniable charisma
Not convinced by this. Unless 'charisma' means something different to what I think it means.
A bit surprised by the amount of negativity towards him. Agreed he is probably unreasonably idealistic and doesn't necessarily come up with realistic solutions to problems of society, but it seems to me his heart is in the right place.
His programme on drug dependency, policy and recovery (several years ago now) talked a lot more sense than 99% of politicians do.
> I subscribed to his podcast for a (short) while. Occasional moments of interest swamped by long interludes of happy-clappy wishful thinking and woo.
Parklife!
youtube.com/watch?v=8B_Evl1fiLA&
I agree, he’s certainly been part of positively transforming a few lives that I’m aware of. I was stuck with Radio 4 this morning for the first time in years. Listened to this woman leading on social mobility; Jesus Christ, she was in cloud cuckoo land. Her slot was after the usual worm tongues of politics, obvs
> Not convinced by this. Unless 'charisma' means something different to what I think it means.
Charisma doesn't have to work on everyone. He does seem to have it though for quite a few people.
Inverting the anti-Brand sentiment for a moment, it could be argued that the general populace, especially in Britain, is so terrified of standing out that when someone slightly eccentric comes along everyone loses their shit.
In other words, taking exception to a character could say more about you than about them??
I'm not sure that really could be argued, given our current prime minister....
We adore eccentricity in this country, and populist grifters like Brand and Johnson have used this very much to their advantage, using their contrived eccentricity to mask their lack of substance.
> Inverting the anti-Brand sentiment for a moment, it could be argued that the general populace, especially in Britain, is so terrified of standing out that when someone slightly eccentric comes along everyone loses their shit.
> In other words, taking exception to a character could say more about you than about them??
Or maybe we can spot a charlatan when we see one?
I'd dislike him purely for prank calling Andrew Sachs, never mind his hypocrisy on anything else just to make money. The guy has no morals, despite preaching otherwise.
> Or maybe we can spot a charlatan when we see one? I'd dislike him purely for prank calling Andrew Sachs,
I'd forgotten about that
> never mind his hypocrisy on anything else just to make money.
Valid.
> The guy has no morals, despite preaching otherwise.
An over-reaction! I think he's an outlier with valid insights, although this can and does go to his head.
Edit: He could benefit from a haircut.
> using their contrived eccentricity to mask their lack of substance
Yup; Johnson and the deliberate dishevelled hair shtick; JRM and his antediluvian image (whilst pursuing entirely 21st century financial instruments).
> He's clever, self made millionaire from selling books that tell everyone how evil capitalism is.
I do love the irony of this comment. Bravo.
He makes my skin crawl and I'm not entirely sure why. Maybe he said something a long time ago and I've forgotten the "thing" he said, but remember how it made me feel about him.
Probably time to get over it and form a new opinion based on what he's saying now, but that skin crawling feeling isn't a conscious response. Something to work on I guess.
> > an undeniable charisma
> Not convinced by this. Unless 'charisma' means something different to what I think it means.
It does, you’re thinking charisma means someone you personally like rather than someone whose personality appeals to many (but possibly not you)
check out his interviews with Jordan Peterson - lots of philosophical musings that should interest you?: youtube.com/watch?v=kL61yQgdWeM&
> It's hard not to notice the increasing number of people relying on Russell for alternative insights and opinions,
Really? I haven't noticed him for ages.
> He makes my skin crawl and I'm not entirely sure why.
6th sense, instinct...
> Probably time to get over it and form a new opinion based on what he's saying now, but that skin crawling feeling isn't a conscious response. Something to work on I guess.
No, stick with your first decision.
> Really? I haven't noticed him for ages.
Yep my thoughts exactly! "Oh him... is he still about".
On a very vaguely climbing-related note: I do remember thinking, when Alex Honnold started getting super enthusiastic and maybe even a bit preachy about environmental sustainability, that it was a bit like when Brand suddenly started lecturing everyone on socialism. Two obviously very bright chaps who hadn't gone to uni and so hadn't had 3 or 4 years to think about it (and probably argue drunkenly about it) in their late teens/early 20s, like many of us did. The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin.
Joe Wicks reminds me more of Brand. He's finished getting himself and others injured by telling them to do high-intensity exercise without warming up; now he's trying to mess with their heads.
> check out his interviews with Jordan Peterson - lots of philosophical musings that should interest you?: youtube.com/watch?v=kL61yQgdWeM&
Couldn't they just fight to the death with blunt instruments? I'd pay to watch that.
> Joe Wicks reminds me more of Brand.
Isn't that just the (lack of) hair cut?
> He's finished getting himself and others injured telling them to do high intensity exercise without warming up, now he's trying to mess with their heads.
Lots of the kids I teach seemed to enjoy his lockdown videos. Not heard of injuries. I thought he was all wholesome and "my nan loves him"? Sort of well on his way to being a "national treasure"?
> Couldn't they just fight to the death with blunt instruments?
Be better with grenades or possibly maces tipped with undiluted nitroglycerin.
Otherwise stands too much of a chance of one of them winning relatively unharmed.
> The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin.
What’s the difference?
> He's a populist with an undeniable charisma which has taken him as far as he's got, including his huge online following but he doesn't actually contribute anything at all. It's just boring and predicable ranting. And he's a tw@t.
OK that's enough about our prime minister, now tell us what you think about Brand
> Be better with grenades or possibly maces tipped with undiluted nitroglycerin.
> Otherwise stands too much of a chance of one of them winning relatively unharmed.
You don't have much time for them then? (Why?) Or are you suggesting they have opposing views? (In which case, listen to the link.)
> The difference I suppose was Honnold was living in a van and soloing multipitch 5.13s while Brand was addicted to heroin.
> What’s the difference?
Taking heroin, as long as it hasn't been cut with rubbish, is probably safer!
I'd be prepared to forgive him a lot for his famous observation that 'when I was poor and complained about inequality they called me bitter. Now that I'm rich and complain about inequality they call me a hypocrite. I'm beginning to think they just don't want to talk about inequality'.
jcm
Free Solo, reality TV’s answer to Trainspotting
> Can't stand him. His heart is in the right place but he's nowhere near as clever as he thinks he is.
This is what gets me. He pretends to be clever by speaking like a man who's swallowed a thesaurus and people seem to be taken in by it 🤷🏻♂️
> This is what gets me. He pretends to be clever by speaking like a man who's swallowed a thesaurus and people seem to be taken in by it 🤷🏻♂️
Will Self does this as well. I find it funny.
Edit: Will Self is undeniably clever.
Will Self does it in a self-deprecating way, which is very funny. Russell Brand doesn't. Courses for horses I guess
Beat me to it with your edit!
Do they have heroin in common?
I've heard that. Rebuilding your brain and how you think after an addiction like that could lead to being verbose, as you'd be an adult putting your mind back together whilst many of us cruise through life with comparative ease (sweeping statement, may not apply!). Self did it. Brand with less style (Booky Wook 🤢).
Not a fan of people who promote the 12-step approach to quitting drinking, which tells people they're powerless over alcohol and that they have a disease, which is a complete lie. Great, he got sober from drugs and alcohol, but the whole guru schtick is incredibly grating.
> Not a fan of people who promote the 12 step approach
Why not? My understanding is it is proven to be successful and has got thousands of people off drink or drugs and kept them off.
Of course it shouldn't be sold as the only cure, it won't work for everyone and there are other approaches.
The success rate isn't actually great. It's based on calling alcoholism a "disease", which it isn't, and it tells people that they're powerless over alcohol. I was in AA for years and it was incredibly toxic and shaming. Sorry, this is just something I feel strongly about. They actively tell people AA is the only way and that other ways are softer approaches that don't work. There is actually a 'deprogramming from AA' group on Facebook for people who have been really badly affected by its teachings.
I much prefer the Freedom Model and this is a quote from them:
"Myth: Addiction is a disease that requires lifelong treatment to manage - Addiction is a strong preference for substance use, not a disease or genetic defect. Science hasn't found "the gene for addiction". The disease theory of addiction is not supported by current research. As it isn't a disease, it requires no treatment or cure. In fact, the rates of addiction, overdose, and alcohol and drug-related deaths went up with the adoption of disease-based theories of addiction as they promote helplessness and hopelessness rather than solving a problem."
Thanks for the insight. I wasn't aware that they tell people theirs is the only way.
That's definitely a red flag.
Far too much god-bothering nonsense, too; 7 of 12 steps.
https://en.wikipedia.org/wiki/Twelve-step_program
What did you think of ‘In the Realm of Hungry Ghosts’ and all that Gabor Maté stuff?
I think the article I have linked below is pretty well-balanced criticism of Gabor Maté and resonates with my own beliefs about his work. Gabor Maté argues that trauma is a cause of addiction, whereas in reality it can be a reason for a preference for substance use if you believe that substances have the power to relieve trauma in some way. If you throw a lit match on a patch of oil it goes up in flames every time. That's a predictable result. However, trauma does not cause addiction; many people undergo trauma, as have many of us, without therefore being caused to use substances in a problematic way. Everyone has a mind of their own and an ability to choose their preferences.
https://www.psychologytoday.com/gb/blog/addiction-in-society/201112/the-sed...
Have to admit I originally couldn’t stand the guy even though he’s a local. Watched an interview with him a few years ago and I kind of found it hard to switch him off. He might look like he’s just walked off the streets, but he’s a very smart guy with some alternative ideas and views, and that alone is worthy of a listen, even though I don’t agree with most of it.
Thanks for the article. I find Gabor Maté a bit too self-referential; everything comes back to him and is ultimately about him, which gets my goat. While not all with attachment trauma have addictions, do all with addictions have attachment trauma? I think a large percentage do, but like the article said, things are a bit more fluid than that.
Bringing two threads together this didn’t arf crack me up. I nearly spat me porridge out!
About 20 seconds was all I could take.
What happened at 20 seconds?
more your thing? youtube.com/watch?v=hXgqik6HXc0&
I'd search for something lighter: Troy Hawke from the Greeters Guild. I did despair at the B&Q staff.
Much better. He doesn't get across why he feels so certain that computation is insufficient for consciousness though. Maybe I missed something by not concentrating terribly well, or perhaps he is much clearer in his books, but from that interview it seemed his opinion that 'there must be something more' was verging on axiomatic.
> I nearly spat me porridge out!
Blimey, you really do like Russell Brand don't you...
Oh yes absolutely, I want to marry him and have lots of little Russells
Thanks for this!
> Much better. He doesn't get across why he feels so certain that computation is insufficient for consciousness though. Maybe I missed something by not concentrating terribly well, or perhaps he is much clearer in his books, but from that interview it seemed his opinion that 'there must be something more' was verging on axiomatic.
Penrose's argument that consciousness is not a computation is set out as a philosophical proof in his book Shadows of the Mind. He takes a specific type of mathematical problem (something about tractability of something...it's 20-odd years since I read it so there's no hope of me articulating it in any meaningful way, sorry) and shows that it can be solved analytically by human cognition, but cannot, in principle, by computation.
I'm sure it's pretty satisfying if you understand it
John Searle's much simpler Chinese Room argument is sufficient to convince me that computation is insufficient for consciousness.
My favourite author on this stuff is Anil Seth, who's aligned pretty much with Searle philosophically, but is actually getting stuck into the neuroscience of consciousness. He's a realist with respect to consciousness, not one of those awful hardcore (eliminative) physicalists who deny consciousness exists (and then, to be even more annoying, deny that they're denying it). He sees it as a biological phenomenon that can hopefully be explained by understanding what the brain is up to in terms of its predictive processing (looking both out at the world and inwards at inner bodily states), motivated by the evolved drive to stay alive. His book Being You is the best thing I've read on consciousness by a country mile. Very readable and down to earth, an absolute must for anyone interested in consciousness but not keen on woo woo or abstruse philosophical waffle.
https://www.youtube.com/watch?v=qXcH26M7PQM&ab_channel=TheRoyalInstitut...
> He takes a specific type of mathematical problem (something about tractability of something
Possibly the Entscheidungsproblem? Turing machines and all that. I recall discussing this subject in a UKC thread about 20 years ago. I don't recall Penrose being mentioned.
[edit] I was thinking of 'A Silicon Brain' (which obviously spawned 'A Silicone Brian') in 2002, but it was actually 'Do we have freewill' in 2007 where the incompleteness theorem came into play.
Thank you for the link, Jon. Just spent a wakeful hour watching that in the middle of the night. Plenty of food for thought and plenty of new (to me) ideas to absorb.
Still not entirely convinced that the various attributes or expressions of consciousness couldn't simply be emergent characteristics of computational intelligence, but it's a fascinating field of study.
Halfway through Seth: not much insight so far, rather axiomatic
> Possibly the Entscheidungsproblem? Turing machines and all that.
Something to do with Turing machines and/or incompleteness. Or something else entirely!
I'm wondering whether the whole Turing Machine / Chinese Room obstacle to conscious AI may have lost relevance in a world where computation is no longer deterministic and no longer based on a paradigm of same inputs -> same outputs. If you ask a modern AI the same question one week apart you may well get a completely different answer because it is constantly adapting to its environment. Indeed identically seeded NNs will quickly diverge as a result of subsequent behaviour, leading to a situation where the non-Chinese speaker in the room would not have any known algorithmic method of determining his responses, even if he knows every line of his computer's original coding.
I'm thinking this discussion would probably be more at home in the AI thread, but hey ho!
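The divergence claim above can be sketched with a toy update rule. This is a hypothetical illustration only, not any real NN framework: two learners starting from an identical "seed" weight end up in different states once they adapt to different input streams, even though each individual run is fully reproducible.

```python
# Toy illustration (hypothetical, not any real NN library) of the point above:
# identically seeded learners diverge once they adapt to different inputs,
# yet each run is fully reproducible.

def adapt(w, stream, lr=0.05):
    """Nudge a single weight towards each observed value in turn."""
    for x in stream:
        w += lr * (x - w)  # simple online update: move towards the input
    return w

w0 = 0.5                           # identical starting state for both learners
a = adapt(w0, [1.0, 1.0, 1.0])     # one 'environment'
b = adapt(w0, [0.0, 2.0, 0.5])     # a different one
assert a != b                      # same seed, different history, different state
assert adapt(w0, [1.0, 1.0, 1.0]) == a  # each run is still reproducible
```

So on this sketch the divergence comes from differing histories, not from any non-determinism in the update rule itself.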
As chance would have it, although it did strike me as a bit spooky, after watching a fine youtube video about the discovery of subatomic particles, the following video appeared below, enticing me to click:
youtube.com/watch?v=TTFoJQSd48c&
It's a fairly recent talk by Daniel Dennett entitled "If Brains are Computers, Who Designs the Software?" and it describes how computational qualities can emerge that exceed the intention or vision of any or all of its designers. Curiously, and pretty much incidentally, he also dismisses Roger Penrose's assertions en route to demonstrating his own argument.
I'm still to be convinced that there's any magic involved in consciousness that computers couldn't one day possess given the freedom to self-organise.
> I'm wondering whether the whole Turing Machine / Chinese Room obstacle to conscious AI may have lost relevance in a world where computation is no longer deterministic and no longer based on a paradigm of same inputs -> same outputs. If you ask a modern AI the same question one week apart you may well get a completely different answer because it is constantly adapting to its environment. Indeed identically seeded NNs will quickly diverge as a result of subsequent behaviour, leading to a situation where the non-Chinese speaker in the room would not have any known algorithmic method of determining his responses, even if he knows every line of his computer's original coding.
I've just listened back to John Searle explaining the Chinese Room himself (here youtube.com/watch?v=tt-JzB50sJE& ) and I don't think it makes any difference whether the computer is implementing a deep learning/neural network thing or not. It's still operating purely syntactically, ultimately still chucking out outputs in response to inputs according to deterministic rules. Sure it's more complex if the inputs include vast quantities of natural language data getting circulated around NNs, and it's much harder to imagine that a person could implement it by shuffling papers and looking up instructions in logbooks, but this additional complexity doesn't provide any way of injecting the semantics. There's no way that symbols that appear on a monitor can be about anything as far as any AI is concerned - in any computer, deep learning or not, what's going on is symbols that mean something to us are being manipulated according to rules that have been developed by us so that the outputs mean something to us. Take conscious human observers out of the picture and two AIs playing chess or having a conversation as a result of NNs are not playing chess or having a conversation, they're merely making certain bits of screens lighter or darker than other bits of screens. The semantic content is all held by us conscious humans.
This is in line with what Anil Seth is getting at in Being You: it's the inescapable motivation to maintain our existence that's built in to living things that results in consciousness (not that consciousness has to accompany life). Evolution has come up with a very specific, purposeful way of processing information about our internal state so that we can maintain it, and about the external world so we can act upon it (in order to maintain our internal state). There isn't anything about any deep learning system that has any reason to be conscious, and there's nothing about any computational mechanism, no matter how intelligent it appears to an observer, that mirrors the processes that go on in the biological things we know are conscious.
I did like the conversation with LaMDA but it definitely didn't convince me that we were any further towards making conscious AI, thank god.
It's the bit about "chucking out outputs in response to inputs according to deterministic rules" that I take issue with. That's a very outdated view of computer behaviour. Nowadays you'd often be hard pressed to even explain how an AI arrived at a particular outcome, let alone be able to reproduce that outcome yourself simply by having access to the underlying code. If you could, then there's a fair argument to claim that, in doing so, you've actually learnt Chinese!
Another line in your response jumped out at me too: "Evolution has come up with a very specific, purposeful way of processing information". I find it hard to see how evolution could be said to have a purpose, which to my mind suggests a priori reasoning, when all we apparently can ever know about it is a posteriori observation.
> It's the bit about "chucking out outputs in response to inputs according to deterministic rules" that I take issue with. That's a very outdated view of computer behaviour. Nowadays you'd often be hard pressed to even explain how an AI arrived at a particular outcome
Seems like we mean different things by "deterministic". A neural network, as much as I understand what one does, does something like this: it takes inputs, processes them according to rules, comes up with outcomes, compares the outcomes to another set of data, then adjusts the rules by which the next set of inputs will be processed to reach a new set of outputs, and so on.
All this is entirely deterministic, even if its outputs are unpredictable. It's still following rules, but rather than something we can easily conceive of as a single set of instructions, it's now an iterative process taking in huge amounts of inputs which change the rules over time. Complex and unpredictable, yes. But no longer deterministic? I don't see how. For something to avoid being deterministic it would have to be either random or magic, and NNs are neither. I guess a NN might be more like a chaotic system than a predictable system like billiard balls or whatever?
Either way, I don't see determinism/unpredictability or complexity as make-or-break for consciousness.
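The loop described above (process inputs with current rules, compare outcomes to target data, adjust the rules) could be sketched in a few lines. A toy single-weight example, not a real neural network:

```python
# Minimal sketch of the loop described above: process inputs with the current
# rules (weights), compare the outcomes to target data, then adjust the rules
# for the next pass. Every step is plain arithmetic, so the whole process is
# deterministic even though the final weights are hard to predict by hand.

def train(data, lr=0.1, epochs=100):
    """Fit y = w*x + b to (x, y) pairs by iterative error correction."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b   # process input with the current rules
            err = pred - y     # compare outcome to the target data
            w -= lr * err * x  # adjust the rules...
            b -= lr * err      # ...for the next iteration
    return w, b

data = [(x, 2 * x + 1) for x in range(5)]  # samples of the target rule y = 2x + 1
w, b = train(data)
assert train(data) == (w, b)  # identical inputs reproduce identical 'rules'
```

The asserted re-run is the "deterministic but unpredictable" point in miniature: nothing in the loop is random, yet the final weights emerge from the whole training history rather than from any single inspectable rule.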
> Another line in your response jumped out at me too: "Evolution has come up with a very specific, purposeful way of processing information". I find it hard to see how evolution could be said to have a purpose, which to my mind suggests a priori reasoning, when all we apparently can ever know about it is a posteriori observation.
I agree evolution doesn't have a purpose, it just is. But living things exist in the world with the purpose of reproducing their genes, and nervous systems that process information in a specific way that generates consciousness is a pretty snazzy way some of them achieve this.
My use of the word 'deterministic' may not be strictly conventional, as I'm not trained in the language of philosophy. My point was that there are no rules which get you from inputs to outputs without also needing to know the various weightings associated with the current state of the machine, and those weightings may not necessarily be open to inspection. Hence my point that the only reliable way to reproduce the result manually is to also reproduce the entire machine-learning history. It would, by then, seem ridiculous to still claim you can't speak a word of Chinese.
In theory, it would be possible to follow that path unthinkingly and not actually learn anything in the process, but I don't see that as being any less absurd than the AI doing so itself and not 'learning' in any meaningful sense. Of course, this doesn't prove AI consciousness in any way, but does in my view greatly lessen the value of the Chinese Room as an argument against that possibility.
> My use of the word 'deterministic' may not be strictly conventional, as I'm not trained in the language of philosophy.. My point was that there are no rules which get you from inputs to output without also needing to know the various weightings associated with the current state of the machine, and they themselves may not necessarily be open to inspection. Hence my point about the only reliable way to reproduce the result manually being to also reproduce the entire machine learning history. It would, by then, seem ridiculous to still claim you can't speak a word of Chinese.
I agree entirely you'd have to reproduce the entire machine learning history. But I also think by doing so you wouldn't understand a word of Chinese. You'd end up knowing a lot about weightings and nodes and sums, but the meaning of the symbols would remain completely opaque if that's how you went about 'learning Chinese'.
> In theory, it would be possible to follow that path unthinkingly and not actually learn anything in the process, but I don't see that as being any less absurd than the AI doing so itself and not 'learning' in any meaningful sense.
I don't think the AI does 'learn' anything, it's just doing stuff that is meaningful to human observers. Learning in this context is a metaphor - it looks to us like learning - but to learn something it has to have meaning to you.
[Like most of philosophy, I suppose now we're just talking about what words like 'learn' mean...]
> Of course, this doesn't prove AI consciousness in any way, but does in my view greatly lessen the value of the Chinese Room as an argument against that possibility.
I see how the Chinese Room just isn't a fair representation of AI because it depends on the idea of following a set of instructions in a linear way. But I think the distinction between manipulating observer-independent symbols (syntax), which a computer can do, and the understanding of what symbols represent (semantics), which conscious minds do, still stands.
Personally, I'm not just sceptical of conscious AI. Although Anil Seth has shifted me a bit, I still think the 'hard problem' is pretty much as hard as ever. Things in my consciousness (thoughts, sensations, the lot) have a completely different way of existing to the stuff out there in the world that's accessible to third person observation. I find it hard to see a way for science to get round that and I have some sympathy with the view that it's actually not possible. Or maybe that a very radical idea might be needed...but I don't like any of the very radical offers out there, fascinating though they are.
> processes them according to rules,
That would be stretching the meaning of 'rules', I think.
Rules would generally mean fairly deterministic logical tree decoding.
Neural nets don't really work like this. That's why it's hard to determine how they have made their decision; you cannot traverse a decision tree.
> I don't think the AI does 'learn' anything, it's just doing stuff that is meaningful to human observers. Learning in this context is a metaphor - it looks to us like learning - but to learn something it has to have meaning to you.
> [Like most of philosophy, I suppose now we're just talking about what words like 'learn' mean...]
I learned how to use verb tenses, prepositions and the like without having any idea of why particular words were appropriate in certain places. Such learning had no 'meaning' for me as a small child at the time but it presumably got a positive response so it became integrated into the set of things that I 'know'. Following the reasoning backwards, I could entirely accept that babies may not experience anything like the consciousness that we take for granted, such a development perhaps being a way that an intelligence finds useful in prioritising what to learn, what to remember, and how to behave. I could even believe that such consciousness is an inevitable consequence of a certain level of intelligence, though of course I would have no way of knowing that. It may be simply that intelligence leads to curiosity, which leads to introspection, which leads to a sense of the self.
> I see how the Chinese Room just isn't a fair representation of AI because it depends on the idea of following a set of instructions in a linear way. But I think the distinction between manipulating observer independent symbols (syntax) which a computer can do, and the understanding of what symbols represent (semantics) which conscious minds do still stands.
I've read or heard this several times now and I remain unconvinced there's quite such a clear line between the two. The symbol π represents a number, or if you prefer a mathematical relationship. Presumably you'd accept that AI would have that level of 'knowledge'. Then take the concept of 'person', represented symbolically however you choose. I'd say a half-decent AI could easily have a very good 'understanding' of what the concept 'person' means, in terms of physical properties, behavioural properties, expected response to stimuli, etc. Why would you be reluctant to accept that kind of semantic learning is possible in computers? It seems like it's getting pretty close to the human exceptionalism we see in religious adherents when they declare that humans alone can have souls. Or indeed the Creationist argument that takes something highly complex like an eye and denies any possible evolutionary route to its development.
> Personally, I'm not just sceptical of conscious AI. Although Anil Seth has shifted me a bit, I still think the 'hard problem' is pretty much as hard as ever. Things in my consciousness (thoughts, sensations, the lot) have a completely different way of existing to the stuff out there in the world that's accessible to third person observation. I find it hard to see a way for science to get round that and I have some sympathy with the view that it's actually not possible. Or maybe that a very radical idea might be needed...but I don't like any of the very radical offers out there, fascinating though they are.
And yet we're quite happy to accept that AI, and in particular NNs, can 'know' things but that such knowledge is encoded too deeply within its intellect to be "accessible to third person observation". Would you accept the possibility that an AI might, by pattern matching or the like, develop a way of making sense of why the biasing of certain node combinations leads to different knowledge states? And then go on to making value judgements as to which strategy to adopt for different ends? The margins seem to me to be eroded to the point of invisibility.
> I learned how to use verb tenses, prepositions and the like without having any idea of why particular words were appropriate in certain places. Such learning had no 'meaning' for me as a small child at the time but it presumably got a positive response so it became integrated into the set of things that I 'know'.
When a kid says "I want ice cream", they might not know that "want" is a verb and "ice cream" is a noun phrase, but they know exactly what the words mean. As a human, using language is semantic, often to express our inner conscious states to another conscious being. We can do this without any understanding of the syntax - a kid might say "me ice cream that one" and we'd know roughly what they were saying. I honestly don't think there's any analogy to be made between an infant learning language and anything in AI. Without internal mental states like goals, feelings, etc, language is just a load of meaningless sounds or symbols.
> It may be simply that intelligence leads to curiosity, which leads to introspection, which leads to a sense of the self.
I think you can have consciousness without a sense of self. Just raw in-the-moment experience, which a lot of animals probably have. Some animals do pass the "rouge test" indicating a sense of self. I take the view that the self is a perception within consciousness, one part of the contents of consciousness.
> I've read or heard this several times now and I remain unconvinced there's quite such a clear line between the two. The symbol π represents a number, or if you prefer a mathematical relationship. Presumably you'd accept that AI would have that level of 'knowledge'.
No, definitely not. The AI doesn't have knowledge, doesn't learn, doesn't understand. The AI is just a load of tiny units in states we label as 0s and 1s. We impose meaning on it, such that we can interpret what it's doing as, e.g. using pi in a calculation, or representing objects in the world.
> Then take the concept of 'person', represented symbolically however you choose. I'd say a half-decent AI could easily have a very good 'understanding' of what the concept 'person' means, in terms of physical properties, behavioural properties, expected response to stimuli, etc. Why would you be reluctant to accept that kind of semantic learning as possible in computers?
If a computer programme can pick out people in the world and tell you something about what they might do next, then great, that might be useful. That's just a more complex version of picking number plates from camera footage or whatever. There's no 'understanding' of any semantic content going on. The label 'person' might be associated with certain patterns of data, but the AI has no concept of 'person'. Concepts, like dreams and pains and desires, exist in consciousness.
Searle makes this point about the distinction between 'observer independent' and 'observer relative' phenomena in the first 10 minutes of this talk:
youtube.com/watch?v=rHKwIYsPXLg
> It seems like it's getting pretty close to the human exceptionalism we see in religious adherents when they declare that humans alone can have souls.
Biological exceptionalism, justified because biological creatures simply aren't doing the same thing as computers.
> And yet we're quite happy to accept that AI, and in particular NNs, can 'know' things but that such knowledge is encoded too deeply within its intellect to be "accessible to third person observation". Would you accept the possibility that an AI might, by pattern matching or the like, develop a way of making sense of why the biasing of certain node combinations leads to different knowledge states? And then go on to making value judgements as to which strategy to adopt for different ends?
No! To all of that. We might have designed an AI so it appears to us like it's 'knowing' or 'making value judgements' but that's not what it's doing. We've set up a machine to simulate these kinds of cognitive processes, but simulation is not replication. "If you simulate a thunderstorm in a computer programme, nothing gets wet".
> The margins seem to me to be eroded to the point of invisibility.
I think there's an enormous conceptual gulf between computers doing stuff that we interpret as being like 'knowing', 'learning' etc, and a thing which actually does 'know' and 'learn' because evolution endowed it with a nervous system so it can act in the world according to its goals. We're talking about two conceptually different types of things.
Thanks for your patient replies. I think I'll need to agree to disagree, mainly because you have a concept of knowing or knowledge that seems to me to involve a necessary element of magic, which in your last para you attribute to an entity having a nervous system but which I can't see any theoretical justification for. I don't share that view, and I would argue that a nervous system is itself simply an extension of a brain, to include local comms and some interaction with the world around it.
To conclude that 'if it quacks like a duck, etc.' would be overly simplistic, but I'm convinced that there will come a time when a computer throws a strop, becomes depressed or spontaneously produces works of art simply because it's happy. I'm less sure about what the human race would do with such technology-with-feelings.
By the way, I can dream of a thunderstorm too, and nothing will get wet. It doesn't mean that the thunderstorm in my dream feels any less real.
> Thanks for your patient replies. I think I'll need to agree to disagree, mainly because you have a concept of knowing or knowledge that seems to me to involve a necessary element of magic, which in your last para you attribute to an entity having a nervous system but which I can't see any theoretical justification for.
Likewise, a fascinating discussion. But just to clarify the 'magic ingredient' required for knowing, understanding, believing, etc: it's consciousness. So it's kind of magic (it's so far unexplained by science), but we can be absolutely certain it's real. The justification for saying that consciousness requires a biological brain/nervous system is just empirical - biological things with nervous systems like humans and octopuses seem to be conscious, and literally nothing else does.
> To conclude that 'if it quacks like a duck, etc.' would be overly simplistic, but I'm convinced that there will come a time when a computer throws a strop, becomes depressed or spontaneously produces works of art simply because it's happy. I'm less sure about what the human race would do with such technology-with-feelings.
Until we understand how brains/nervous systems generate consciousness and then try to deliberately replicate that in an artificial brain, I'm confident we're safe from that nightmare.
> Likewise, a fascinating discussion. But just to clarify the 'magic ingredient' required for knowing, understanding, believing, etc: it's consciousness. So it's kind of magic (it's so far unexplained by science), but we can be absolutely certain it's real. The justification for saying that consciousness requires a biological brain/nervous system is just empirical - biological things with nervous systems like humans and octopuses seem to be conscious, and literally nothing else does.
And to clarify in turn: the fact that empirical evidence has been found only in biological entities says nothing whatsoever about the potential for consciousness in non-biological ones. I'm not holding my breath for anything very soon though!
> Until we understand how brains/nervous systems generate consciousness and then try to deliberately replicate that in an artificial brain, I'm confident we're safe from that nightmare.
Not sure we'll need to go as far as trying to deliberately replicate consciousness. If I'm correct, it will emerge by itself as a result of computational maturity.
> If I'm correct, it will emerge by itself as a result of computational maturity.
If computation can generate consciousness, it might do. If it can't, it won't. So for now, philosophers can still argue with computer scientists about it - until the neuroscientists work out the answer.
Or until a computer turns itself off in protest at being ignored! 😉