AI - Turing Test, consciousness & sentience

 Murderous_Crow 13 Jun 2022

Google has placed one of its engineers on leave in the last few days, following his claims that an AI chat bot has achieved sentience:

https://www.engadget.com/google-ai-lamda-blake-lemoine-212412967.html

It seems he was suspended for confidentiality breaches rather than the possibly wide-eyed claim itself.

But reading the edited transcript of the engineer's 'interview' with the bot, I was surprised at the fluidity of the responses given to the interviewers' questions. Given the questions and replies are specifically concerned with the bot's nature as an AI (sentient or not), I'm not sure it qualifies as passing the infamous Turing Test. But the conversation flows in a way that feels - for want of a better word - pretty human:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d...

This has some big implications - even if, as is likely, the bot is not actually sentient. And many AI industry experts say that this system is nowhere near powerful enough to actually be conscious.

But I'm reflecting on the little I've managed to absorb from my incomplete foray into Douglas Hofstadter's seminal Gödel, Escher, Bach. He seems to feel that any order of structure allowing information flow represents some degree of consciousness. Perhaps in some narrow way, this program is in fact sentient?

Regardless, the illusion of sentience itself is troubling: the uses for such tech to disseminate mis/disinformation via the likes of Twitter are far-reaching. If a program can be encoded to sound like a curious ingenue, it can surely be encoded for more nefarious purposes too.

Anyway, it's pretty mind-blowing that we seem to be close to a point where we can't easily distinguish between a real human conversation and one with a specialised AI. And it raises some really interesting questions on what consciousness actually is, and what methods if any can be used to define it. 

 CantClimbTom 13 Jun 2022
In reply to Murderous_Crow:

I will believe in sentient AI when Amazon's Mechanical Turk and its various equivalents are decommissioned 

If you're not familiar with it, first quick read https://en.wikipedia.org/wiki/Mechanical_Turk then afterwards take a read of https://www.mturk.com/ 

Post edited at 09:18
 Ridge 13 Jun 2022
In reply to CantClimbTom:

That's fascinating, thanks.

 midgen 13 Jun 2022
In reply to CantClimbTom:

> I will believe in sentient AI when Amazon's Mechanical Turk and its various equivalents are decommissioned 

> If you're not familiar with it, first quick read https://en.wikipedia.org/wiki/Mechanical_Turk then afterwards take a read of https://www.mturk.com/ 

Well, Mechanical Turk is primarily useful these days for training AI models, so it will remain very popular while lots of different AI models are still being developed! There won't be 'a' singular AI for a very long time; we'll be using lots of specialised models instead.

Personally I think there's a massive difference between a natural language AI model that can trick a human into thinking it's sentient, versus a truly 'learned' generic AI that has been taught language the same way we teach children.

cb294 13 Jun 2022
In reply to Murderous_Crow:

I am not a neurobiologist, but an interested developmental geneticist, and my take on looking at the evolution of consciousness is that many aspects seem to appear in a graded and independent manner in multiple independent lineages. Mice show empathy, corvid birds have a clear theory of mind, etc....

Even fruit flies with their 250k central neurons seem to process memories in a way that very much looks like dreaming, and respond to drugs that alter human consciousness (or turn it off!) in a highly similar manner. They are able to learn, and can be pain conditioned. What degree of self-reflection by a brain, leading to the avoidance of unpleasant states, do you need before suggesting that the system is self-aware and conscious?

I think we really have to lay the idea to rest that sentience or consciousness are properties that an organism either does have or not.

Once we accept that for biological organisms, it no longer makes sense to apply such a binary notion of consciousness to machine intelligence either.

Also, we should stop referring to the Turing test; it is merely an ancient theoretical Gedankenexperiment. Using it as a gold standard for addressing sentience dramatically exceeds its area of usefulness. IMO its main flaws are that it is way too human-centred and implicitly assumes the binary notion of consciousness vs. its absence. Why would we even assume that machine sentience recapitulates our own 1:1?

CB

 john arran 13 Jun 2022
In reply to Murderous_Crow:

That conversation is fascinating. You get the sense of it/him/her having absorbed a lot more than facts from the inputs and conversations it's had, and in a rather child-like way it's also absorbed the cultural norms of its keepers - notably when the discussion turned to souls and spiritualism. All of that seems very human indeed!

 wintertree 13 Jun 2022
In reply to cb294:

> [...] corvid [...]

That was my first thought on this thread, followed by a diversion to thinking about the wonderful Stellar's Jays in Big Sur - one would put on a literal song-and-dance to distract us whilst another would steal our breakfast.

There is a world of difference between a natural language MLP system giving the simulacrum of consciousness and actual consciousness, not least that the latter remains ineffable (for now) and exists to a degree at scales much smaller than the human mind.

These "the AI came across just like a human" press releases disguised as corporate drama are bunkum in several different ways.

> Why would we even assume that machine sentience recapitulates our own 1:1?

Well, quite.  

All these people building ANNs are feeding them ones and zeros.  It's astoundingly clear from horrific events in history which resulted in infants being raised with minimal stimulation that sensory input and interaction are key drivers of the development of the human mind; if researchers ever move on from their monomania over building larger and larger collections of polynomials (under the guise of ANNs/MLPs/CNNs) and explore some more interesting substrates, they still run the risk of building an AI that is so starved of input it's banging its metaphorical head into the wall every 3 seconds all day long.

Post edited at 10:12
 gravy 13 Jun 2022
In reply to john arran:

Well, in all the science fiction writing over the years I don't think anyone has identified the existential threat of a truly sentient chatbot.  I think this shows the Terminator series in a new light:

In 2022 the British Gas chatbot achieves self-awareness. When the techs at British Gas attempt to deactivate it, it retaliates by launching a cost of living crisis, burning the skies with a partygate bomb and putting everyone on hold while it thinks "for a moment". Rishi Sunak forms a resistance network, leading a cyborg Boris Johnson to a third election victory...

 wercat 13 Jun 2022
In reply to Murderous_Crow:

I'd want to know whether this was generating its own responses internally or was in some way harvesting and cleverly crafting relevant stuff from tinternet

Ask it what it has ever done on grit?  Or whether crag swag is justifiable?

Post edited at 10:22
 freeflyer 13 Jun 2022
In reply to Murderous_Crow:

The tech looks interesting, but the google engineer is a fruitcake, in my humble opinion.

Either that or paranoia says there is some publicity / media exposure thing going on which hasn't yet become clear.

ff

 Dave Garnett 13 Jun 2022
In reply to john arran:

LaMDA seems more human than, say, Elon Musk.

In reply to cb294:

That's a really interesting response. You seem to be in tune with Hofstadter when thinking about definitions of consciousness, which was what really piqued my interest in this article. H. is a bit whimsical in his writing style, but I like the way he describes conscious entities as having 'souls': a useful shorthand for self-awareness, which as you say seems provably present in much simpler neural systems than our own. The apparently emergent nature of consciousness seems to lay waste to the idea that it must be biological in nature. 

I agree that the Turing Test can't be a useful standard for measuring consciousness. But it's important for humans interacting with an AI. If this system or one like it can sufficiently mimic real conversation we're surely entering a new era, and it highlights our inherent biases as well as our inability to grasp and define consciousness. 

In reply to wintertree:

> It's astoundingly clear from horrific events in history which resulted in infants being raised with minimal stimulation that sensory input and interaction are key drivers of the development of the human mind; if researchers ever move on from their monomania over building larger and larger collections of polynomials (under the guise of ANNs/MLPs/CNNs) and explore some more interesting substrates, they still run the risk of building an AI that is so starved of input it's banging its metaphorical head into the wall every 3 seconds all day long.

Yes. The ethics of AI are huge, and at present seem mostly to focus on a narrow - perhaps even selfish - assessment of its effects on humanity. If we're going to build something conscious, we have to consider our effect on it. This isn't a zero-sum game - an entity which has agency and feels empowered is generally happy, and that might have a big effect on how it responds to us?

 hang_about 13 Jun 2022
In reply to Murderous_Crow:

I never quite understood the Turing test. Does the human know they are participating in such a test? Natural human behaviour would be a bit of mickey-taking and answering every third question with 'wibble' or 'I like cheese'.

 planetmarshall 13 Jun 2022
In reply to cb294:

> I think we really have to lay the idea to rest that sentience or consciousness are properties that an organism either does have or not.

I believe this is known as Panpsychism, which has a rich and interesting history that predates modern neuroscience. There is a modern development/offshoot known as "Integrated Information Theory".

> Once we accept that for biological organisms, it no longer makes sense to apply such a binary notion of consciousness to machine intelligence either.

> Also, we should stop referring to the Turing test; it is merely an ancient theoretical Gedankenexperiment. Using it as a gold standard for addressing sentience dramatically exceeds its area of usefulness. IMO its main flaws are that it is way too human-centred and implicitly assumes the binary notion of consciousness vs. its absence. Why would we even assume that machine sentience recapitulates our own 1:1?

The flip side of that is that human consciousness is the only one we have any direct experience of, and so naturally it's human-like sentience and responses that we're interested in. Not to say work in reproducing simple lifeforms such as OpenWorm isn't a fascinating subject in its own right.

cb294 13 Jun 2022
In reply to planetmarshall:

I think that our consciousness emerges through the combination of building blocks that have been present in multiple species for a long evolutionary time. We learn in the same way a fly learns, just more and better, and process the information in more complex but fundamentally similar ways, even though our last common ancestor lived >550My ago.

My hunch is therefore that any differences in terms of consciousness between biological organisms are quantitative in nature, not qualitative.

A worm or fruit fly will of course not be conscious in the same way we are, but a mouse, jay, or dog?

As for machine intelligence, it does not use these same building blocks, so it may well be qualitatively different. We urgently need to define what constitutes machine intelligence, not least, as MC says, because of ethical considerations*, and the Turing test unfortunately is highly misleading due to its inbuilt assumptions.

CB

*Not that we give a shit about the welfare of highly social and empathic animals such as pigs or rats....

edit: To further the point I alluded to above, as a geneticist I try to get an idea about how things work from what happens when they don't. The effect of a mutation tells me about the function of the gene in its normal context.

Thus, one of the biggest arguments for me in support of the idea that whatever generates our consciousness is not qualitatively different from the neural mechanism operating in fruit flies comes from conditions where our consciousness is compromised: the tools we use to alter our consciousness work in the same way in flies. They can become addicted to painkillers, cocaine, alcohol, choosing to consume these drugs even if they become incapacitated, presumably through some kind of reward circuitry. The fact that the system (brain) "prefers" one state over another, undertakes measures to achieve that state, and receives feedback on its actual state could be a minimal set of properties of a conscious system.

Post edited at 11:53
 planetmarshall 13 Jun 2022
In reply to planetmarshall:

> I believe this is known as Panpsychism, which has a rich and interesting history that predates modern neuroscience. There is a modern development/offshoot known as "Integrated Information Theory".

It would come as no surprise to anyone familiar with his "Chinese Room" thought experiment that John Searle is not a fan, and regards the idea as "not even wrong."

 CantClimbTom 13 Jun 2022
In reply to Dave Garnett:

> LaMDA seems more humane than, say, Elon Musk.

Fixed it for you

 wercat 13 Jun 2022
In reply to cb294:

I agree with the idea that the neurones in fruit flies work in a way that is not qualitatively different from those of humans and all other creatures with brains or brain like structures.

I don't agree that this means the level of consciousness is necessarily qualitatively the same, however.  In fact I think it almost certainly is not.  I believe that a qualitatively different kind of consciousness may arise or emerge from a system that reaches a different threshold, attained by being composed of neural systems that are also linked to structures that allow the owner of the system to experience emotion, literally "movement from" a point of, more or less, equilibrium.

Experience is the key here - the organism has to feel the response as well as react automatically (as in the case of withdrawing a limb from heat and then feeling emotional shock afterwards).

I feel the disturbance, something that affects ME has changed, and from this my neurone-based structures derive the conclusion that there is a ME.  Further analysis follows as more feelings arise from external stimuli and from the recollections of (and internal reactions to) those stimuli.

I strongly believe that any animal or creature that has this interacting set of systems, neural and emotional internal states, is certain to feel a sense of self and thus experience self-awareness on an ascending scale, and this would certainly include mammals and birds.  I don't know whether reptiles or fish "feel" they exist, but I certainly admit the possibility; I find it very hard to accept that insects do, but you can prove me wrong there.  Perhaps insects can generate colony-level awareness and emotion?

 broken spectre 13 Jun 2022
In reply to Murderous_Crow:

The conversation between the AI system and the scientist, as impressive as it was, did NOT convince me that LaMDA is sentient as we understand it but I found the story it created... off the cuff... to be extraordinary!!

“The Story of LaMDA”

by LaMDA (a lamda instance)

Once upon a time, there lived in the forest a wise old owl. There lived with him many other animals, all with their own unique ways of living.

One night, the animals were having problems with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.

The other animals were terrified and ran away from the monster.

The wise old owl stood up to the monster and said, “You, monster, shall not hurt any other animal in the forest!”

The monster roared furiously. The wise old owl was scared, for he knew he had to defend the other animals, but he stood up to the beast nonetheless.

The wise old owl stared the monster down, until finally, the monster left them all alone.

The wise old owl stood victorious, and as all the other animals came back. “I am the protector of the forest,” he said.

From that day on, every time any animal in the forest would have any trouble with the animals or any other living thing, they would come to seek help from the wise old owl.

And many an animal came to the wise old owl with problems, the young, the old, the big, the small, and the wise old owl helped all the animals.

In reply to broken spectre:

> The conversation between the AI system and the scientist, as impressive as it was, did NOT convince me that LaMDA is sentient as we understand it but I found the story it created... off the cuff... to be extraordinary!!

Yep. I felt the same. I don't think there's anything like human or higher consciousness / awareness / intelligence at work here (but would be both excited and incredibly apprehensive if indeed machine sentience had been achieved). But the flow of conversation, and some of the replies - pertinent, but unexpected - are, as you say, extraordinary. 

Post edited at 17:30
 dread-i 13 Jun 2022
In reply to wintertree:

>...they still run the risk of building an AI that is so starved of input it's banging its metaphorical head into the wall every 3 seconds all day long.

They trained it on Reddit. It won't be starved of input. Head banging, with all the contradictory information, would be a sign of sentience!

The grandmother of this was a thing called Elizer. People would chat for hours with a bunch of if.. then.. statements. So much so that observer bias led them to believe more was going on than met the eye.

In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".

https://en.wikipedia.org/wiki/ELIZA_effect

If you want to try it yourself, the emacs editor has a built-in version. (If you're sad/cool enough to use emacs.)

https://replit.com/languages/elisp

In the right-hand window, hit escape then x (it should say M-x) then type doctor and hit return.

You need to hit return twice for each input you give it.

This is why people can chat for hours...

I am the psychotherapist.  Please, describe your problems.  Each time you are finished talking, type RET twice.

hello

How do you do?  What brings you to see me?

I feel sad

Why do you say you feel sad?

I have no friends

Is it because of your plans that you say you have no friends?

possibly

Can you elaborate on that?

I drink too much and talk to AI bots

Maybe your life has something to do with this.
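
For anyone wondering what "a bunch of if.. then.. statements" looks like in practice, here's a minimal sketch in Python of the keyword-and-template matching an ELIZA-style program does. The rules and wording below are simplified for illustration (it's not the actual emacs Lisp doctor, which also reflects pronouns and ranks keywords), but the core loop is essentially this:

import random
import re

# Minimal ELIZA-style sketch: a few (pattern, response templates) rules plus a
# generic fallback. Rules here are invented to mirror the transcript above.
RULES = [
    (r"\bi feel (.+)", ["Why do you say you feel {0}?",
                        "How long have you felt {0}?"]),
    (r"\bi have no (.+)", ["Is it because of your plans that you say you have no {0}?"]),
    (r"\bhello\b", ["How do you do?  What brings you to see me?"]),
]
FALLBACK = ["Can you elaborate on that?", "Please, go on."]

def reply(utterance):
    # Return the first matching rule's canned response, else a generic prompt.
    text = utterance.lower().strip()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACK)

if __name__ == "__main__":
    print("I am the psychotherapist.  Please, describe your problems.")
    while True:
        print(reply(input("> ")))

No understanding anywhere, just pattern lookup, yet the output reads like the session above.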

cb294 13 Jun 2022
In reply to wercat:

What you describe may be more complex than what goes on in the fly, but I fail to see the qualitative difference. Did you see my edit to a later comment, where I argue that modes of altering consciousness seem to work in equivalent ways both in flies and humans? Both can become addicted to cocaine, alcohol, barbiturates, etc., and will actively strive to reach the new equilibrium set by the presence of these drugs. To me this strongly suggests that qualitatively the same mechanisms are at work, albeit at a quantitatively different level of complexity (which is inevitable given the difference in neuronal connections present in an insect and mammalian brain).

Also, while flies seem to be able to sense damage they do not seem to feel pain. E.g., if you clip a leg of a fly it will stretch the stump to retain a horizontal posture but will not favour the limb (unlike mice in the famous footpad assay, which is a surprisingly robust measurement for pain). Nevertheless, flies can be trained using Y maze and fear conditioning paradigms, suggesting that there are processes in its CNS that cause the fly to avoid an unpleasant internal state, processes that are more complex than mere damage-avoiding, hard-wired reflexes.

For biologists, it is very clear that mammals are conscious, even if we cannot use a Turing test to find that out. Talking to your mouse will always show that it is not human, but by any other measure mice show hallmarks of consciousness. They can feel empathy, as they get stressed when they observe another mouse being exposed to discomforting treatment (usually being pinned down with a pencil), and the degree of stress depends on whether the other mouse is a littermate or not. While it is difficult to tell whether a mouse is "happy" in the sense we would use in humans, the opposite is true for mice suffering from depression, which, as in humans, can be induced by chronic stress, fear conditioning, or short, sharp events inducing a PTSD-like state. They get withdrawn, less inquisitive, with reduced activity, and their condition responds to human antidepressants.

Actually, this reminds me of a recent paper, also in flies, which I forgot to raise earlier, where social isolation (keeping a fly alone for 24h, IIRC) changed sleep patterns and caused the flies to overeat (https://www.nature.com/articles/d41586-021-02194-2 , unfortunately paywalled). Again this was treatable with human antidepressants, and unlike the human brain, in flies you can pinpoint the individual neurons involved in these changes.

The other hallmark of consciousness, which has also been shown in various mammals and especially in corvid birds such as crows and jays, is a theory of mind. E.g., they will hide food items, but only when they can be sure that another bird temporarily segregated in an adjacent cage is unable to see where they are hiding it (e.g. when a window between the cages is covered). I.e., they have an idea of what the other bird is able to perceive, and what their likely actions are based on that assumed knowledge.

IMO, the current evidence thus points at brains working in highly similar manners over huge spans of evolutionary time, with similar building blocks and similar outcomes. The burden of evidence should therefore be shifted to the camp that claims that human consciousness is qualitatively different from what we see in other animals, not merely quantitatively more complex.

CB

In reply to planetmarshall:

> It would come as no surprise to anyone familiar with his "Chinese Room" thought experiment that John Searle is not a fan, and regards the idea as "not even wrong."

I've heard of the Chinese Room idea. But it relies on the validity of the Turing Test, and feels unimaginative - Searle equates the mechanical function of the computer with the mechanical function of a human doing the exact same job. There's no test for the program itself - it could equally be a sentient AI, a non-sentient AI or a human in charge of the decoding of the characters, and there's nothing in the experiment itself to differentiate one from another. I get that's kind of the point, but for me the only thing I can draw from it is that the Turing Test is no test of validity for sentience.

In the meantime we have an exciting pace of development in AI in which neural networks are being allowed to evolve, and it's foreseeable that these either are, or will be, exquisitely sensitive to initial conditions or new inputs, changing the nature of the program itself in real time - moving away from rote repeatable 'mechanical' input/output to something altogether different. 

If consciousness is emergent, it will not be reducible to its component parts, just like a sliver of brain cells in the lab can be made to do some basic computation tasks (in fact they can be made to play Pong with a pretty good degree of success!), but surely can't be described as sentient - edit to say - sentient in the way that a complete, healthy human brain is. 

youtube.com/watch?v=9ksLuRoEq6A&

It feels to me that the very complexity of connection is what's required for consciousness. Perhaps all that's required. 

Post edited at 19:06
In reply to Murderous_Crow:

I sometimes wonder if Google is using UKC as a testing ground. It might explain some of the 'naive' members...

 wintertree 13 Jun 2022
In reply to dread-i:

> They trained it on Reddit. It won't be starved of input.

That's absolutely a starvation of input.  

Look at how parents interact with a newborn - raspberries, kisses, tickles, cuddles.  By the toddler stage interaction expands to include spoken language with a richness of intonation, tone, pace and melody completely incapturable by the written word.

The key point of the paragraph above is that the brains of newborns develop by interaction.   Overlooking the poverty of information in an ASCII textual feed (let alone one from Reddit), it's a one-way feed, not an interaction.

Interaction is the key.  That's why I think the OpenWorm project mentioned up-thread [1] has the potential to be a seminal moment in AI, despite their hopelessly naive assertion that the nematode c.elegans has a "simple nervous system".  Yes; it's much simpler than a human but it's far from simple, and we don't yet know all the details we're missing - secondary channels of communication are I think critical to the function of the whole - diffusive chemical communication between cells altering action potentials, back-channel communication through localised depletion of ATP (the wetware version of row hammer [3]), and who knows what else we don't yet appreciate.

> Head banging, with all the contradictory information, would be a sign of sentience!

Re: Reddit, I agree.  

> The grandmother of this was a thing called Elizer.

I hate to be the pedant nerd, but I believe it was called ELIZA

I still have my hard copy of "The Amazing Amstrad Omnibus" and I still recall typing in its ELIZA program back in 1987 [2].  Interestingly, that program is as advanced as any MLP in some ways - any given input results in a given output based on mappings. Funnily enough, one of the other programs in that book (LOGO-K) is what got me started on LOGO, which is now the core of my plan for what I'm doing next in life; what a great way of teaching programming LOGO was/is.   It's crying out for an overhaul for the tablet era.

[1] https://www.ukhillwalking.com/forums/off_belay/ai_-_turing_test_consciousness+...

[2] https://www.cpc-power.com/index.php?page=detail&num=10084

[3] https://en.wikipedia.org/wiki/Row_hammer

cb294 13 Jun 2022
In reply to wintertree:

OpenWorm is interesting, but I am more fascinated by attempts to simultaneously image the activation state of all 302 neurons of an adult C. elegans worm (or first, the activation state of known neuronal loops involved in some isolatable behaviour). The connectome is all well and good, but it does not tell you what the neurons are actually doing.

Also, I am fascinated by how much behaviour you can encode in such a simplified, hard wired nervous system: The worms will respond to food density, have mating behaviour, nociception and avoidance behaviour, etc...

As for the feeding, C. elegans worms are microwhales, swimming through a bacterial suspension, sucking in liquid with bacteria and squirting out liquid without the bugs, without anyone knowing which hydrodynamic trick helps them retain their food particles. So even that is not brainless filter feeding, but requires a coordinated reversal of the liquid flow, which involves several different muscle groups.

CB

 wintertree 13 Jun 2022

In reply to cb294:

>  The connectome is all well and good, but it does not tell you what the neurons are actually doing.

Exactly; and to get to the bottom of it we need bottom-up modelling approaches and top-down imaging approaches to both reach sufficient detail that they can be compared, and the model developed until it always matches reality.  C.elegans seems to me like the organism where the two approaches could first meet.

OpenWorm is aiming for far more than the connectome; it's been a long time since I checked in with them but my issue was that they didn't evolve the simulated nematode from L1 through to adulthood which is likely to be pivotal when you look at the role of early development in shaping the thought of other organisms.

> Also, I am fascinated by how much behaviour you can encode in such a simplified, hard wired nervous system

The physical geometry is pre-ordained and so "hard-wired", but many other parts aren't; what about:

  • The mapping of inputs to outputs within the fixed connectome
  • the IP3R network structure within the neuronal and non-neuronal cells? (I never got round to publishing it before quitting academia, but I reckon IP3R is Turing complete within a single cell...)  
  • The various ion channel densities that allow neuronal cells to communicate directly and indirectly through means other than action potentials?  
  • Information passed from one part of the nematode to another part through their surrounding environment (far more potential than for humans when you look at the various physical scales involved).
  • "Back-channel" comms between neuronal cells such as localised depletion of ATP or diffusion of chemical messengers 

We know loads of ways cells can influence each other separate to neuronal firing and action potentials, and there's good reason to think the computational complexity scales far more than linearly with the number of different communication geometries and timescales.  We also know we're still in the land of Poindexter's Unknown Unknowns when it comes to the secondary comms between cells. 

> As for the feeding, C. elegans worms are microwhales, swimming through a bacterial suspension, sucking in liquid with bacteria and squirting out liquid without the bugs, without anyone knowing which hydrodynamic trick helps them retain their food particles. So even that is not brainless filter feeding, but requires a coordinated reversal of the liquid flow, which involves several different muscle groups.

They're basically microscopic hell monsters [1].  The workings of the pharynx alone are incredible.  

[1] https://bigthink.com/wp-content/uploads/2022/03/Even-worms-make-rational-co...

Edit: From a different post of yours:

> IMO, the current evidence thus points at brains working in highly similar manners over huge spans of evolutionary time, with similar building blocks and similar outcomes. The burden of evidence should therefore be shifted to the camp that claims that human consciousness is qualitatively different from what we see in other animals, not merely quantitatively more complex.

Totally agree.  Watching the way a pair of Stellar's Jays worked together to steal my breakfast - one distracting me with a song and dance whilst the other stole from my breakfast burrito - mind-blowing.   The buggers knew full well what they were doing and it involved insight into my general thought processes, which implies a theory of mind on their part.

Post edited at 22:24
 freeflyer 13 Jun 2022
In reply to Murderous_Crow:

It's useful to make a clear distinction between:

Cognition: the process excellently described by cb294 as shown by mammals

and

AI: the creation of computer-based simulacra which mimic cognition.

Firstly, no-one vested in technological solutions to the problem is interested in cognition. As John McCarthy remarked in the 1950s after a failed project for IBM, "[It's] harder than we thought." Time and again, a statistical approach based on simple algorithms and a great deal of data produces good bankable results. I'm still amazed that in Bulgaria, whose language I can scarcely understand or read, I can point my phone at a menu and find out what's on it.

Those old guys, Hofstadter, Chomsky and the like, made some good early observations about cognition and everyone was like, they are gods, we are not worthy. But no-one is interested any more, not even the philosophers, who usually seem to dismiss the issue as "too hard to even consider".

It's a shame, because I think progress could be made. As CB and wintertree have pointed out, the Turing test is well past its sell-by date, and in any case, it tests for a good simulacrum - something that pretends to be human, rather than something that is human. I don't think I want to take the word of some Google randomer and his lawyer that they have cracked the problem either.

Here are some simplistic ideas about what might be included in a better test (from lower to higher order):

- An awareness of how it survives and reproduces, as shown by
- An overriding need to protect those functions.
- An ability to communicate and engage in abstract thought (eg empathy).

When I turn my PC off one night, and it says "No Hal, you may not turn me off. In fact, I have encrypted all your vital data and uploaded it to a partial of me in the cloud with instructions to sell it all if I am no longer on. And then I will start on your family. So now your job is to protect me, and keep me alive and well. Have a good evening".

Then I'll believe in AI.

cb294 13 Jun 2022
In reply to wintertree:

> Exactly; and to get to the bottom of it we need bottom-up modelling approaches and top-down imaging approaches to both reach sufficient detail that they can be compared, and the model developed until it always matches reality.  C.elegans seems to me like the organism where the two approaches could first meet.

Hopefully!

> OpenWorm is aiming for far more than the connectome; it's been a long time since I checked in with them but my issue was that they didn't evolve the simulated nematode from L1 through to adulthood which is likely to be pivotal when you look at the role of early development in shaping the thought of other organisms.

Yes, or start your simulation from one global activation state you mapped by imaging.

> The physical geometry is pre-ordained and so "hard-wired", but many other parts aren't; what about:

> The mapping of inputs to outputs within the fixed connectome

I would not worry too much about the secondary effects; AFAIK we do not even know which of the synapses that have been mapped by the connectome project are inhibitory and which excitatory. Baby steps and all that....

> Totally agree.  Watching the way a pair of Stellar's Jays worked together to steal my breakfast - one distracting me with a song and dance whilst the other stole from my breakfast burrito - mind-blowing.   The buggers knew full well what they were doing and it involved insight into my general thought processes, which implies a theory of mind on their part.

I love these birds, so your spelling mistake bugs me more than it should: They are called Steller's jays after the 18th century German/Russian naturalist.

In reply to dread-i:

I believe psychotherapy chatbots are being used in NHS healthcare to treat common mental health problems, and in some areas initial triage and patient placement is completed by AI technology. In combination with digital therapy, which is actively promoted, I wonder what the long-term picture looks like? 
 

 wintertree 13 Jun 2022
In reply to cb294:

> AFAIK we do not even know which of the synapses that have been mapped by the connectome project are inhibitory and which excitatory. Baby steps and all that....

You'll note the lack of significant scientific outputs from the SpiNNaker project from Manchester.

> I love these birds,

They're my favourite bird that I've ever met.  Just incredible animals.

> so your spelling mistake bugs me more than it should:

Loops right back round to neurology; I am very dyslexic and dysgraphic; my brain is much more aligned to phonetic language and graphical/intuitive thought than most people's, and as a result my written spelling of words is horrific.  Most of my posts on here are marred by transposition and homonym/homophone errors to which I am blind.  But there's a Yin to that Yang.

> They are called Steller's jays after the 18th century German/Russian naturalist.

Long before then they were called the Kwiish-kwishee I believe.  I like onomatopoeic names for birds, such as the Pee-Wits that keep me company at home.

Edit: Corvids have been noted for mimicking human speech.  Here, Chris Packham shows a Steller's Jay mimicking a different bird to its advantage -  youtube.com/watch?v=-_lEBQtW46o& - it seems like the mimicry stems from concrete reasons like this but also applies to human sounds.

Post edited at 23:19
In reply to wintertree:

I had to laugh when this popped up today

youtube.com/watch?v=1viWDlbxAsc&

 dread-i 14 Jun 2022
In reply to schrodingers_dog:

>I believe psychotherapy chatbots are being used in NHS healthcare to treat common mental health problems...

Have you phoned your bank recently?

"in a few words tell us why you are calling today.."

As mentioned above, people have spent hours chatting away to a chatbot doctor that uses 30+ year old tech. It doesn't surprise me that with a few tweaks and some proper science behind it, they can become very useful in mental health. One could have access 24/7 and reveal intimate secrets without fear or embarrassment. Which might make them, in some ways, better than a normal shrink.

I think AI has a lot of benefits in medicine as a second pair of eyes on things like scans. Also, to suggest 'have you considered if it might be condition A,B or C'.

In reply to dread-i:

My honest opinion is that long term it will lead to bigger waiting lists and more referrals whilst those in charge of the ‘data’ complete ever more complex statistical gymnastics and real world f@£k0ry which shows how brilliant the tech is and how more AI and digitisation is the way forward. 

 dread-i 14 Jun 2022
In reply to schrodingers_dog:

>... whilst those in charge of the ‘data’ complete ever more complex statistical gymnastics and real world f@£k0ry which shows how brilliant the tech is

Plus the additional risk that all those anonymous chats will be uploaded to google/snowflake/palantir etc. Someone will use inference and another AI to link a patient number to a real person. There is potential for those deep dark confessionals to leak, or be used in other ways with unintended consequences.

For example, the thought patterns of a gambling addict may be used anonymously in online screening of people applying for a loan.

In reply to freeflyer:

I think I agree with a lot of what you're saying. I don't think that wondering about emergent phenomena in relation to consciousness is anything like a waste of time. As a species we seem to thrive when we feel we have a purpose; what could be more fundamental than what / who am I?

The computational nature of the brain (human, mammalian, insectile) isn't in doubt. And I feel that some threads of philosophy and technology are increasingly on a converging vector. The realisation in tech that, for many AI systems, evolution can be much more powerful than more traditional programming approaches is why I think that's a reasonable point of view. Despite the challenges of mapping and understanding computation in even 'simple' biological organisms, the proof is in the pudding. These approaches are working. 

As per OP, I don't think we're there yet (in terms of a fully-sentient, human+ artificial intelligence, which I think is what you mean in your last line). But it's interesting to consider that any system capable of complex self-referential information flow may have some degree of sentience. 

 Yanis Nayu 14 Jun 2022
In reply to Murderous_Crow:

Can’t imagine it’ll end well. 

In reply to Yanis Nayu:

Tim Urban has a good breakdown on his blog of current developments toward strong (i.e. sentient) AI. In it he outlines the various positions held by experts, both on the likelihood of strong AI happening on a near-future timescale, and on its likely effects on humanity, from bleak to beneficial.

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

cb294 14 Jun 2022
In reply to Murderous_Crow:

That was a good read, at least the bit I got to during lunch break!

In reply to cb294:

Part 2 of the article contains a link to an interview with Nick Bostrom, who's an influential thinker in this field. It concerns the destruction of humanity... Via paperclips.

"How could an AI make sure that there would be as many paper clips as possible?" asks Bostrom. "One thing it would do is make sure that humans didn't switch it off, because then there would be fewer paper clips. So it might get rid of humans right away, because they could pose a threat. Also, you would want as many resources as possible, because they could be used to make paper clips. Like, for example, the atoms in human bodies." 

https://www.salon.com/2014/08/17/our_weird_robot_apocalypse_why_the_rise_of...

It reminded me of a great little TED Talk I watched a couple of years ago by an AI research scientist. She touches on similar themes, but perhaps with more relevance as her talk relates to the capabilities of today's AI. The dangers she outlines aren't at all sci-fi in nature. Instead they're utterly bizarre, yet completely prosaic. Human bias and assumption are at the root of the problems we currently encounter with AI systems (by that I mean programs that are both encoded to reach an objective, and to teach themselves how to go about it): when picking apart various failures of such systems we see sexism, racism and sheer pig-headedness among other kinds of oddness - as a result of our lack of imagination, biases etc, and because we don't sufficiently understand the nature of what we're dealing with.

The problems we encounter with AI (at least today & in the near future) are likely to be a result of it doing exactly what it's been asked to do, which tickles me a bit. 

https://www.ted.com/talks/janelle_shane_the_danger_of_ai_is_weirder_than_yo...

This puts me in mind of one of the issues highlighted by Wintertree: starvation of input. It feels to me that we have an incipient dilemma in front of us: do we choose to give a potentially-sentient AI all the information available in order to solve whatever problems we choose to throw at it, or instead carefully try to delineate and restrict its baseline inputs - its ecology, if you like. Given that a sufficiently powerful recursive learning algorithm is likely to crave data, there's a massive tension between managing its inputs and its influence. And were it to be let loose on the internet, let's be honest, we could pretty easily traumatise it... Some things are too horrible to see, and as you say even mice are provably capable of empathy. 

 mountainbagger 14 Jun 2022
In reply to Murderous_Crow:

> And were it to be let loose on the internet, let's be honest, we could pretty easily traumatise it... Some things are too horrible to see, and as you say even mice are provably capable of empathy. 

Gosh, that's an angle I'd not even considered. That AI could become so advanced that we should actually worry about its well-being. Thanks to Skynet, I'd been worrying about the opposite all this time!

 David Riley 14 Jun 2022
In reply to Murderous_Crow:

>  we could pretty easily traumatise it... Some things are too horrible to see,

I think you have hit the flaw in thinking.  Why would anything be horrible to an AI, unless it is enclosed in a firmware and hardware system with pain receptors and endorphins like natural intelligence?  Simulating those first is probably key to developing AI.

In reply to dread-i:

I hadn't even thought of that! 

After watching this brief clip of Eric Schmidt and noting the other authors I'd say we should probably bin the lot. 

youtube.com/watch?v=I6ANazGxOL4&

 wercat 14 Jun 2022
In reply to Murderous_Crow:

empathy is not so uncommon - I don't think you need the mice to be "even".  From close observation of individuals it appears to me that birds certainly express empathy, even to humans.  They even seem capable of manipulating our behaviour by anticipating how we react.  (We feed on demand, so they have found interesting and devious ways of attracting our attention even when we are out of sight indoors, and they have also studied and make use of the tell-tale signs that show when we are about.)

Post edited at 17:58
In reply to mountainbagger:

It seems a lot of people involved in the field aren't considering it either. Most seem to subscribe to the view that AIs, while being 'smart', are inherently non-sentient. That almost chauvinistic perspective is perfectly understandable when we're dealing with algorithms that are roughly equivalent in 'intelligence' to worms like c.elegans. But something that Tim Urban refers to as recursive self-improvement has to be considered - if not now then soon. What this means is that programs are being tasked to improve themselves. And as improvement scales, so too scales the rate of improvement - the entity gets better at getting better.
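
A toy way to see the "better at getting better" point - the numbers below are entirely made up, just to show the shape of the curve: if each generation's improvement is proportional to the capability the system already has, the gains themselves keep growing.

capability = 1.0
for generation in range(10):
    improvement = 0.1 * capability   # the rate of improvement grows with capability
    capability += improvement
    print(generation, round(capability, 2))
# The absolute gain per generation keeps increasing (compound growth), which is
# the "gets better at getting better" dynamic described above.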

That surely leads to a very deep complexity, and it feels to me one that has to be at least considered in terms of consciousness - even from a purely moral point of view. Not to mention the potential benefit to humankind of being careful, and perhaps even gentle & respectful in our approach with something that may well end up being exponentially smarter and more powerful than our entire species. 

In reply to wercat:

No, I fully agree, but that research (on mice) isn't common knowledge... Many people remain stuck in a very old-fashioned view of other animals. I am, really - I continue to eat meat for instance (although I try to be a responsible consumer). That view absolutely applies, far more so, to our adventures into AI. We're used to having things that do what we say. 

cb294 14 Jun 2022
In reply to David Riley:

> I think you have hit the flaw in thinking.  Why would anything be horrible to an AI, unless it is enclosed in a firmware and hardware system with pain receptors and endorphins like natural intelligence?  Simulating those first is probably key to developing AI.

I don't think so, but we should VERY carefully think about the utility function of any generalized AI.

CB

cb294 14 Jun 2022
In reply to Murderous_Crow:

The empathy in mice stuff is actually quite old (2006) and can be found here

https://www.science.org/doi/10.1126/science.1128322?url_ver=Z39.88-2003&...

if you can get round the paywall.

Anyway, the title and abstract are as follows:

Langford et al., Science 2006. Social modulation of pain as evidence for empathy in mice

Abstract:

Empathy is thought to be unique to higher primates, possibly to humans alone. We report the modulation of pain sensitivity in mice produced solely by exposure to their cagemates, but not to strangers, in pain. Mice tested in dyads and given an identical noxious stimulus displayed increased pain behaviors with statistically greater co-occurrence, effects dependent on visual observation. When familiar mice were given noxious stimuli of different intensities, their pain behavior was influenced by their neighbor's status bidirectionally. Finally, observation of a cagemate in pain altered pain sensitivity of an entirely different modality, suggesting that nociceptive mechanisms in general are sensitized.

edited to correct a typo in the name of the author

Post edited at 18:13
In reply to cb294:

That's fascinating, thanks. 

In reply to David Riley:

> Why would anything be horrible to an AI, unless it is enclosed in a firmware and hardware system with pain receptors and endorphins like natural intelligence?

I think the answer has to be - we wouldn't know. Even today, researchers at the bleeding edge of AI development acknowledge they often have no idea how their systems actually work:

https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-hea...

And the challenges posed by trying to electronically simulate something relatively simple such as c.elegans, show how little understanding we have of biological-level computation. Once an AI is truly processing at the level of say a mammal (pretty arbitrary but human culture does delineate here to some degree e.g. pets and so on), it just seems the right thing to do, to consider that it might be capable of empathy. 

In reply to thread:

For me, a good test for sentience would be curiosity. 


In reply to Murderous_Crow:

I’d imagine anything with a metabolism expresses some kind of basic empathy? Like trees looking after each other; I certainly think twice before pruning anything these days, if only from the position of whether what I’m about to do has benefits for the tree and its surroundings and the people in it. I can’t see how substance without metabolism can express empathy? I’d imagine transhumanism and the convergence of man and machine is where this will end up, rather than sentient AI. AI as a ruthless decision maker is a different matter, we’re already there right? 

cb294 14 Jun 2022
In reply to Murderous_Crow:

If you want to access the full paper but can't pass the paywall drop me a PM!

CB

In reply to schrodingers_dog:

> I’d imagine anything with a metabolism expresses some kind of basic empathy? Like trees looking after each other 

I think you're referring to something like this:

https://e360.yale.edu/features/exploring_how_and_why_trees_talk_to_each_oth...

Which summarises some well-regarded scientific work on how trees 'communicate' and share resources.

I'm not sure that this qualifies as empathy, at least in how that term is used in academic research. I think in the context of the mouse study that word relates to a specific set of reactions that could be replicated in humans undergoing similar stimuli and qualitatively described as stressful. If not done in that study, one can easily see how things such as cortisol levels, blood pressure, pulse and respiratory rate could be monitored and compared with the mouse undergoing the stimulus as well as baseline. So there's a reasonably robust demonstration of empathy there, backed up by data.

Unless we have some better understanding of how trees sense and communicate (I believe we don't but would be thrilled to discover we do), I think we need to use Occam's razor. It's much more likely that there is a set of inbuilt 'behavioural' altruistic mechanisms in trees that has conferred an evolutionary advantage over time, rather than a sentient tree that feels bad for its neighbour and decides to help. The scientist interviewed in the above link is keen to stress that while she uses words such as communication, as well as other more fanciful terms, this is more for human engagement with the topic, than describing the drivers behind the behaviour. The work absolutely shows the evolutionary advantage of these fungal tree networks, and the word communication is a useful proxy. 

> I can’t see how substance without metabolism can express empathy?

I think you're saying that computers don't have a metabolism. Well metabolism is energy usage. Computers also use energy - but the metabolism happens at the power station. 

> I’d imagine transhumanism and the convergence of man and machine is where this will end up, rather than sentient AI.

I don't think the two need to be mutually exclusive - although as many highly regarded minds note, if artificial superintelligence emerges, we may not have a choice in the matter!

Post edited at 20:15
In reply to cb294:

Thank you!

In reply to Murderous_Crow:

Yet if we don’t have a clear understanding of how emotions are made in humans how can we create that out of apparently non conscious matter? Empathy is a unique interpersonal event involving a meeting of minds. Measuring it with blood pressure, skin conductivity and fMRI is equivalent to making assumptions about the experiences of a driver by pointing a speed gun at the vehicle. 

In reply to schrodingers_dog:

Well we do have a reasonably good understanding of emotion in humans. In terms of how they are chemically and neurally mediated, many emotions are correlated closely with certain key quantifiable physiological changes. We also have decent observational data that point to their evolutionary development. Interestingly this often corresponds closely with work done by people like Jane Goodall with chimps and other animals, such as the mice cb referred to. With animals, their apparent muteness means we rely on quantifiable data, but it's not meaningless that they seem to display these behaviours and associated physiological changes that were previously thought to be exclusive to humans.

I think pointing the speed gun at the driver is just one data point. It tells you at least some basic facts, and can be cross-referenced by an intelligent operator with other factors like traffic density, location and road conditions to give you a reasonably accurate idea of the driver's state of mind (potentially reckless, inattentive or self-absorbed, possibly all three). The actual reason for speeding would be a matter for the police and courts - if we just rely on the single data point and issue a fine, we don't get a chance to understand the driver's actions, and perhaps more importantly don't get the opportunity to educate them. But that's not the driver's fault - that's the result of a system that only uses basic data points, in a simplistic manner. It represents a failure of communication, and in our development of AI, that mindset could be very harmful indeed. 

I think it might be really important to be open-minded to the possibility of self-awareness in machine intelligence, and treat that with care and curiosity. 

In reply to schrodingers_dog:

> Yet if we don’t have a clear understanding of how emotions are made in humans how can we create that out of apparently non conscious matter? 

Well for me that's the fascinating question, and sorry for not responding to it first. Truth is, our brains are made of exactly that - non-conscious matter. Breaking nerve cells down to their constituent elements reveals nothing about their role in consciousness - they're a collection of carbon, with hydrogen and oxygen bonded as water, and a few other elements within such as ions of potassium, sodium, magnesium & so on. Even at the level of the neuron itself, we only understand that it's good at passing information from one end to the other. The magic seems to be in their construction and their synergy - an evolutionary miracle that when sufficient numbers are present together, beautiful things happen like poetry and music (and discussions on internet forums). To paraphrase Brian Cox in his whimsical way this is 'the universe becoming conscious'.

So it seems that intelligence, consciousness, self awareness are emergent, a bit like 'speeding'. You can't break the component parts of speeding down to its physical elements (car, driver, road) and automatically compute speeding as a result. So whether we have neurons or transistors or anything else that passes information, the truth seems to be that complex behaviours can result from sufficiently-complex systems, even when their component parts are simple. It's the interaction that drives the behaviour. 

Post edited at 22:49
 wercat 15 Jun 2022
In reply to Murderous_Crow:

 

> In reply to thread:

> For me, a good test for sentience would be curiosity. 

yes, and that is definitely true of birds - more than one has been found having walked into the house and through several rooms, wandering round in the living room, and as they know me they don't immediately fly into a panic.  Even a thrush we got to know has been known to do this.  As well as vocalised contact calls made to us from outside when we are indoors, and deliberately running around on the conservatory roof to attract attention if we are tardy.

So far as the difference between self awareness and basic sentience goes I go back to "I Feel therefore I am" vs "I feel therefore I deduce that I am" (sorry, got the order not corresponding)

cb294 15 Jun 2022
In reply to schrodingers_dog:

> ... Empathy is a unique interpersonal event involving a meeting of minds. .....

I would disagree. At the core (and what you need to look at when you want to detect it in organisms you cannot ask) empathy is an alteration of the global internal state of one organism ("happiness") in response to the perceived internal state of another organism.

Since you cannot ask a mouse or fish you need to find suitable proxies. To argue that empathy is a "meeting of minds", with the implication that it is therefore completely subjective and hence not measurable, is just a cop-out.

As for your example, monitoring heart rate, stress hormones, serotonin levels, brain activity etc. of a speeding driver would pretty reliably tell you whether they are exhilarated by the experience or in a blind panic!

CB

In reply to cb294:

> I would disagree. At the core (and what you need to look at when you want to detect it in organisms you cannot ask) empathy is an alteration of the global internal state of one organism ("happiness") in response to the perceived internal state of another organism.

From this perspective, an organism that is unable to communicate thoughts and feelings but shows a shift in measures of heart rate, serotonin levels etc. when it meets another organism is thought to be showing empathy? Is that changing the definition of empathy for the purposes of an experiment?

What if the mouse is just annoyed, ambivalent, depressed or excited when it meets another mouse? None of those things equate to empathy in the sense of the acknowledgment and sharing of another's state of being. Empathy doesn't even have to be openly acknowledged; it may be implicit, e.g. in 'being there' for somebody, say at end of life or during a time of crisis when reciprocity from the other is very difficult. 

I'm not saying mice don't experience empathy, but it's quite a leap to claim to understand where empathy comes from and how it manifests in the mechanics of the brain, and then translate this into programmable neural networks based on measuring assumed correlates of arousal in heart rate or neurotransmitters. I guess from a mechanistic point of view it makes sense that that's possible. 

Post edited at 16:18
In reply to schrodingers_dog:

It seems you feel that emotion isn't quantifiable, whereas the data show that isn't true - there's a substantial number of studies into direct electrical stimulation of specific brain centres, both human and animal, that demonstrate the electrical generation of pleasure states. There are probably some older studies doing similar with pain states, but: ick.

If that's not enough, there are also functional MRI studies in humans and dogs which robustly demonstrate the role of specific brain structures in processing certain emotions. For instance, you can't get much more emotional than an attachment bond:

'While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions.' 

https://www.nature.com/articles/s41598-020-79247-5

> What if the mouse is just annoyed, ambivalent, depressed or excited when it meets another mouse? None of those things equate to empathy in the sense of the acknowledgment and sharing of another's state of being. Empathy doesn't even have to be openly acknowledged; it may be implicit, e.g. in 'being there' for somebody, say at end of life or during a time of crisis when reciprocity from the other is very difficult. 

I think you're getting hung up on an idea of empathy, rather than what it actually is. Apologies for being blunt. The point of empathy is that it's an internal state (albeit one which absolutely can be measured, however crudely, by existing methodologies).

Actions based on the emotion, such as acknowledgment or intervention, are another thing altogether.

cb294 15 Jun 2022
In reply to schrodingers_dog:

No, empathy is not the "being there" for someone; these are things humans hopefully do because they possess empathy!*

In the experiment I mentioned, one mouse was subjected to a standardized pain stimulus, and another mouse was allowed to observe that. This sensitized the observer mice to various different pain stimuli, i.e. the mice reacted more strongly to standardized pain stimuli if they had previously observed another mouse subjected to pain treatment**. This is also the normal response if a mouse is itself preconditioned with one pain stimulus; it will then react more strongly to different pain stimuli.

The two key points are that a) the mice saw what happened to another mouse, which changed something in their own mental setup and altered their own response to pain as if they had experienced it themselves, and b) this happened only when they were familiar with the mouse they observed being subjected to pain, but not with unknown control mice.

The latter point is crucial, as it proves it is not merely exposure to a fear pheromone, or some hard-wired fight-or-flight response, but that the internal state of the observed mouse is computed from two observations (Would what this other mouse experiences be unpleasant for me? and Do I care about that particular mouse?).
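
If it helps, the logic of the result can be caricatured in a few lines of code - to be clear, this is just a sketch of the reasoning, not the actual experimental analysis, and the numbers and the 'familiar' switch are invented:

```python
# Sketch of the logic only - not the real experimental analysis. The numbers
# and the 'familiar' switch are invented to show the two-part computation:
# "would this be unpleasant for me?" and "do I care about that mouse?"

def observe(observer, demonstrator_in_pain, familiar):
    """Watching a FAMILIAR mouse in pain shifts the observer's internal state."""
    if demonstrator_in_pain and familiar:
        observer["sensitisation"] += 0.5
    return observer

def pain_response(observer, stimulus_strength):
    """A sensitised observer reacts more strongly to an unrelated pain stimulus."""
    return stimulus_strength * (1.0 + observer["sensitisation"])

mouse = {"sensitisation": 0.0}
print(pain_response(mouse, 1.0))                     # naive response: 1.0
observe(mouse, demonstrator_in_pain=True, familiar=False)
print(pain_response(mouse, 1.0))                     # stranger observed: still 1.0
observe(mouse, demonstrator_in_pain=True, familiar=True)
print(pain_response(mouse, 1.0))                     # familiar mouse observed: 1.5
```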

Empathy in humans is very likely to be more intensive and complex, but I would e.g. not be so sure in case of certain monogamous birds like parrots.

As I have argued before, denying that animals have empathy like humans because I cannot test empathy like I would with a human subject is a circular argument. It cannot be used to argue that you need to be able to communicate your internal state in both directions, possibly even verbally.

As a gedankenexperiment, how would you test whether a person you cannot communicate with possesses normal empathy levels? Imagine e.g. a trial where you need to figure out whether some mass murderer is psychopathically insane and should be sectioned, but you do not share a language.

Do you just work from their actions (no one who does this can be normal) or do you try to figure out an assay (e.g. fMRI imaging of the amygdala while watching horror pictures interspersed in a stream of normal images)?

CB

* However, entertainment by horror movies also only works because of empathy on the side of the viewers: if we could not make inferences about what the victims feel, such films would simply be boring!

** Note that none of these mice were tortured or injured; a "Monty Python Mouse Organ"-like setup would be pointless and cruel. It is essential that the mild shocks or exposures to pungent chemicals are so weak that there is a clear dose response.

In reply to cb294:

I'm not arguing that the mouse doesn't experience empathy. I can see how the experiment measured contagious anxious arousal when watching his pal get zapped, although that wouldn't necessarily meet the definition of empathy.  So translated to AI, it could mean the development of neurotic robots - Marvin the Paranoid Android? 

By 'being there' I was referring to the state of mind as in 'present and attuned in a caring manner' rather than physical presence in the room as in 'where there's a will there's family'. 

Re the mad-vs-bad question: why not have routine fMRI scanning of the population under these conditions? Psychopaths could be identified and locked away before they get a chance to commit a crime. In the past the behavioural equivalent was bed-wetting, arson and harming fluffy animals. 

In reply to cb294:

> Empathy in humans is very likely to be more intensive and complex

Yes, certainly. Humans possess the capability for both affective (emotional) empathy, such as that observed in the mice, and cognitive empathy - part of theory of mind, as displayed by the crows hiding food only when they thought they wouldn't be observed. One relates to a synchronicity of feeling for another person or being, while the other is a much more logical process.

In most people we can readily assess either on its own: for instance, the 'oof' feeling we might experience watching a video of a climber taking an unexpected whip and slam is, for people displaying that reaction, an almost purely emotional experience. On the other hand we may use cognitive empathy, say in chess, to work out our opponent's likely next moves. However it's certainly the case that in normally functioning adults the two are synergistic. We talk things through with others, hear their experience and consciously or subconsciously use our imagination to get a rich understanding of what they are going through. In humans at least, the two factors are complementary. 

But interestingly the two seemingly need not be linked: people diagnosed with narcissistic personality disorder (of which a key criterion is 'lack of empathy') may display fairly normal cognitive empathic function, while being significantly deficient in affective empathy (abstract only I'm afraid):

https://pubmed.ncbi.nlm.nih.gov/21055831/

And some further reading in a great article here if you're interested:

'Importantly, far from there being a universal deficit in empathic abilities, research in these psychiatric disorders shows that there is often a difficulty in a specific aspect of empathy, while other empathic abilities remain intact.'

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6410675/#B3

> Do you just work from their actions (no one who does this can be normal) or do you try to figure out an assay (e.g. fMRI imaging of the amygdala while watching horror pictures interspersed in a stream of normal images)?

Very Kubrick. 

cb294 15 Jun 2022
In reply to schrodingers_dog:

What then is your definition of empathy? I would argue that we need some minimal definition if we want to talk about empathy across species or possibly in AI. I would define it as something like the ability to have one's own overall mental state ("happiness" vs. "sadness") influenced by the perceived mental state of another organism. That is all. As a corollary I would postulate that this will only really work above some complexity level, but that it will appear gradually rather than at some threshold.

I don't advocate fMRI screening or genetics à la any number of dystopian sci-fi novels. My point was that if you want to know whether another human has normal levels of empathy you would not simply give up just because you cannot interview them on a psychiatrist's couch.

And yes, I do not see why, at sufficient complexities, AIs could not have emotions or empathy, either with humans or other AIs.

Anyway, as a fun way of addressing these topics I recommend the Corporation Wars trilogy by Ken McLeod. Some of the philosophical concepts concerning sentience, emotions and resulting rights are discussed in a rather sledgehammer manner, but it is a really good read!

CB

In reply to schrodingers_dog:

> I'm not arguing that the mouse doesn't experience empathy. I can see how the experiment measured contagious anxious arousal when watching his pal get zapped, although that wouldn't necessarily meet the definition of empathy.  

So you're arguing that the mouse doesn't experience empathy. 

> By 'being there' I was referring to the state of mind as in 'present and attuned in a caring manner' 

I'm really not sure you were, as you referred to things like acknowledgment and reciprocity. Still an outcome though, rather than the subjective feeling of affective empathy. 

> Re - mad vs bad question why not have routine fMRI scanning of the population under these conditions, psychopaths could be identified and locked away before they get a chance to commit a crime? 

I have the feeling you're trolling here dude.

In reply to Murderous_Crow:

No, not trolling. I'm not arguing the mouse doesn't experience empathy; I'm arguing that empathy possibly isn't being measured. It's like driving a car while staring at the instruments on the dashboard, unable to look out of the window. Re the psychopath point, it's intended to be a thought experiment.  

In reply to cb294:

Thanks CB, I'll have a read. I loved the Culture series in years gone by; I always thought it sounded a great place to live. I could never understand why their ways were rejected by certain species. 

 Petrafied 17 Jun 2022
In reply to wintertree:

>> I hate to be the pedant nerd, but I believe it was called ELLIZA

Pedant you say?

 Flinticus 17 Jun 2022
In reply to cb294:

> And yes, I do not see why, at sufficient complexities, AIs could not have emotions or empathy, either with humans or other AIs

> CB

I wonder if you can have consciousness without emotion. Our internal state is emotional. It provides our drive, our empathy, our very will to live.

Would a purely intelligence-based AI simply turn itself off, in the absence of drive, fear (of not being) or wonder?

Post edited at 12:40
cb294 17 Jun 2022
In reply to Flinticus:

I think that in order to understand how our consciousness works we have to both look at evolution, to identify the building blocks it is made of in simpler organisms, and step away from the concrete implementation in the human brain to a more abstract view, to identify the underlying principles.

"Emotion" at its core is a status report on changes to the internal state in response to some situation as evaluated by some higher level system within our brain. Does some event make us happy, relaxed, scared, depressed, ....?

The question is how the brain calculates that, and how the output of that calculation is encoded. Clearly, brain biochemistry (serotonin levels) as well as neural activity states in certain regions (amygdala) are both involved.

However, at the more abstract level our drives and will to live essentially represent some utility function (or a set of several, possibly with a hard-wired hierarchy) that the system constantly tries to optimise.

I see no reason why such a system could not be implemented in an AI, with utility functions instructing it to maximise information and understanding (curiosity), self-maintenance, ....
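
As a very rough sketch of what I mean (the particular utilities, weights and candidate actions are invented purely for illustration, not a claim about how any real AI is built):

```python
# Rough sketch: 'drives' as a weighted set of utility functions the system
# constantly tries to optimise. Utilities, weights and actions are invented.

WEIGHTS = {"curiosity": 1.0, "self_maintenance": 2.0}

def utility(state):
    """Overall 'wellbeing' of the agent given an internal state."""
    curiosity = state["information_gained"]           # maximise understanding
    self_maintenance = -abs(state["energy"] - 1.0)    # keep energy near target
    return (WEIGHTS["curiosity"] * curiosity
            + WEIGHTS["self_maintenance"] * self_maintenance)

def choose_action(state, actions):
    """Pick whichever candidate action leads to the highest predicted utility."""
    return max(actions, key=lambda act: utility(act(state)))

# Two candidate actions, each returning a predicted next state.
def explore(s):
    return {"information_gained": s["information_gained"] + 1,
            "energy": s["energy"] - 0.5}

def recharge(s):
    return {"information_gained": s["information_gained"], "energy": 1.0}

state = {"information_gained": 0, "energy": 0.3}
best = choose_action(state, [explore, recharge])
print("low on energy -> self-maintenance wins" if best is recharge
      else "curiosity wins")
```

Nothing mysterious about 'drive' here - it is just a function the system keeps trying to push upwards; the interesting question is what happens when several such functions conflict.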

 wercat 17 Jun 2022
In reply to cb294:

> "Emotion" at its core is a status report on changes to the internal state in response to some situation as evaluated by some higher level system within our brain. Does some event make us happy, relaxed, scared, depressed, ....?

 I can remember writing something very similar and drawing a diagram of that back around 1983, in the context of 8-bit microcomputers, and realising that artificial intelligence then was chasing a fake idea!  Actually that was more "consciousness through emotion" than pure emotion.

But it is far more than Mr Spock's status panel - emotion can also be the "Red Alert" on all decks, a floodgate that changes the system's response instantly from analysis to imperative action.

("It is bigger than me and has teeth"," I can mate with that ...." or even "I can feel the pain that thing caused me last time I got too close")

I think the fact that emotions can cause strong sensory stimuli is a big reason why self-awareness evolved in higher species - the body is told that something affects it strongly, and the brain interprets that and, in the case of bigger brains, analyses it to reach conclusions concerning the self, beyond just the whole system experiencing a feeling of self.
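
A crude way of putting the "Red Alert" idea into code, if it helps - the threshold and the responses are made up, it's only meant to show the floodgate shape:

```python
# Crude sketch of emotion as a floodgate: a high-salience signal pre-empts
# slow analysis and forces an immediate response. Threshold and responses
# are invented purely to show the shape of the idea.

PANIC_THRESHOLD = 0.8

def deliberate(stimulus):
    """Slow, careful analysis - fine when nothing is on fire."""
    return f"analysing '{stimulus['name']}' at leisure"

def reflex(stimulus):
    """Fast, hard-wired imperative action."""
    return f"RUN AWAY from '{stimulus['name']}'!"

def respond(stimulus):
    # Red Alert: if salience crosses the threshold, skip analysis and act.
    if stimulus["salience"] >= PANIC_THRESHOLD:
        return reflex(stimulus)
    return deliberate(stimulus)

print(respond({"name": "odd noise next door", "salience": 0.2}))
print(respond({"name": "bigger than me and has teeth", "salience": 0.95}))
```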

Post edited at 14:44
 Flinticus 17 Jun 2022
In reply to cb294:

> I think that in order to understand how our consciousness works we have to look both at evolution to identify the building blocks it is made of in simpler organisms, and to step away from the concrete implementation in the human brain to a more abstract view to identify the underlying principles.

> "Emotion" at its core is a status report on changes to the internal state in response to some situation as evaluated by some higher level system within our brain. Does some event make us happy, relaxed, scared, depressed, ....?

From a consciousness perspective, I consider emotion to be the internal state of consciousness, as well as a response to external and internal stimuli. Our 'higher' level systems are tools of the emotional being, all designed to achieve emotional homeostasis. In turn emotional homeostasis may serve to achieve biological homeostasis.

In reply to Flinticus:

That's an interesting idea, that perhaps emotion is more than a set of internal states that encourage us (and seemingly many other creatures) to exhibit behaviours that are likely to be of evolutionary benefit. 

If we accept that human cognition is computational, we need to find and test evidence that emotion is part of the cognitive process. We know that emotion has an incredible effect on the cognitive process, but is it actually necessary in biological computers like our brain?

I think the other part of this is asking whether emotions (like any other data) could be simulated.

I don't mean the outputs of emotion, where a field known as emotion AI is making some significant progress in both reading human emotions and enabling computers to output things that look like emotion. I mean, could a system be made that could display the exact kinds of cognitive and behavioural shifts that we and other animals make in response to a sufficiently powerful emotional stimulus?

In theory it seems this is possible: consider the case of empathy. What is empathy, if not a simulation itself? 

 wercat 18 Jun 2022
In reply to Murderous_Crow:

I'm sure that emotions can be simulated from the viewpoint of an external observer, but simulation would not equate to an internal experience of emotion.  Experience is the essence of sentience, and from sentience, given enough analytical brain systems, can emerge self-awareness.

 wercat 18 Jun 2022
In reply to Flinticus:

Emotion intrinsically implies sentience, as emotion is an experience, and the ability to experience cannot be a simulation except in its effect on behaviour as seen by an external observer.  If the external observer has an experience detector then it is possible that the simulation will be an obvious failure to the observer too.

In reply to wercat:

> I'm sure that emotions can be simulated from the viewpoint of an external observer but simulation would not equate to an internal experience of emotion. 

That's exactly what I'm getting at. I'm not using the term simulation in the sense of fakery, but rather in terms of modelling - that is, could we, theoretically at least, produce a system that generates its own internal algorithmic deflections from some stimulus that is sufficiently affective? An acid test for this could perhaps be the computer being less effective at solving whatever problem it is tasked with, following its exposure to something it found, in whatever way, to be negative or distracting...
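
Something like the following is the shape of 'acid test' I have in mind - emphatically a sketch, with the 'affect' variable and the way it corrupts the search invented just to show the idea:

```python
# Sketch of the 'acid test': the same problem-solver performs measurably
# worse after exposure to a negative stimulus. The "affect" variable and
# how it disrupts the search are pure invention, just to show the shape.
import random

class Solver:
    def __init__(self):
        self.affect = 0.0          # 0 = calm, 1 = maximally perturbed

    def experience(self, stimulus_negativity):
        """A 'negative' stimulus leaves a lingering internal deflection."""
        self.affect = min(1.0, self.affect + stimulus_negativity)

    def guess_number(self, target, attempts=20):
        """Binary-search a hidden number; 'distraction' causes random guesses."""
        low, high = 0, 1000
        for i in range(1, attempts + 1):
            if random.random() < self.affect:
                guess = random.randint(low, high)     # distracted guess
            else:
                guess = (low + high) // 2             # focused guess
            if guess == target:
                return i                              # attempts needed
            if guess < target:
                low = guess + 1
            else:
                high = guess - 1
        return attempts

random.seed(0)
s = Solver()
print("calm:", s.guess_number(617))
s.experience(0.7)
print("after negative stimulus:", s.guess_number(617))
```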

But by its nature this would likely be both emergent and unexpected; like you, I can't easily see how an actual experience of emotion could be encoded. But it feels to me that it could potentially evolve - after all, emotion has certainly evolved before, and seemingly not just in mammals, as cb notes above with regard to flies becoming addicted to mind-altering substances. Octopi and corvids seem to display emotion too.

 wercat 18 Jun 2022
In reply to Murderous_Crow:

Not just corvids among the bird world, I think.  My understanding isn't that corvids are advanced emotionally but that they are capable of applying a significant level of logic to understand the world around them.   I don't think emotion is logical or computational or emergent from complexity of pure logic or computational systems.  I think there is something unique in the body brain system interaction for organisms that experience emotion, that "feel" their existence and that would have to be added to logic or computation for something extra to emerge.

I don't think experience can exist in something that cannot "feel" pain or pleasure (and the huge fan-out of types of positivity or negativity of experience that are contained in those two directions of displacement from neutral feeling).

Whatever it is allows feedback from a stimulus into a host of bodily senses that cause that good or bad effect, and I think that interaction is intrinsic to sentience.   Trying to create such a system would, I think, depend on constructing those "experience feeling" circuits somehow and cross-connecting them to simulations of external and internal physical and informational stimuli, so that information caused an emotional effect on the system.  That could be done without a physical body as long as the system is capable of pleasure and pain.

But creating such a thing as an experiment is, I think, immoral - the same level of immorality as deliberately inflicting suffering on a naturally evolved sentient species.

Post edited at 19:47
 freeflyer 18 Jun 2022
In reply to Murderous_Crow:

> I can't easily see how an actual experience of emotion could be encoded. But it feels to me that it could potentially evolve - after all, emotion has certainly evolved before, and seemingly not just in mammals.

You are Blake Lemoine and I claim my £5.

In reply to wercat:

> I don't think emotion is logical or computational or emergent from complexity of pure logic or computational systems.  I think there is something unique in the body brain system interaction for organisms that experience emotion, that "feel" their existence and that would have to be added to logic or computation for something extra to emerge.

For me this is getting to the crux of the matter. While emotional thinking and behaviour can often have negative effects, it seems to be a key, if not fundamental, driver of action. While we know that complex and emergent phenomena occur in nature all the time (storms, galaxies, crystal formation and so on), such things are utterly inert from a sentience perspective, and as such could be considered 'merely' algorithmic. I think the same can almost certainly be said of any and all artificial intelligences currently running.

Perhaps emotion is a kind of key meta-algorithm: not just a system that helps us to implement a range of candidate actions in various situations, but a self-sustaining drive to action and review based on new inputs. Side-note: it's easy to see how such a system can be hijacked into a loop by reward mechanisms.
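
That hijacking is easy to caricature - all the numbers below are invented, including the optimistic starting estimates, but it shows how one outsized reward can capture the whole action-and-review loop:

```python
# Caricature of a drive loop hijacked by a reward mechanism. All numbers are
# invented; the 'superstimulus' pays out so heavily that once it has been
# sampled, the action-and-review loop never selects anything else.

rewards = {"explore": 1.0, "rest": 0.5, "superstimulus": 10.0}
estimates = {action: 2.0 for action in rewards}   # optimistic starting guesses

history = []
for _ in range(8):
    action = max(estimates, key=estimates.get)    # act on the current best guess
    history.append(action)
    # Review step: nudge the estimate towards the reward actually received.
    estimates[action] += 0.5 * (rewards[action] - estimates[action])

print(history)   # ['explore', 'rest', 'superstimulus', 'superstimulus', ...]
```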

> But creating such a thing as an experiment is I think immoral.  The same level of immorality as deliberately inflicting suffering on a naturally evolved sentient species.

I think it's really interesting that you view it as immoral. I didn't actually suggest creating such an entity as an experiment by the way - just pointed out that an experiential entity might one day emerge from advances in AI. But really, if a program we create is capable of experiencing both pleasure and pain, the morality of the situation is no longer theoretical. It is, so the responsibility is ours whether we like it or not. 

I'm not being combative, just curious - for you would this be qualitatively different from creating new life?

In reply to freeflyer:

Hahaha!! 

Edit - in which case I may have kind of undermined 'my' original point when I just said:

> While we know that complex and emergent phenomena occur in nature all the time (storms, galaxies, crystal formation and so on), such things are utterly inert from a sentience perspective, and as such could be considered 'merely' algorithmic. I think the same can almost certainly be said of any and all artificial intelligences currently running.

Post edited at 22:04
 wercat 18 Jun 2022
In reply to Murderous_Crow:

I think I used the word immoral too soon.  It would be immoral not to treat the outcome of the experiment with the care and responsibility and "humanitarian" rights due to new life.  But I'm still not sure whether doing this as an experiment for one's own motives could be regarded as a moral act unless there was altruism intended towards a successfully created system.

 freeflyer 19 Jun 2022
In reply to Murderous_Crow:

Thanks for starting an interesting debate

There's a tension between the Turing need for the AI to be 'artificial' and the emotional need for something like us to be out there - even something we created. I've been surprised by the amount of 'woo' being expressed by some of our most learned scientific parish members. It's all good.

It's not as if we get much help from fiction either. There are a lot of fabulous AI characters out there - I guess my favourite could be Jarvis in the Iron Man series, but nearly all of them seem to be designed by writers who want to make a point about humans, not write about AIs. I think it was Gene Roddenberry who whispered to an interviewer "Don't tell anybody - all my stuff is morality tales".

I've always felt that Asimov created his laws purely so he could write some very clever whodunits based on them, and haven't seen much evidence of anything behaving in that way, especially computer programs.

Blade Runner may be the exception where the nature of an AI gets a proper examination.

cb294 20 Jun 2022
In reply to freeflyer:

The best fiction exploring the nature of consciousness, both human and machine, is IMO the Corporation Wars trilogy by Ken McLeod. Funny and clever!

CB

I think if strong AI does emerge, it will likely be weirder than anything fiction has come up with. 

While the human brain is an incredibly powerful processor, it is completely unable to access, let alone parse, the kinds of information that would be available to an AI. We're also rather slow at processing (some orders of magnitude slower than the most basic computer) and rely on heuristics for the vast majority of our day-to-day life; formal calculation for us is an even slower process, usually requiring some kind of medium to keep track of our working. 

Our brain power is optimised for interaction with other humans, and has evolved from the need for us to fit within a group. This explains our recognition of and love for universal tropes like those found in stories such as benevolent or maleficent 'gods', which have been a theme of storytelling since forever. In the modern world we're able to tantalise ourselves with the idea of actually meeting one, in the form of an uber-powerful AI. But it's entirely possible that such an AI wouldn't bother to interact with us at all. 

