
A Question of Ethics

Started by Griffin NoName, August 21, 2012, 08:30:04 PM


Bluenose

Umm, I have at least one of the books on Kindle.  Try here
Myers Briggs personality type: ENTP -  "Inventor". Enthusiastic interest in everything and always sensitive to possibilities. Non-conformist and innovative. 3.2% of the total population.

pieces o nine

Quote from: Griffin NoName on September 13, 2012, 01:36:42 AM
Quote from: Bluenose on September 12, 2012, 11:24:16 PM
So it is with moist other things.  

Wet DNA ?  :mrgreen:
I missed that -- I was too enchanted with
Quote from: Bluenose on September 12, 2012, 11:24:16 PM
To pick up on Bib's point,
;)

Thank you, Blue. I feel my speelink airs are usually just weird. I appreciate entertaining ones!
"If you are not feeling well, if you have not slept, chocolate will revive you. But you have no chocolate! I think of that again and again! My dear, how will you ever manage?"
--Marquise de Sevigne, February 11, 1677

Sibling Zono (anon1mat0)

Back to topic.

At some point in our genetic understanding we may be able to do better than broad links between genes and diseases (right now we know some of those only by simple statistical matching, with no understanding of the process, i.e. A --> ?? --> B), and we may be able to make broad generalizations, but those will still serve basic understanding more than a real mechanical understanding. That's why the way to understand these processes is via simulation, although even then random events can make significant differences, so it wouldn't be absurd to say that there is an X% chance of this or that condition developing even if you simulate the whole replication in vivo.
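The "simple statistical matching" step can be made concrete. Below is a minimal sketch in Python, with an entirely invented 2x2 cohort table: a chi-square statistic can flag that a variant and a disease co-occur more often than chance, while saying nothing about the mechanism (the "??" in A --> ?? --> B).

```python
# A minimal sketch of gene-disease "statistical matching": a chi-square
# statistic over an invented 2x2 cohort table. It can flag that variant
# and disease co-occur more often than chance, but it contains no model
# of the mechanism at all.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table:
                   disease   no disease
       variant        a          b
       no variant     c          d
    """
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical cohort: variant carriers show more disease than non-carriers.
print(chi_square_2x2(30, 70, 10, 90))  # -> 12.5
```

A large statistic only suggests the link is unlikely to be chance; the mechanical step in the middle stays a black box.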

We are quite far from that point but it is quite likely we'll get there at some point.
---
Simulation has ethical ramifications in itself. I was reading that there is work happening at this moment to simulate a full brain: they are capable of simulating a simple invertebrate's brain (that is, every single neuron in that brain, with the proper interconnections) right now, but they estimate that with the continual improvement of computing power, in 15 to 20 years it will be possible to simulate a full human brain in a computer simulation. What are the ethical concerns of AI?
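For a sense of what "every single neuron with the proper interconnections" means, here is a toy sketch (all constants invented, not taken from any real connectome project): two leaky integrate-and-fire neurons, where the first is driven by a constant input and the second fires only when a spike arrives across the synapse.

```python
# A toy, neuron-level simulation: two leaky integrate-and-fire neurons,
# where neuron 0 receives a constant input current and neuron 1 is driven
# only through a synapse from neuron 0. Connectome-scale simulations work
# on the same principle, just with every neuron and synapse included.
# All constants here are invented for illustration.

def simulate(steps, input_current, weight, threshold=1.0, leak=0.9):
    v = [0.0, 0.0]          # membrane potentials
    spikes = [[], []]       # spike times per neuron
    syn_input = 0.0         # current arriving at neuron 1 this step
    for t in range(steps):
        drive = [input_current, syn_input]
        syn_input = 0.0
        for i in range(2):
            v[i] = v[i] * leak + drive[i]   # leaky integration
            if v[i] >= threshold:           # fire and reset
                spikes[i].append(t)
                v[i] = 0.0
                if i == 0:
                    syn_input = weight      # spike crosses the synapse
    return spikes

s0, s1 = simulate(steps=20, input_current=0.3, weight=1.2)
# Neuron 0 fires periodically; neuron 1 follows one step after each spike.
print(s0, s1)
```

Scaling this loop from two neurons to the roughly 86 billion in a human brain is exactly the computing-power problem mentioned above.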
Sibling Zono(trichia Capensis) aka anon1mat0 aka Nicolás.

PPPP: Politicians are Parasitic, Predatory and Perverse.

Griffin NoName

It's life Jim, but not as we know it.
Psychic Hotline Host

One approaches the journey's end. But the end is a goal, not a catastrophe. George Sand


Bob in a quantum-state-of-faith

Quote from: Sibling Zono (anon1mat0) on September 13, 2012, 05:34:06 PM
Back to topic.

At some point in our genetic understanding we may be able to do better than broad links between genes and diseases (right now we know some of those only by simple statistical matching, with no understanding of the process, i.e. A --> ?? --> B), and we may be able to make broad generalizations, but those will still serve basic understanding more than a real mechanical understanding. That's why the way to understand these processes is via simulation, although even then random events can make significant differences, so it wouldn't be absurd to say that there is an X% chance of this or that condition developing even if you simulate the whole replication in vivo.

We are quite far from that point but it is quite likely we'll get there at some point.
---
Simulation has ethical ramifications in itself. I was reading that there is work happening at this moment to simulate a full brain: they are capable of simulating a simple invertebrate's brain (that is, every single neuron in that brain, with the proper interconnections) right now, but they estimate that with the continual improvement of computing power, in 15 to 20 years it will be possible to simulate a full human brain in a computer simulation. What are the ethical concerns of AI?


One of the episodes of Star Trek: The Next Generation had exactly that: a seemingly harmless game that Riker had got ahold of was worrisome to the Crusher kid and his almost-girlfriend. So they took one of the games and ran it through a simulated human brain in fast-time, to see what long-term consequences it had.

My point in bringing it up is that they simulated the whole human cortex, with its billions of interconnections, using their non-sentient super-computer.

Possible?  Sure.

But-- what of the simulated brain?  Did it have a personality?  Was it self-aware, even if only being a simulation?   The show ignored those questions, but they occurred to me....
Sometimes, the real journey can only be taken by making a mistake.

my webpage-- alas, Cox deleted it--dead link... oh well ::)

Roland Deschain

Bob, ST:TNG handles sentience of artificial life, both android and hologram, in many other episodes, as do DS9 and Voyager. TNG focuses on android AI because it has Data, whilst Voyager focuses on Hologram AI, as it has the Doctor. In the episode you refer to, the simulation was non-sentient. I'm not sure how much you know about the inner workings of the ST universe, but the people made with holographic technology can be made to be self-aware or not, and have their inquisitiveness reduced to levels well below the threshold of sentience. This is like switching on and off sentience, to a degree.

There's a particular episode of TNG where a Starfleet research scientist wants to study Data by opening him up and taking him apart, and Captain Picard decides to argue for Data's sentience and rights as an individual. Patrick Stewart loved these types of episodes because of the complicated ethics involved, and I believe he directed this one himself, although I may be wrong. At what point does something become sentient, and at what point do we begin to care for it?

Voyager handled Holographic sentience in greater depth than TNG due to the Doctor, as I mentioned before, and it was the Doctor who started this off (conversations about his rights). People on board were rude to him, treating him as a literal tool. Some switched him off without consulting him, others left him running without thinking. The Doctor gained the right to switch himself off, and through Kes' work, started to gain a little respect. He then integrated himself into the officers' briefings, and eventually gained a little freedom when holo-emitters were installed in other parts of the ship (he was also free to use the holodeck).

One of the more ingenious ideas used with the Doctor was when he was allowed to learn, as this led to a number of in-depth stories, looking at the consequences of him becoming more like his flesh and blood shipmates. There was even an episode where he disobeyed Janeway's orders, and she reprimanded him just like any other member of the crew.

Another episode handled the running of a simulation of an entire Irish town on the holodeck, and how the holograms learned that they were not "real". It ended up with Janeway agreeing to run the program continuously, both as a means for the crew to let off steam in their downtime, and as a way to please the holograms who had pleaded to be allowed to live "normally". Yet another episode handled renegade holograms who had escaped captivity, but I believe that was related to the Doctor's misdemeanour in the previous paragraph.

This is the great thing with science fiction; it can quite easily handle complex ethical issues that may be too hard to handle if set in the present day. Fantastical devices can be introduced into the story that would be impossible to introduce if it were set in our own timeframe. Add a raygun and an FTL ship, and you're set. ;)
______________

Quote from: Bob in a quantum-state-of-faith on September 12, 2012, 07:09:39 PM
I was thinking of that movie--- alas, I found it too depressing to watch all the way through to the end, so I don't know how it turned out (I presume the "normal" eventually gets caught).

The thing that movie gets wrong, and that many people get wrong, is that DNA is not a blueprint. Not even close. As Zono hinted at, there are situational effects happening within the womb which have as much of an effect as DNA does.
You need to watch the end of the film. It's one of my favourites. It's a slow burn, yes, but more than worth watching in its entirety.

You're wrong about the movie getting the "DNA = Blueprint" argument wrong. The movie actually argues for the other side, and quite effectively. Yes, it deals with the ethical problems involved with that type of society, but it also covers the loss of individual spirit and drive to succeed, and how we can overcome our own weaknesses, effectively saying there is no such thing as pre-determinism. I'm not sure how far you got, but that's the overall message to me.
"I love cheese" - Buffy Summers


Swatopluk

Although it is slightly off-topic: one can go one step further. In theory a computer can simulate a brain, but also, anything a computer can do can (in theory) be replicated by a purely mechanical device, e.g. some enlarged version of Babbage's engines or (an extreme case) a system of pulleys and ropes (as A. K. Dewdney has shown*). It would be a wee bit slow, but we'd have to conclude that (in theory) a collection of ropes and pulleys could be sentient.

*Computer Recreations, Scientific American, April 1988, Vol. 258, No. 4, pp. 118-121
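The substrate-independence point can be sketched in a few lines: if ropes and pulleys can realise one universal gate (say NAND), they can in principle realise any digital computation, just very slowly. The Python below builds a one-bit adder from nothing but NAND.

```python
# Substrate independence in miniature: every function below is built from
# a single NAND primitive. If a rope-and-pulley contraption can implement
# NAND, it can (in principle, and very slowly) implement all of this too.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """One bit of arithmetic -- (sum, carry) -- from nothing but NAND."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Stack enough of these and you get Babbage's engine, or a modern CPU; the physical medium carrying the gates is irrelevant to what gets computed.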
Knurrhähne sind eßbar aber empfehlen würde ich das nicht unbedingt.
The aspitriglos is edible though I do not actually recommend it.

Sibling Zono (anon1mat0)

A mechanical processor would be the size of a city. A mechanical human brain would be the size of a planet.

Go ask Deep Thought. :D

Roland Deschain

Sentience, I believe, requires a level of self-awareness, and is not a specific point but rather a sliding scale. Being human, of course we would pick a specific point at which a being, whether mechanical or organic, would make us say, "Yes, this is a sentient being," but this doesn't mean we would give any less respect to a being below that point (see chimps, and how people fight against them being experimented on).

To the question of whether a system of levers and pulleys could become self-aware, no matter its size: I suppose that all depends upon its ability to store information, and on how it performs when interacted with. You could say that it will never become sentient, as it needs input from us to work, but then don't we need input from the cells inside us? It's bizarre, but not totally unfeasible.

Quote from: Sibling Zono (anon1mat0) on September 15, 2012, 10:51:38 PM
A mechanical processor would be the size of a city. A mechanical human brain would be the size of a planet.

Go ask Deep Thought. :D
Isn't this all getting a little too Messianic? ;)


Bob in a quantum-state-of-faith

Quote from: Roland Deschain on September 15, 2012, 09:47:25 PM
Bob, ST:TNG handles sentience of artificial life, both android and hologram, in many other episodes, as do DS9 and Voyager. TNG focuses on android AI because it has Data, whilst Voyager focuses on Hologram AI, as it has the Doctor. In the episode you refer to, the simulation was non-sentient. I'm not sure how much you know about the inner workings of the ST universe, but the people made with holographic technology can be made to be self-aware or not, and have their inquisitiveness reduced to levels well below the threshold of sentience. This is like switching on and off sentience, to a degree.

I watched both series rather completely.  And I was bothered then, and still am, that they seemed willing to create seemingly sentient beings with that sentience switched off-- not self-aware.

And I had to ask-- how?

I don't think it's possible to separate sentience and self-awareness.  Given a sufficiently complex sentience, self-awareness would be an automatic emergent property, I'd wager.  So the simulated brain-- if true to a real human brain-- would be both sentient and self-aware, if only briefly (during the run of the simulation).

To try to simulate a human brain without that would not be, in fact, a simulation, but an approximation at best, a sort of facsimile or cartoon.   And, since the game in that episode depended on a real sentience to interact with the game's mechanics (and the emotional feedback that was used to brainwash the individual), a cartoon or facsimile simulation would not have told Crusher anything useful.   Which was my complaint all along-- unless that brain was de facto a real human brain-- even if only in a virtual space-- it was inhumane to dismiss it without a thought once it had been created.

If it was not sentient?  It wasn't a simulation at all.

I think the writers could have dodged that one better than they did-- they were usually quite clever at foreseeing the consequences of what they wrote within the context of the ST universe.   For example, they could have written that the game affected the physical chemistry of the brain's basic workings-- and simple physical chemistry could be simulated with what we have today.  But what they wrote was that the game actually affected the thinking/emotional parts of the brain-- something that would only occur in a sentient, self-aware simulation.

Another thing that sometimes bothered me was that the ST writers assumed sentience is a binary state.  Clearly, from studies of life on our own planet, it's hardly that-- there are degrees of sentience and self-awareness; it's not a simple switch you can just toggle on and off.  Even dogs have a pretty good sense of self-awareness and sentience (to name one example).

But it's fascinating to contemplate mechanical sentience-- and I think it's entirely possible.   We are only now on the cusp of massively parallel processing in the electronics industry.  Through evolution, nature invented the parallel processor millions of years ago, in the form of neural networks.

I think, once massively-large parallel-processing power becomes cheap enough, sentience and self-awareness will simply emerge, as in James P. Hogan's The Two Faces of Tomorrow (link to an ebook copy here-- it's an excellent read, if not a really deep one).
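As a toy illustration of behaviour emerging from many simple parallel units (a sketch of the neural-network idea only, not a claim about sentience), here is a minimal Hopfield network in Python: a pattern stored in Hebbian connection weights is recovered from a corrupted copy by nothing more than local unit updates, with no central program deciding the answer.

```python
# A tiny Hopfield network: the "memory" lives in the connection weights,
# spread across all unit pairs, and recall emerges from each unit
# repeatedly applying the same simple local rule.

def train(patterns, n):
    # Hebbian rule: strengthen connections between co-active units.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):  # each unit looks only at its own inputs
            total = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

pattern = [1, 1, 1, -1, -1, -1]
w = train([pattern], 6)
noisy = [1, -1, 1, -1, -1, -1]   # one unit flipped
print(recall(w, noisy) == pattern)  # -> True
```

Six units is a parlour trick; the interesting question in the thread is what happens when the same kind of distributed, parallel dynamics runs over billions of units.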

Griffin NoName

Quote from: Bob in a quantum-state-of-faith on September 16, 2012, 12:43:40 AM

I don't think it's possible to separate sentience and self-awareness.  Given a sufficiently complex sentience, self-awareness would be an automatic emergent property, I'd wager.  So the simulated brain-- if true to a real human brain, would be both sentient and self-aware, if only briefly (during the run of the simulation).

Either we have totally different understandings of "self-awareness", or it must have several meanings, which I need explained.

My usage:

1. Being in tune with one's emotions and feelings to the extent of understanding why one acts as one does. Being emotionally intelligent.

The number of people who fail this test never ceases to amaze me.

2. Understanding that one is a physical body - eg. apes who recognise themselves in mirrors - elephants do too.

To be sentient, to me, just means being a separate, independent individual who acts out stuff and is aware of stuff, e.g. the way any animal can look after its baby: it may be acting instinctively in terms of drive, but it must also be sentient.

Seems like I am going down a different path than you with this.


Sibling Zono (anon1mat0)

My tiels may fail the mirror test if untrained, but they seem perfectly self-aware, or at least as self-aware as a 2-3 year old kid.

I dare say that many animals are sentient at least to a degree; the uncomfortable thought is that we may be eating sentient beings on a regular basis.

As for ST's iterations on AI, they are a bit short on details, but it would seem that any holodeck persona is fundamentally a full-fledged simulation, not a behavioural bot, and every time one is generated, his/her memories are part of them.

In a simulation the main questions are: would we know if we are simulated if our memories are seamless and complete? And would we notice if the simulation ended?

Griffin NoName

Quote from: Sibling Zono (anon1mat0) on September 16, 2012, 02:42:59 AM
In a simulation the main questions are: would we know if we are simulated if our memories are seamless and complete? And would we notice if the simulation ended?

Always a question that gives me the creeps. When I think about it, I get dizzy. What if I am just a simulation? Actually, I always stop on the thought that if I am a simulation (stopped at death) then pain is not real; that makes going to the dentist easier. ::)


Bob in a quantum-state-of-faith

Sentience goes hand-in-glove with free will, I think.

That is:  does the dog wander about the yard solely because his instincts drive him from point to point?  Or does he possess an innate curiosity about his yard, and is he actually interested in seeing what's there, what has changed from day to day, and so on?

A cockroach is insentient-- it moves strictly based on a short list of stimulus-response engines.  But a dog is not; a dog can be sad, happy, curious, inquisitive, bored, lonely, content-- the same sorts of things a human can be, come to think of it.

And different dogs have different responses to a given set of stimuli in the same yard; each follows his own bent--his own free will, deciding what he wants to go sniff closer or what he will just ignore.

So, too with humans; we decide this; we decide that-- based on some internal self-dialog (that may be just below the conscious mind) of free will.

That, to me, is sentience:  doing what one wishes with the world as it's presented.

A robot cannot do this; it must follow its internal rules, regardless of whether those rules have meaning or not-- not unlike that nonsentient cockroach, come to think of it.   A cockroach could be thought of as a kind of biological robot, following its instinctive set of instructions.  Sure, to a very, very limited degree, the cockroach may learn a bit; so too can a robot, if it's programmed correctly.

But a cockroach will never stroll across the kitchen floor because it's curious about what's on the other side of the room.

But a dog might.

So, too, might a human.

I cannot pin it down any better than that, unfortunately, apart from examples-- it's [sentience] kinda like prawn:  you know it when you see it.   ::)


Bob in a quantum-state-of-faith

Quote from: Griffin NoName on September 16, 2012, 05:49:52 AM
Quote from: Sibling Zono (anon1mat0) on September 16, 2012, 02:42:59 AM
In a simulation the main questions are: would we know if we are simulated if our memories are seamless and complete? And would we notice if the simulation ended?

Always a question that gives me the creeps. When I think about it, I get dizzy. What if I am just a simulation? Actually, I always stop on the thought that if I am a simulation (stopped at death) then pain is not real; that makes going to the dentist easier. ::)

Would it matter in the end?  Oblivion is the same whether we're in a really complicated virtual reality or an actual one.  We may as well live and act as if it's real.
