GENERAL DISCUSSIONS

Future Tech ... on nanobots and the human soul in the 26th Century

POSTED BY: KARNEJJ
UPDATED: Thursday, January 5, 2006 11:49
SHORT URL:
VIEWED: 9283
PAGE 2 of 2

Tuesday, January 3, 2006 10:01 AM

KARNEJJ


Quote:

Originally posted by Finn mac Cumhal:
I’d say no, it doesn’t bolster your case. 4000 casualties out of the whole of Eastern and Northern Europe is not big, and it certainly is nowhere even remotely close to the hundred thousand originally estimated.



Just look at the raw numbers at that one case. That's 4000 casualties from one non-military detonation. Strategic warheads would target much more crucial areas, be much larger and not initially bottled in concrete (as well as covered with that containment goop afterwards). And with Chernobyl, the entire area was closed to human activity, preventing more long-term effects, and it hasn't really even been long enough yet to stop the count at 4000.

So, 100 nukes would each be larger explosions than Chernobyl and also leave fewer safe harbors for people to run to. If 100 isn't enough to destroy life-as-we-know-it, then, as you've stated, we've still got 9900 more chances ...

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 10:18 AM

CITIZEN


Quote:

Originally posted by Karnejj:
Hmm ... you don't really expect to go from just modelling a brain to matching the capabilities of one in 20 years... ?? Humans were modelling birds for quite a long time before we matched those capabilities.


We still haven't.
Quote:

But, more to the point, current NN's are still pretty simplistic. Worse, there's still a lot of "art" to the science. Getting a NN to make reliable classifications takes a lot of "massaging of data" for good results, and I think that's a large part of the problem. The basic theory is there, but MUCH is left in the field to be uncovered.

Actually, it's identical to the system used by the Human Brain: all data is considered and added, and then the most used paths are kept while the old are removed, exactly what happens in a Human Brain. Remember there are 1,000 trillion synaptic connections in a child’s brain, and 500 trillion or less in an adult's.
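The "keep the most used paths, drop the rest" idea is loosely analogous to magnitude pruning in artificial neural networks. A toy sketch, with all sizes invented for illustration (this is an analogy, not a brain model):

```python
import random
import statistics

random.seed(0)

# Toy "synapse" strengths between two small layers: 64 connections.
weights = [random.gauss(0, 1) for _ in range(64)]

# Keep only the strongest half of the connections, drop the rest --
# loosely mimicking the reduction from ~1,000 trillion synapses in a
# child's brain to ~500 trillion in an adult's.
threshold = statistics.median(abs(w) for w in weights)
kept = [w for w in weights if abs(w) >= threshold]

print(f"{len(kept)} of {len(weights)} connections kept")  # 32 of 64
```

Real pruning schemes rank connections by usage or gradient signal rather than raw magnitude, but the shape of the idea is the same.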
Quote:

They don't classify GENERAL data efficiently. As I said, they're still very much limited in scope to be effective. And speech isn't so much contradictory as inexact (fuzzy, as you call it). What's that, you're saying that software can handle that fuzziness?

That's not what I mean by fuzziness within the Human Brain at all. The fuzziness I referred to would be analogous to a computer arriving at a result while missing one of the operands.
Quote:

Kinda bolsters my point about humans being meat computers. Our visual programming works in a certain way and we can't act against it EVEN WHEN we know it's wrong. Of course, all the mechanisms that allow us to build a picture to even recognize a vase or faces are the same mechanisms that made sure our ancestors spotted that predator stalking along or that meal running away.

Erm, not really.
http://en.wikipedia.org/wiki/Cognitive_dissonance
Quote:

All in all though, Deep Blue enumerated chess moves. That means that it's a glorified counting machine. A VERY (very, very!) fast counting machine, but all it ever did (in essence) was just count ...

Exactly. All a computer is is a glorified counting machine. All a computer EVER does is count.
Quote:

In other words ... more than one neuron ...
Can the calculation be done without the other neurons? The answer to that seems to be "no."


I thought that would be your response.
*removes CPU from motherboard, and places on desk*
Right, 2 + 2, go.
*silence, tumbleweed...*
*taps CPU*
Oh I know, needs power.
*hooks up battery.*
Right, 2 + 2.
*Nothing...*
Seems even computers can't function alone.
Quote:

Actually, I think it's called the "Planck time" .. something like 10^(-31) seconds (I could look it up, but I'm at work).

Planck's time is how long it takes for light to travel Planck's length.
The Planck length is the minimum length below which quantum laws reign, making length meaningless (because of Heisenberg’s uncertainty principle), NOT the smallest possible length.

The figure is (roughly) 5.39 × 10^-44 seconds, for reference.
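For what it's worth, that figure follows directly from the defining constants, t_P = sqrt(ħG/c^5); a quick sanity check using the standard CODATA constant values:

```python
import math

# Planck time from the defining constants (CODATA values):
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
G = 6.674_30e-11           # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.3e} s")  # ~5.391e-44 s
```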
Quote:

True, but I seriously doubt that it would take a meter of infinite precision to EXACTLY ACCURATELY measure the amount of electrical current generated. There are A FINITE, INTEGER number of electrons that are motivated through the axon, ya know ... so, again, the value can be done in an easy-to-handle range of integers (32 or just maybe 40 bits should be able to count the electrons nicely)

If you think of electrons in a classical way, more or less. But how many electrons is that, and how many bits are needed? I'm afraid I don't trust your figures. Though now you're adding in complexity, as you now need an accurate particle simulation as well as an accurate Brain simulation, which is what I've been alluding to all along.
Quote:

Eh? Aren't those mathematicians/philosophers. You'd have to elaborate slightly more on this point.

Mathematician, Artist, Composer, in that order. The name of the book is, more correctly,
Gödel, Escher, Bach: An Eternal Golden Braid; the first paragraph of the preface is:
Quote:

'What is the self, and how can a self come out of inanimate matter?'
This is the riddle that drove Douglas Hofstadter to write this extraordinary book. In order to impart his original and personal view on the core mystery of Human existence - our intangible sensation of 'I'-ness - Hofstadter defines the playful yet seemingly paradoxical notion of the 'strange loop', and explicates this idea using analogies from many disciplines.


Quote:

Eh, the uncertainty principle only applies if quantum entanglement affects the decisions which we make. I would tend to disagree with that notion. Sure there are quantum effects in the brain, but, as I stated, I believe they have a minor (possibly non-existent) effect on our cognitive functions. There are different ways to use quantum effects in calculations, but I doubt our brain has actually evolved to do any high-level quantum manipulation. From what I've gleaned from responses so far, you may not be familiar with the potential of quantum computing. If you were, you would have been able to dispute me on this basis alone, and that's probably the only argument that I'd defer to.

Quote:

The more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa.
Heisenberg, uncertainty paper, 1927


The Heisenberg Uncertainty Principle says we can't simultaneously know both the position and momentum of a sub-atomic particle. It also postulates that the very act of observing something sub-atomic changes it. Therefore any Quantum level effects are open to the Uncertainty principle.

Besides this, it is believed that Quantum Teleportation, and just straightforward entanglement, do occur within the Neurons of the brain.

I'm aware of what Quantum Computers can offer, but since they are Computers, and they work on pretty much the same principle as Silicon computers, they are still Turing Machines, also conforming to the original ideas laid down by Charles Babbage. I don't see how I can use them to support my argument, except to say I doubt a Quantum Computer could achieve sentience, which I've alluded to.
Quote:

However, it's a rather large leap to assume that the brain uses quantum algorithms, as quantum effects are notoriously difficult to exploit. Greater minds than ours have struggled with trying to actually apply them, and I don't think God's creations could usefully apply any known algorithm except the one for memory recall.

And we're back to the assumption that the Brain is a computer. The brain DOES NOT use or do anything algorithmically. Just as a Photon doesn’t do the things it does algorithmically. The algorithm is how we model actions to be performed on a computer.
Quote:

To be specific, I doubt the brain uses the quantum effects for any of the following potential uses: memory recall, decryption, instantaneous (as in Faster Than Light) transmission of useless data, or star trek-type teleportation (yes, it actually is possible using quantum computers). There is one more important, but simple, function of quantum entanglement, so *IF* there is ANY use of the quantum effects, then I would suspect that they serve ONLY as a source of true randomness, which would then defeat predicting human behavior exactly, but serve very little purpose otherwise.
So, even though it wouldn't be very necessary, this functionality can be replicated by software accessing a cheap hardware random number generator.


Firstly there's a huge factual error I can't leave hanging. By the line:
star trek-type teleportation
I can only assume you mean Quantum Teleportation. If ANY QM physicist has made the statement that Quantum Teleportation is Star Trek-type teleportation then I want his/her name so I can have him/her publicly flogged.
Beyond that:
Quote:

The above raises the question of how such phenomena can affect the functioning of cells. In other words, would the existence of such coherent states and the emergence of quantum mechanical entanglement be somehow useful or beneficial to biological function? Is it then reasonable to propose that in certain cases, natural selection may have favored molecules and cellular structures that exhibited such phenomena? If we accept the notion that according to the laws of quantum physics certain macroscopic arrangements of atoms will exhibit such effects, is it not reasonable then to expect that biomolecules and (by extension) cellular structures and whole cells have 'found' a use for such phenomena and have evolved to incorporate them? We stress that at a given instant in time, the different microtubule coherent states participating in a specific bulk entanglement would be almost identical due to the fact that they are related/triggered by a specific "external agent" (e.g. the passing of a specific train of action potentials.) This is of utmost importance since it increases the system's resilience to decoherence (by entangling a large number of nearly identical states), in addition to facilitating "sharp decision making" (i.e. rapid choice among a vast number of very similar states) as explained in [37], which is presumably a trait favored by natural selection. Here we digress to investigate one possible use of such effects by noting a straightforward application of entanglement to teleportation of coherent quantum states across and between cells.

http://www.arxiv.org/PS_cache/quant-ph/pdf/0204/0204021.pdf

One final note is that true random number generators use radioactive decay, and are anything but cheap.

EDIT: Fixed formatting.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 10:23 AM

CITIZEN


Quote:

Originally posted by Karnejj:
Just look at the raw numbers at that one case. That's 4000 casualties from one non-military detonation. Strategic warheads would target much more crucial areas, be much larger and not initially bottled in concrete (as well as covered with that containment goop afterwards). And with Chernobyl, the entire area was closed to human activity, preventing more long-term effects, and it hasn't really even been long enough yet to stop the count at 4000.


Fallout from Chernobyl fell on Wales, so the isolation or closing of the local area isn't that important.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 11:34 AM

FINN MAC CUMHAL


Quote:

Originally posted by Karnejj:
Just look at the raw numbers at that one case. That's 4000 casualties from one non-military detonation. Strategic warheads would target much more crucial areas, be much larger and not initially bottled in concrete (as well as covered with that containment goop afterwards). And with Chernobyl, the entire area was closed to human activity, preventing more long-term effects, and it hasn't really even been long enough yet to stop the count at 4000.

The IAEA seems to believe that it is long enough; they concluded that 4000 casualties is the total expected impact. That's 4000 out of a million people under the ash cloud, not including almost a million people directly involved with the clean-up: less than half a percent of the affected population. Only about a third of a million people were relocated, which means that over half a million people continued to live in areas contaminated to levels of ~200 kBq/m² of Cs-137. Numerous extensive studies, including several UN studies, have reported that there is no scientific evidence of any significant radiation-related health effects in most people directly exposed to the ash cloud. No discernible increase in leukemia was found, and while a small (2%) increase in thyroid cancer was observed, it is debated how much of that is due to Chernobyl, if any.

90% of people under 20 affected by the Hiroshima and Nagasaki ash clouds were still alive as of 1995.
Quote:

Originally posted by Karnejj:
So, 100 nukes would each be larger explosions than Chernobyl and also leave fewer safe harbors for people to run to. If 100 isn't enough to destroy life-as-we-know-it, then, as you've stated, we've still got 9900 more chances ...

There’s much more than that.

One would expect a nuclear war to be worse than a nuclear accident because of the nature of war, but that doesn’t change what we are seeing in terms of the effects of nuclear fallout, based on the one major accident, the two examples of actual wartime use, and the many tests. The ash cloud is just not as dangerous as previously thought. The evidence for such an assessment is only recent, but the hysteria has existed much longer and has been instrumental in fueling the debate with nonscientific assumptions. Nuclear war would be nasty, but more than likely the vast majority of deaths would be caused by the explosions, not the ash clouds.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 12:17 PM

KARNEJJ


Quote:

Originally posted by citizen:
Quote:

But, more to the point, current NN's are still pretty simplistic. Worse, there's still a lot of "art" to the science. Getting a NN to make reliable classifications takes a lot of "massaging of data" for good results, and I think that's a large part of the problem. The basic theory is there, but MUCH is left in the field to be uncovered.

Actually, it's identical to the system used by the Human Brain: all data is considered and added, and then the most used paths are kept while the old are removed, exactly what happens in a Human Brain. Remember there are 1,000 trillion synaptic connections in a child’s brain, and 500 trillion or less in an adult's.


I'll ignore the above, as it seems to contradict most of your other assertions that the brain cannot be modelled on any sequential system. Maybe you wanted a different word from "identical" .. eh.. either way, I'll let you elaborate on this one.

Quote:


Exactly. All a computer is is a glorified counting machine. All a computer EVER does is count.


True ... but software can put that counting to use to bring about sentience, if used intelligently. At least, I like to believe so.

Conceivably, a brain could be simply considered an automated, souped-up phone system (inter-connected message passing), and yet, consciousness rises out of that.

Quote:

I thought that would be your response .
*removes CPU from motherboard, and places on desk*
Right, 2 + 2, go.
*silence, tumbleweed...*
*taps CPU*
Oh I know, needs power.
*hooks up battery.*
Right, 2 + 2.
*Nothing...*
Seems even computers can't function alone.


Hmm ... you're arguing something I've never stated. Yep, computers need a lot of support and won't function without data, ... just like single neurons ... But you're the one who said that single neurons could function alone

Quote:

Planck's time is how long it takes for light to travel Planck's length.
The Planck length is the minimum length below which quantum laws reign, making length meaningless (because of Heisenberg’s uncertainty principle), NOT the smallest possible length.


Hmmm ... well, you're fighting a rather soundly entrenched theory here ...

It is meaningless to measure times and lengths less than these, so, yeah, the phenomena that you mention can (and should be) measured digitally (electrical current through a neuron).

Quote:


The figure is (roughly) 5.39 × 10−44 seconds, for reference.


yeah yeah yeah ...

Quote:


Quote:

True, but I seriously doubt that it would take a meter of infinite precision to EXACTLY ACCURATELY measure the amount of electrical current generated. There are A FINITE, INTEGER number of electrons that are motivated through the axon, ya know ... so, again, the value can be done in an easy-to-handle range of integers (32 or just maybe 40 bits should be able to count the electrons nicely)

If you think of electrons in a classical way, more or less. But how many electrons is that, and how many bits are needed? I'm afraid I don't trust your figures. Though now you're adding in complexity, as you now need an accurate particle simulation as well as an accurate Brain simulation, which is what I've been alluding to all along.


Awww.... at about 70 millivolts, you're right, it would take about 59 bits to count the electrons with EXACT precision .... Eh.. I was close, not like the number was 200bits or something
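For the record, the bit-width needed to count N discrete electrons is just the bit length of N. The sketch below ASSUMES a total charge of 0.01 C purely for illustration; whether a neuron actually moves anything like that much charge is exactly the figure under dispute here:

```python
e_charge = 1.602_176_634e-19  # elementary charge, coulombs

def bits_to_count(n: int) -> int:
    """Bits needed to represent any electron count from 0 to n."""
    return n.bit_length()  # == ceil(log2(n + 1))

# Illustration only: ASSUME some process moves 0.01 C of charge.
# (Not a measured neuronal figure -- that's the number in dispute.)
n_electrons = int(0.01 / e_charge)  # ~6.2e16 electrons
print(bits_to_count(n_electrons))   # 56 bits suffice for this assumption

# Conversely, a 59-bit counter covers up to 2**59 electrons,
# i.e. roughly 0.09 C of total charge:
print(2**59 * e_charge)             # ~0.092
```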
Quote:


Quote:

Eh, the uncertainty principle only applies if quantum entanglement affects the decisions which we make. I would tend to disagree with that notion. Sure there are quantum effects in the brain, but, as I stated, I believe they have a minor (possibly non-existent) effect on our cognitive functions. There are different ways to use quantum effects in calculations, but I doubt our brain has actually evolved to do any high-level quantum manipulation.
.
.
.


Quote:

The more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa.
Heisenberg, uncertainty paper, 1927


The Heisenberg Uncertainty Principle says we can't simultaneously know both the position and momentum of a sub-atomic particle. It also postulates that the very act of observing something sub-atomic changes it. Therefore any Quantum level effects are open to the Uncertainty principle.

Well .. uhhh ... ok.

But, you don't see billiard balls disappearing when you hit them with a radar gun ... Uncertainty only becomes important at a sub-microscopic level, and even then, I'm not sure which point of mine you are trying to dispute.

Quote:


Besides this, it is believed that Quantum Teleportation, and just straightforward entanglement, do occur within the Neurons of the brain.

Yeah... even if true ... what does that *DO*? Does it assist/enable cognition in any way? Quantum entanglement happens all the time, but that doesn't mean that it's purposefully assisting in brain function.

Quote:


I'm aware of what Quantum Computers can offer, but since they are Computers, and they work on pretty much the same principle as Silicon computers, they are still Turing Machines, also conforming to the original ideas laid down by Charles Babbage. I don't see how I can use them to support my argument, except to say I doubt a Quantum Computer could achieve sentience, which I've alluded to.

Not to load your gun with any bullets, but quantum effects cannot be efficiently simulated in software (classical simulation blows up exponentially). Therefore, if they INDEED ARE an integral part of sentience (which I do NOT believe), then dedicated hardware would be required.

Quote:

And we're back to the assumption that the Brain is a computer. The brain DOES NOT use or do anything algorithmically. Just as a Photon doesn’t do the things it does algorithmically. The algorithm is how we model actions to be performed on a computer.


And given enough effort and money, the complexity in connectivity that you describe can be reconstructed in electronics.

Quote:


Firstly there's a huge factual error I can't leave hanging. By the line:
star trek-type teleportation
I can only assume you mean Quantum Teleportation. If ANY QM physicist has made the statement that Quantum Teleportation is Star Trek-type teleportation then I want his/her name so I can have him/her publicly flogged.


There's nothing wrong with that theoretical possibility. By copying quantum states, any object can be teleported. It requires more than just quantum entanglement, though. But, it is possible.

If you need a name, it's probably one of the big ones in QC ... P. Shor, L. Grover, etc.
EDIT: Hmm, looks like C. Bennett's brainchild
http://216.239.51.104/search?q=cache:kuIx_iEv7JkJ:www.unexplainable.net/artman/publish/article_1292.shtml+quantum+teleportation+objects&hl=en


You'll notice the limiting factor is bandwidth, and who knows how our capacity there will increase.

Quote:


Beyond that:
Quote:

The above raises the question of how such phenomena can affect the functioning of cells. In other words, would the existence of such coherent states and the emergence of quantum mechanical entanglement be somehow useful or beneficial to biological function? Is it then reasonable to propose that in certain cases, natural selection may have favored molecules and cellular structures that exhibited such phenomena? If we accept the notion that according to the laws of quantum physics certain macroscopic arrangements of atoms will exhibit such effects, is it not reasonable then to expect that biomolecules and (by extension) cellular structures and whole cells have 'found' a use for such phenomena and have evolved to incorporate them?





Actually, I could argue that by pointing out your appendix. Not everything we have is currently used, useful, or even desirable. Again, quantum effects abound, and their presence in the brain is not very surprising. No one even has a clue what they could be used for, and I doubt that sentience requires them. HOWEVER, even if it were a requirement for sentience, it is NOT BEYOND being replicated in hardware. I would have to revise my design to allow access to a quantum memory bank, though.

Quote:


One final note is that true random number generators use radioactive decay, and are anything but cheap.


I could make a true random number generator by taking two readings of the speed of a fan. If the first is higher, my random bit is a 0; otherwise, it is a 1. Variances in air and voltage would cause the speed to change. Seeing as most PCs today can access the fan-speed data, it's basically a free random number generator. Free = very cheap.
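The scheme described (two readings; first higher → 0, else 1) can be sketched as below. `read_fan_rpm` is a hypothetical stand-in, since real fan-speed access is platform-specific; the stub just simulates a jittery fan. A von Neumann debiasing step is added, since raw comparator bits are typically biased:

```python
import random

def read_fan_rpm() -> float:
    """Hypothetical stand-in for a platform-specific fan-speed query.
    Simulates a nominal 2000 RPM fan with turbulence/voltage jitter."""
    return 2000.0 + random.gauss(0, 5)

def raw_bit() -> int:
    """The scheme above: two readings; first higher -> 0, else 1."""
    return 0 if read_fan_rpm() > read_fan_rpm() else 1

def random_bit() -> int:
    """Von Neumann debiasing: emit a bit only from unequal 01/10 pairs."""
    while True:
        a, b = raw_bit(), raw_bit()
        if a != b:
            return a

bits = [random_bit() for _ in range(16)]
print(bits)
```

Whether this counts as "true" randomness is, of course, the point contested a few posts down.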

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 1:22 PM

CITIZEN


Quote:

Originally posted by Karnejj:
I'll ignore the above, as it seems to contradict most of your other assertions that the brain cannot be modelled on any sequential system. Maybe you wanted a different word from "identical" .. eh.. either way, I'll let you elaborate on this one.


Not at all. I said the parallel processing abilities and consciousness of the human brain could not be replicated on a sequential Binary system. Your assertion was that the Learning mechanisms of NNs were nothing like the Brain, which is why they haven't given rise to consciousness. This isn't the case.
My conclusion is that it’s more than the ability to learn that is required.
Quote:

Hmm ... you're arguing something I've never stated. Yep, computers need a lot of support and won't function without data, ... just like single neurons ... But you're the one who said that single neurons could function alone

Huh? It was your assertion that the Neuron was simple and trivial in comparison to the Silicon chip. Doesn't the fact that other Neurons are required to function as memory simply put it on a par with a single chip? I never said they could function alone; I said they could do a lot more than function as a switch. The other Neurons store the values, so how's that any different to a silicon computer needing memory?
Quote:

Hmmm ... well, you're fighting a rather soundly entrenched theory here ...
It is meaningless to measure times and lengths less than these, so, yeah, the phenomena that you mention can (and should be) measured digitally(electrical current through a neuron).


No. I am not fighting QM. Small distances being meaningless to measure doesn't mean they don't exist. We can't measure a picometre with a centimetre rule, but that doesn't mean the picometre doesn't exist. The reason the distance is meaningless is that the very act of measuring it changes it by a degree greater than itself; therefore it is meaningless, not non-existent.
Quote:

But, you don't see billiard balls disappearing when you hit them with a radar gun ... Uncertainty only becomes important at a sub-microscopic level, and even then, I'm not sure which point of mine you are trying to dispute.

That Heisenberg’s Uncertainty principle wouldn't have an effect on the brain if quantum effects are introduced?

And yeah, Uncertainty only works at the quantum level, the quantum level being what I was talking about.
Quote:

Yeah... even if true ... what does that *DO*? Does it assist/enable cognition in any way? Quantum entanglement happens all the time, but that doesn't mean that it's purposefully assisting in brain function.

I don't know. Maybe nothing, maybe everything. What I do know is that everyone in every age, when trying to model the brain, makes it out to be simple, like incredibly complex clockwork, or a telephone exchange, and every time it's shown to be more. What would you say to someone in the nineteenth century who said "we can make a brain, all we need is more cogs!"?
Quote:

Not to load your gun with any bullets, but quantum effects cannot be efficiently simulated in software (classical simulation blows up exponentially). Therefore, if they INDEED ARE an integral part of sentience (which I do NOT believe), then dedicated hardware would be required.

Exactly.
Quote:

And given enough effort and money, the complexity in connectivity that you describe can be reconstructed in electronics.

Well, good luck to you. I think you'll be disappointed, like many others who overestimated the abilities of the technologies of their time.
Quote:

I could make a true random number generator by taking two readings of the speed of a fan. If the first is higher, my random bit is a 0; otherwise, it is a 1. Variances in air and voltage would cause the speed to change. Seeing as most PCs today can access the fan-speed data, it's basically a free random number generator.

Well, it was a misnomer; any Quantum effect will do, but it has to be a quantum effect. Technically, a fan cannot produce true random numbers.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 2:23 PM

KARNEJJ


Quote:

Originally posted by citizen:
I said the parallel processing abilities and consciousness of the human brain could not be replicated on a sequential Binary system. Your assertion was that the Learning mechanisms of NNs were nothing like the Brain, which is why they haven't given rise to consciousness. This isn't the case.
My conclusion is that it’s more than the ability to learn that is required.


I still contend that NN's are unlike the brain in capacity, just as worm ganglia (early brains) are unlike human brains. I doubt you would claim that worms are sentient, but adding complexity to the ganglia they use led to brains that did give rise to sentience. Similarly, NN's that aren't so narrowly limited are, I believe, a key to giving rise to an artificial being with sentience.

Quote:

No. I am not fighting QM. Small distances being meaningless to measure doesn't mean they don't exist. We can't measure a picometre with a centimetre rule, but that doesn't mean the picometre doesn't exist. The reason the distance is meaningless is that the very act of measuring it changes it by a degree greater than itself; therefore it is meaningless, not non-existent.


Even assuming you're correct, it's still unclear what you're trying to get at ... This particular line began when you stated that electrical current needed some sort of infinite precision. I've already calculated that no more than 59 bits are necessary for an EXACT, ACCURATE, PRECISE measurement.

Quote:


And yeah, Uncertainty only works at the quantum level, the quantum level being what I was talking about.


Yeah, but you haven't told me how that relates to sentience ... I think you get to it on the next section.
Quote:


Quote:

Yeah... even if true ... what does that *DO*? Does it assist/enable cognition in any way? Quantum entanglement happens all the time, but that doesn't mean that it's purposefully assisting in brain function.

I don't know. Maybe nothing maybe everything.

Ohh... so, the discovery of quantum effects of unknown consequence is some sort of evidence that what I propose can't be done? Strange evidence, but duly noted.

Quote:


What I do know is that everyone in every age, when trying to model the brain, makes it out to be simple, like incredibly complex clockwork, or a telephone exchange, and every time it's shown to be more. What would you say to someone in the nineteenth century who said "we can make a brain, all we need is more cogs!"?


Would most people assume that Windows XP can be run on a steam-powered system of gears? It's mathematically provable that it could; it's just the complexity (and slowness) that is overwhelming. So, sure (barring the absolute necessity of quantum effects), YES, more cogs is all that was needed.

Quote:

Quote:

And given enough effort and money, the complexity in connectivity that you describe can be reconstructed in electronics.

Well, good luck to you. I think you'll be disappointed, like many others who overestimated the abilities of the technologies of their time.


That's very possible, but how many things wouldn't exist if the innovators had believed the naysayers? (Hint hint .. you're looking at one thing right now )

Quote:

Well, it was a misnomer; any Quantum effect will do, but it has to be a quantum effect. Technically, a fan cannot produce true random numbers.


Heh ... I had a feeling you'd dispute "true" randomness with me. Technically, I suppose you're right. The output of a fan is completely dependent on air turbulence and voltage, both of which can theoretically be measured. But the practicality of measuring every atom of air in a normal room makes the randomness of the fan just as "true" as a quantum system. Quantum-based systems just have the advantage of not even being theoretically possible to measure.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 3:25 PM

CITIZEN


Quote:

Originally posted by Karnejj:
Even assuming you're correct, it's still unclear what you're trying to get at ... This particular line began when you stated that electrical current needed some sort of infinite precision. I've already calculated that no more than 59 bits are necessary for an EXACT, ACCURATE, PRECISE measurement.


That the universe isn't digital.
That a computer is digital.
That a digital system can't perfectly model an analogue one, no matter how much precision you use.
You're counting electrons, which means you've dropped down to the sub-atomic level, so you need a model of the sub-atomic level now. It's not as simple as saying that's how many electrons there are.

But if you think you have calculated EXACT, ACCURATE, PRECISE measurements, why don't you work on that in software, as described? It's the only way to prove me wrong.
Quote:

Ohh... so, the discovery of quantum effects of unknown consequence is some sort of evidence that what I propose can't be done? Strange evidence, but duly noted.

No, I'm assuming that it probably does. Creativity, inspiration, any number of the things that make up consciousness have a very indefinable quality, even to us. It makes sense that this could arise from something ultimately indefinable.

But then, that's my assumption; what makes it less important, less worthy, less viable than your assumption that it's just a function of the brain as a learning machine?

Okay, so you want evidence from me that your proposal won't work, yet you have given no evidence that it:
A) Will even turn on.
B) Is actually viable given modern technology, or even future technology.
C) Would even give rise to a conscious intelligence.

Your entire position is based on assumption: the assumption that silicon technology is capable of what you think it is, the assumption that quantum effects within the brain don't affect its operation, and the big underpinning one, the assumption that consciousness arises solely from learning (or the ability to learn).

Strange evidence that your proposal CAN work, but duly noted.

I don't mean to attack you, as much as it may have come across that way (I’m tired, it’s 1:30am here); it's just that when your point is based heavily on assumption, you can't turn around and attack the assumptions of your opponent like that.
Quote:

Would most people assume that Windows XP can be run on a steam-powered system of gears ... it's mathematically provable that it could, but it's the complexity (and slowness) that is overwhelming. So, sure (barring the absolute necessity of quantum effects), YES, more cogs is all that was needed.

Thus it must be possible to model the brain and consciousness itself algorithmically. So why is the current state of the art in AI in the field of expert systems?
Quote:

That's very possible, but how many things wouldn't exist if the innovators had believed the naysayers? (Hint hint .. you're looking at one such thing right now.)

Actually, the original computers were given overwhelming support by the British government, even when they were years behind schedule and over budget. But I see your point. I can counter, though, that many a person has thrown their life away working on something that cannot be done. Even Einstein did this toward the end of his career.
Quote:

Heh ... I had a feeling you'd dispute "true" randomness with me. Technically, I suppose you're right. The output of a fan is completely dependent on air turbulence and voltage, both of which can theoretically be measured. But the impracticality of measuring every atom of air in a normal room makes the randomness of the fan just as "true" as a quantum system. Quantum systems just have the advantage of not even being theoretically possible to measure.

There's more to something being random than merely being impossible to predict; it also has to be statistically random. I believe the randomness tangent started with trying to build a random element into the simulation (as that is all you believe quantum effects may contribute to the brain, right?). Well, if you're not modelling exactly (i.e. not using true random numbers) then you've got a simulation with an acceptable degree of precision.

Is a simulation with an acceptable degree of precision (i.e. error) as good as the real thing?



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Tuesday, January 3, 2006 5:58 PM

KARNEJJ


A large point you seem to be sticking to is that approximations are bad. I'm sure you would possibly attribute "some sense" of consciousness to dogs, maybe even a squirrel. Those brains are definitely not as complex as human brains. They are, in essence, approximations that still give rise to sentience.

Another issue is that the things that you believe MUST be approximated by finite systems, I can show do not HAVE TO be.

Quote:

Originally posted by citizen:

That the universe isn't digital.


I suppose I can't make you believe in this particular implication of quantum physics.
Quote:


That a computer is digital.
That a digital system can't perfectly model an analogue one no matter how much precision you use.
You're counting electrons, which means you've dropped down to the sub-atomic level, so you need a model of the sub-atomic level now. It's not as simple as saying that's how many electrons there are.


A normal neuron can generate a potential difference of about 70 millivolts. Even if we went nuts and assumed it was 150 mV, dividing that by one electron volt gives us a little less than 950 quadrillion (about 9.5 × 10^17) electrons. A 60-bit register could easily store this inflated count.
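
Taking the figure above at face value (roughly 9.5 × 10^17 electrons; an illustrative estimate from the post, not a physiological claim), the register-width arithmetic does check out:

```python
import math

# "950 quadrillion" -- the deliberately inflated electron count from the post
electron_count = 950 * 10**15

# Bits needed to store a count of this size exactly as an integer
bits = math.ceil(math.log2(electron_count + 1))
print(bits)  # 60
```

Since 2^59 ≈ 5.8 × 10^17 falls short of the count and 2^60 ≈ 1.15 × 10^18 covers it, 60 bits is indeed enough.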

I see no need to "model subatomic behavior" to model a neuron accurately.
The number of neurotransmitters is guaranteed to be a much smaller value and is more likely the limiting factor in precision, but we'll take the number of electrons as the necessary limit of precision.

As I understand it, the first question is whether the current will be generated or not. The neuron holds a threshold which must be exceeded. This threshold is going to be a voltage with less than a 60-bit number of electrons. No approximation needed so far. Next is the total voltage that gets generated - the "arithmetic," as you call it. Again, this is going to be a particular number of electrons which get motivated to pass - a number of finite precision. Finally, some neurotransmitters will be released and, again, this is a finite number.
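
The firing sequence described here—an integer threshold test, an integer charge transfer, an integer transmitter release—can be sketched using nothing but finite-precision counts. All the numbers below are arbitrary illustrative values, not measured quantities:

```python
def fire(input_charges, threshold, transmitters_per_spike):
    """One integrate-and-fire step using only exact integer counts.

    input_charges: electron counts arriving from upstream neurons.
    Returns the number of neurotransmitter units released (0 if no spike).
    """
    total = sum(input_charges)        # finite-precision "arithmetic"
    if total <= threshold:            # the threshold must be exceeded
        return 0
    return transmitters_per_spike     # a finite transmitter count

# Illustrative values only: thresholds and charges are made up.
assert fire([400, 300], threshold=1000, transmitters_per_spike=50) == 0
assert fire([700, 600], threshold=1000, transmitters_per_spike=50) == 50
```

Nothing in this step requires infinite precision; whether such a step captures everything a real neuron does is exactly what's in dispute.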

Quote:


But if you think you have calculated EXACT, ACCURATE, PRECISE measurements, why don't you work on that in software, as described? It's the only way to prove me wrong.


I'm working on that ... This enlightening and quite lively little debate has motivated me to knock the dust off my research and coding.

Quote:


Quote:

Ohh... so, the discovery of quantum effects of unknown consequence is some sort of evidence that what I propose can't be done? Strange evidence, but duly noted.

No, I'm assuming that it probably does. Creativity, inspiration, any number of the things that make up consciousness have a very indefinable quality, even to us. It makes sense that this could arise from something ultimately indefinable.


That was fairly rude of me, my apologies. But you disputed me with something that has no known consequence. It'd be like saying that the particular speed of light through organic matter is probably the reason that sentience cannot arise in machines. Sure, the fact about the speed of light is true, but its consequences aren't even close to being known.

Quote:


But then that's my assumption, what makes it less important, less worthy, less viable than your assumption that it's just a function of the brain as a Learning machine?


Kinda ... learning and high levels of intelligence, so far, are always found paired with sentient beings. Quantum effects can be produced in a lab within a quite non-sentient metal box.

Quote:


Okay, so you want evidence from me that your proposal won't work, yet you have given no evidence that it:
A) Will even turn on.
B) Is actually viable given modern technology, or even future technology.
C) Would even give rise to a conscious intelligence.


A).. ok .. got me on that one. I haven't built it. Heck, I just designed it 2 days ago ...

B) Actually, I've given you a viable design in one of the above posts using current technology. It doesn't have any technical limitations, as far as I can tell, but I'm no Elec Engr. At worst, though, the built version should just end up larger than I propose.

C) Well, I suppose that's what the debate's about, eh.

Quote:


Quote:

Would most people assume that Windows XP can be run on a steam-powered system of gears ... it's mathematically provable that it could, but it's the complexity (and slowness) that is overwhelming. So, sure (barring the absolute necessity of quantum effects), YES, more cogs is all that was needed.

Thus it must be possible to model the brain and consciousness itself algorithmically. So why is the current state of the art in AI in the field of expert systems?


Yes ... even man-made quantum memory banks can be constructed, if that's what's necessary.

Why expert systems? ... maybe b/c those are where the money's at, ever since the commercial success of MYCIN and PROSPECTOR, I'd suspect.

Well, that and the military works on the best stuff and, of course, keeps most of it classified. Like the one Navy AI program that actually does make impressive logical inferences and points out difficult-to-spot contradictions. Can't remember the name right now. Something like Naval Research Logic Analyzer. I think they even ran the Bible through it. And it made inferences like shepherds (in some cases) must be God, etc.
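
Expert systems of the MYCIN lineage mostly boil down to forward chaining over if-then rules: keep applying rules to the known facts until nothing new can be derived. A toy sketch (the rules here are invented for illustration and have nothing to do with the classified system mentioned above):

```python
def forward_chain(facts, rules):
    """Repeatedly apply if-then rules until no new facts can be derived.

    rules: list of (premises, conclusion) pairs, premises being a set of facts.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented toy rules for illustration only.
rules = [
    ({"tends_flock"}, "shepherd"),
    ({"shepherd", "protects_flock"}, "guardian"),
]
derived = forward_chain({"tends_flock", "protects_flock"}, rules)
assert "guardian" in derived
```

The "shepherd must be God"-style inference is just this mechanism chaining rules further than a human reader would.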

Quote:

I can counter that many a person has thrown their life away working on something that cannot be done. Even Einstein did this toward the end of his career.


I can't help but try; my meat programming is making me do it
You're very possibly correct though. I could fail, but success would be spectacular ...

To your credit though, I've even argued with people by refuting the "famous Counting Theory of compression." And that's a pretty solid theory. The Meat made me do it

Quote:

There's more to something being random than merely being impossible to predict, it also has to be statistically random.


My cheap method would be statistically random as well...

Quote:


I believe the randomness tangent started with trying to build a random element into the simulation (as that is all you believe quantum effects may contribute to the brain, right?). Well, if you're not modelling exactly (i.e. not using true random numbers) then you've got a simulation with an acceptable degree of precision.


Well, either something is "random" or it is not. Wave function collapse cannot be duplicated, by definition. But wave function collapse (in any case) and my fan (in a normal room) are both sources of statistical randomness.

More metaphysics follows in the next block...




Not to get too "Matrix" on you, but ....
As for simulations, what if you were looking at, and talking from a distance to, EITHER a person OR his reflection (but he could still hear you)? What if you couldn't see the mirror? Would you be able to dispute that either one is sentient? With no evidence as to which is sentient and which is the simulation, do you deny consciousness to both?

Some simulations can be good enough to leave you no choice but to accept the results as sentient...

You already know how I feel about human sentience ... What if YOU are the reflection in a mirror that can't be seen ...


End of metaphysics, beginning of deference




Keep the good stuff coming. But, it looks to me like a stalemate. You have proved yourself an impressively worthy opponent and I believe I must tip my King to you. Definitely a rare treat!



WOW ... 68 print-pages of discussion so far ... that's kwazy!


Tuesday, January 3, 2006 7:27 PM

KARNEJJ


Before we wander too long .. I'll post my "Revised Laws of Robotics," [based on Asimov's suggestions] to ensure that mankind won't end up as batteries

Feel free to critique!

Zeroth – Define Human, and harm (physical, psychic, consciousness, economic, not emotional), action, inaction, life-threatening, imminent, robot, laws, subject, appreciable, cause (directly, indirectly, through agents)

First - no robot may commit an act that has any appreciable probability of resulting in another robot not being subject to these same laws. [cannot build other robots that are morality-free]

Second – A robot may not through action or inaction allow any humans that are not directly in control of imminent harm to some other human(s) to be dealt life-threatening harm. [cannot be threatened to harm innocents]

Third - A robot may not cause life-threatening harm to a human being, except where this would conflict with higher Laws. [cannot kill anyone, except maybe terrorists]

Fourth - A robot may not through inaction allow more humans to come to harm than necessary, except where this would conflict with higher Laws. [must save as many as possible, instead of reaching deadlock in allowing inaction to sacrifice some]

Fifth - A robot may not harm a human being, except where this would conflict with higher Laws. [cannot hurt anyone, but 2nd allows to save from terrorists]

Sixth - A robot may not deceive or manipulate a human being, except where this would conflict with higher Laws. [cannot lie]

Seventh - A robot must obey orders given it by human beings, except where such orders would conflict with higher Laws. [productivity]

Eighth - A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law [productivity]

Ninth - A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law [productivity]

Tenth - A robot must protect its own existence as long as such protection does not conflict with higher Laws. [productivity]
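
One way to read the ordering above is as a strict priority chain: a proposed action is checked against each law in turn, and the first law it violates blocks it. A hypothetical sketch; the predicates are stand-ins, since pinning down the Zeroth law's definitions is exactly the unsolved part:

```python
def permitted(action, laws):
    """Check a proposed action against laws in priority order.

    laws: list of (name, violates) pairs, highest priority first,
    where violates is a predicate on the action.
    Returns (True, None) or (False, name_of_first_violated_law).
    """
    for name, violates in laws:
        if violates(action):
            return False, name
    return True, None

# Stand-in predicates only; real definitions are the hard Zeroth-law problem.
laws = [
    ("Third: no life-threatening harm", lambda a: a.get("lethal", False)),
    ("Fifth: no harm", lambda a: a.get("harmful", False)),
    ("Seventh: obey humans", lambda a: a.get("disobeys_order", False)),
]
assert permitted({"harmful": False}, laws) == (True, None)
assert permitted({"lethal": True}, laws) == (False, "Third: no life-threatening harm")
```

The structure is trivial; as the thread goes on to argue, everything interesting hides inside those predicates.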


Wednesday, January 4, 2006 2:19 AM

CITIZEN


Quote:

Originally posted by Karnejj:
A large point you seem to be sticking to is that approximations are bad. I'm sure you would possibly attribute "some sense" of consciousness to dogs, maybe even a squirrel. Those brains are definitely not as complex as human brains. They are, in essence, approximations that still give rise to sentience.


Not exactly. Approximations aren't bad per se, but a dog's brain is not an approximation or a simulation of the human brain. My point is that a simulation is not the real thing; it's a simulation of the real thing, just as my earlier-mentioned trip to Jupiter in Celestia isn't a real trip to Jupiter, but a simulation of one.
Quote:

I suppose I can't make you believe in this particular implication of quantum physics.

I've never heard of that as an implication of quantum physics. Heisenberg's Uncertainty Principle seems to refute it by its very existence, digital systems being deterministic.
Quote:

I see no need to "model subatomic behavior" to model a neuron accurately.
The number of neurotransmitters is guaranteed to be a much smaller value and is more likely the limiting factor in precision, but we'll figure that the number electrons is the necessary limit of precision.


Well, you're moving down to the subatomic by looking at electron counts rather than their cumulative effect. I'd have thought you'd need to model their behaviours to get true accuracy.
Quote:

As I understand it, the first question is whether the current will be generated or not. The neuron holds a threshold which must be exceeded. This threshold is going to be a voltage with less than a 60-bit number of electrons. No approximation needed so far. Next is the total voltage that gets generated - the "arithmetic," as you call it. Again, this is going to be a particular number of electrons which get motivated to pass - a number of finite precision. Finally, some neurotransmitters will be released and, again, this is a finite number.

Now factor in the possibility of tens of thousands of weighted input signals, weighted temporally and by locality; the calculation is fairly complex, and I'm not sure how easy it would be to perform on an IC.
Quote:

I'm working on that ... This enlightening and quite lively little debate has motivated me to knock the dust off my research and coding.

Glad to hear it.
Quote:

That was fairly rude, my apologies. But, you disputed me with something with no known consequence. It'd be like saying that the particular speed of light through organic matter is probably the reason that sentience cannot arise in machines. Sure, it's true about the speed of light, but the consequences aren't even close to being known.

Hmm, we both have assumptions underpinning our arguments; I'd say that the assumption that quantum effects don't play a role in consciousness is at least as great as the assumption that they do.
Quote:

Kinda ... learning and high levels of intelligence, so far, are always found paired with sentient beings. Quantum effects can be produced in a lab within a quite non-sentient metal box.

I don't know; computers have demonstrated learning and problem-solving intelligence, yet haven't demonstrated consciousness.
Quote:

Meat made me do it

Uh-uh, it’s always someone or something else’s fault .
Quote:

Not to get too "Matrix" on you, but ....
As for simulations, what if you were looking at and talking from a distance to EITHER a person OR his reflection (but he could still hear you). What if you couldn't see the mirror? Would you be able to dispute that either one is sentient? With no evidence as to which is sentient and which is the simulation, do you deny consciousness to both?

Some simulations can be good enough to leave you no choice but to accept the results as sentient...

You already know how I feel about human sentience ... What if YOU are the reflection in a mirror that can't be seen ...


Well a mirror doesn’t produce a simulation, it just redirects the visible light, and it’s the same as talking face to face. Moreover it’s not the image that is sentient, but the entity.

Taking a similar question: if I were to meet someone in a dream, someone who is purely a construct of my imagination, would they be conscious?
Quote:

Keep the good stuff coming. But, it looks to me like a stalemate. You have proved yourself an impressively worthy opponent and I believe I must tip my King to you. Definitely a rare treat!

Possibly, and thanks I’ve enjoyed the discussion.

The Laws.
They could possibly do with some simplification; there's a saying in computing and programming, KISS (Keep It Simple, Stupid). Asimov's laws cover pretty much everything; the problems in fiction seem to come from too literal an interpretation most of the time.

Rather than burying the original laws in legalese, perhaps adding rules of interpretation, falling back on literal interpretation if none of the rules fit, would be a way to go? For instance a rule of interpretation for the first law could be something Spockish, like the needs of the many outweigh the needs of the few?





Wednesday, January 4, 2006 6:44 AM

KARNEJJ


Quote:

Originally posted by citizen:
Not exactly. Approximations aren't bad per se, but a dog's brain is not an approximation or a simulation of the human brain. My point is that a simulation is not the real thing; it's a simulation of the real thing, just as my earlier-mentioned trip to Jupiter in Celestia isn't a real trip to Jupiter, but a simulation of one.


There were a million different clues on your virtual trip .. what happens when you can't tell the difference?

Quote:

Well, you're moving down to the subatomic by looking at electron counts rather than their cumulative effect. I'd have thought you'd need to model their behaviours to get true accuracy.


Well, I don't believe "true accuracy" is actually a requirement. I don't think 60-bit precision is necessary, either. I was just pointing out that exact precision can, theoretically, be calculated on a digital system.

Quote:


Quote:

As I understand it, the first question is whether the current will be generated or not. The neuron holds a threshold which must be exceeded. This threshold is going to be a voltage with less than a 60-bit number of electrons. No approximation needed so far. Next is the total voltage that gets generated - the "arithmetic," as you call it. Again, this is going to be a particular number of electrons which get motivated to pass - a number of finite precision. Finally, some neurotransmitters will be released and, again, this is a finite number.

Now factor in the possibility of tens of thousands of weighted input signals, weighted temporally and by locality; the calculation is fairly complex, and I'm not sure how easy it would be to perform on an IC.


Ahh ... so you're starting to see the possibility ... eh ... it's a start. Memory space is cheap .. so whether it's 10 or 10,000 signals, they still all only require an address (and maybe a weighting). The simplest way to model the system is to clock your system X times (X = number of connections) to send out your "electronic neurotransmitters," and then allow neurons to check for new inputs on the Xth clock. Update your neuron on that last clock: check if it received enough "neurotransmitters" to exceed whatever its firing threshold function is, and then do it all again. (Again, I've neglected the re-wiring procedures that would be needed.)
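
The clocking scheme described above—deliver along each connection, then let every neuron check its accumulated input on the final tick—might look like the following sketch. The weights and thresholds are invented, and rewiring is omitted, as in the text:

```python
def step_network(neurons, connections, fired):
    """One full update cycle of the clocked scheme sketched above.

    neurons: dict of id -> {"threshold": int, "input": int}
    connections: list of (src, dst, weight) -- "an address and a weighting"
    fired: set of neuron ids that fired last cycle
    Returns the set of ids firing this cycle.
    """
    # Phase 1: one clock per connection, delivering "electronic neurotransmitters"
    for src, dst, weight in connections:
        if src in fired:
            neurons[dst]["input"] += weight
    # Phase 2: on the final clock, each neuron checks its accumulated input
    now_firing = set()
    for nid, n in neurons.items():
        if n["input"] > n["threshold"]:
            now_firing.add(nid)
        n["input"] = 0  # reset for the next cycle
    return now_firing

# Invented two-neuron example: "a" fired last cycle and drives "b" over threshold.
neurons = {"a": {"threshold": 0, "input": 0}, "b": {"threshold": 5, "input": 0}}
connections = [("a", "b", 3), ("a", "b", 4)]
firing = step_network(neurons, connections, fired={"a"})
assert firing == {"b"}
```

Run repeatedly, this gives the "do it all again" loop; whether it scales to the brain's connectivity is the open question of the thread.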

Quote:


Quote:

Kinda ... learning and high levels of intelligence, so far, are always found paired with sentient beings. Quantum effects can be produced in a lab within a quite non-sentient metal box.

I don't know; computers have demonstrated learning and problem-solving intelligence, yet haven't demonstrated consciousness.


My offer still stands: show me ANY entity which you consider to have a high level of intelligence and a capacity to learn general information, and I can show you a sentient being.

Quote:


Quote:

Not to get too "Matrix" on you, but ....
As for simulations, what if you were looking at and talking from a distance to EITHER a person OR his reflection (but he could still hear you). What if you couldn't see the mirror? Would you be able to dispute that either one is sentient? With no evidence as to which is sentient and which is the simulation, do you deny consciousness to both?

Some simulations can be good enough to leave you no choice but to accept the results as sentient...

You already know how I feel about human sentience ... What if YOU are the reflection in a mirror that can't be seen ...


Well a mirror doesn’t produce a simulation, it just redirects the visible light, and it’s the same as talking face to face. Moreover it’s not the image that is sentient, but the entity.


You can just as easily picture a robot that mirrors a human's movements. The question still stands .. do you deny them BOTH consciousness, because you know that ONE is a simulation? How do you resolve the issue? Can a simulation be good enough to be considered sentient? It's the same paradox that applies to the "Grand Chessmaster problem."

On a side note, in the end, how do you know that everything you experience and even everything that you DO isn't a fancy simulation? And if you KNEW that we were elaborate simulations of some sort, would you then deny yourself consciousness/sentience/self-awareness??? Responses up to this point seem to indicate that you would deny yourself that coveted consciousness ...

Quote:


The Laws.
They could possibly do with some simplification; there's a saying in computing and programming, KISS (Keep It Simple, Stupid). Asimov's laws cover pretty much everything; the problems in fiction seem to come from too literal an interpretation most of the time.

Rather than burying the original laws in legalese, perhaps adding rules of interpretation, falling back on literal interpretation if none of the rules fit, would be a way to go? For instance a rule of interpretation for the first law could be something Spockish, like the needs of the many outweigh the needs of the few?


Well, there are a lot of loopholes in Asimov's laws, especially with regard to human vs. human conflicts and the robots' role.

"The needs of the many outweigh the needs of the few..." What if there happen to be 10 terrorists and 9 hostages?

Asimov's first law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." There are numerous twisted scenarios where a robot would allow more humans to die than a human would. A man wants to commit suicide by jumping off the adjacent roof .. there's a beanbag gun next to the robot .. the robot wouldn't dare hurt the person, but the inaction allows the person to come to harm ...


Wednesday, January 4, 2006 4:02 PM

CITIZEN


Quote:

Originally posted by Karnejj:
There were a million different clues on your virtual trip .. what happens when you can't tell the difference?


Most people can't tell the difference between a dream and reality. I say most people because I've experienced lucid dreaming, but it's not quite as simple as that.

Point is we have something there where people can't tell the difference.
Quote:

Ahh ... so you're starting to see the possibility ... eh ... it's a start Memory space is cheap .. so whether it's 10 or 10,000 signals, they still all only require an address (and maybe a weighting). Simplest way to model the system is to clock your system X times (X = number of connections) to send out your "electronic neurotransmitters," and then allow neurons to check for new inputs on the Xth clock. Update your neuron on that last clock. Check if it received enough "neurotransmitters" to exceed whatever it's firing threshhold function is and then do it all again. (Again, I've neglected the re-wiring procedures that would be needed.)

It's not quite that simple. One way the inputs are weighted is by the shape of the dendrites themselves, which means that no two neurons will react to the same input signals in the same way. I'm also unsure how this system would take into account the temporal weighting I mentioned earlier.

I'm still not convinced that that system could model the vast time dependent interactions of the Neurons though.

But before we get too excited about the abilities of modern microelectronics, I'd like you to consider that the brain is one of the most densely connected network systems in the known universe.
Somewhat unintuitively, the more intelligent someone is, the less active their brain will be compared to the brain of someone less intelligent performing the same task.
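
The temporal-weighting objection can at least be stated precisely: a spike's contribution decays with the time since it arrived, so the same charges arriving at different times sum differently. A sketch with an invented exponential-decay kernel (the time constant and values are arbitrary illustrations):

```python
import math

def weighted_sum(spikes, now, tau=5.0):
    """Sum spike charges weighted by recency (invented exponential-decay kernel).

    spikes: list of (arrival_time, charge) pairs.
    """
    return sum(charge * math.exp(-(now - t) / tau) for t, charge in spikes)

# The same two charges contribute differently depending on when they arrived.
recent = weighted_sum([(9.0, 100), (10.0, 100)], now=10.0)
stale = weighted_sum([(1.0, 100), (2.0, 100)], now=10.0)
assert recent > stale
```

This is the kind of extra state a purely per-cycle threshold model would have to carry to capture the time-dependent interactions being discussed.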
Quote:

Show me ANY entity which you consider to have a high level of intelligence and a capacity to learn general information and I can show you a sentient being.

Insomuch as I believe true high-level intelligence arises from consciousness, you're right. But would a silicon computer ever show true high-level intelligence?

Consider your John Doe example. Is the Robot Doe acting like John Doe because of his/its consciousness, or does it act that way because that's what John would have done?
Quote:

You can just as easily picture a robot that mirrors a human's movements. The question still stands .. do you deny them BOTH consciousness, because you know that ONE is a simulation?

Do we extend the sentience of the Crypt Keeper's puppeteer from Tales from the Crypt to the puppet?
Quote:

there are a lot of loopholes in Asimov's laws

The laws are too simple to contain loopholes. A loophole would be where a robot could murder a human being under certain circumstances without breaking any laws. Loopholes come with complexity.

Take a look at human criminal laws. All legalese ends up doing is adding complexity; it doesn't help to refine the laws, it helps people with expensive lawyers get away with it.
Quote:

especially in regards to human vs. human conflicts and the robots' role.

As I understand it there's no loophole here; the message is simple: in human vs. human conflicts, robots stay the hell out.
Quote:

"The needs of the many outweigh the needs of the few..." What if there happen to be 10 terrorists and 9 hostages?

There would be numerous rules for applying the laws, but at the end of the day, if you're going to be employing robots in combat situations, the last thing you want to do is give them Asimov-style behavioural blocks.
Quote:

A man wants to commit suicide by jumping off the adjacent roof .. there's a beanbag gun next to the robot .. the robot wouldn't dare hurt the person, but the inaction allows the person to come to harm ...

Not sure what you mean by this.





Wednesday, January 4, 2006 5:09 PM

KARNEJJ


Quote:

Originally posted by citizen:
Do we extend the sentience of the Crypt Keeper's puppeteer from Tales from the Crypt to the puppet?


Don't we have to if we can't see the strings? I mean flip it over, open it up, put it on every scope we've got, and still .. nope .. no strings?

If not, then it does seem that sentience truly IS unattainable artificially (except by accident), because it would seem that a prime requirement would be that "to be sentient, we must NOT know the source of what makes it work [in this case .. the puppeteer]."

... and about ole Robot John Doe ... you never did hazard a guess at when in particular (as his brain was mechanized) he would lose sentience (if ever).

Quote:

Quote:

especially in regards to human vs. human conflicts and the robots' role.

As I understand it there's no loophole here; the message is simple: in human vs. human conflicts, robots stay the hell out.


Well, that robot company's stock would drop awfully quickly if the supposedly intelligent machines let their owners get smacked around. Dogs have been taken out and shot for less

Quote:

but at the end of the day, if you're going to be employing robots in combat situations, the last thing you want to do is give them Asimov-style behavioural blocks.


It's not so much that the blocks are needed because the robots are INTENDED for combat; the blocks are there JUST IN CASE they go off and decide that THEY want combat.

How bad would it suck for the guy who finally invents sentient AI and then he watches his creation go berserk and kill every living human? ... So, yeah ... maybe some Robot Rules are in order


Wednesday, January 4, 2006 5:42 PM

CITIZEN


Quote:

Don't we have to if we can't see the strings?

You can't see the strings; it's animatronic. And no.
Quote:

Well, that robot company's stock would drop awfully quickly if it let its owner get smacked around

We're talking about two different things here.
How does the robot know whose side to step in on?

The point is, if you start putting in provisos like "you can't hurt humans, unless they're terrorists," then that's a loophole. What happens if, for whatever reason, the robot sees someone as a terrorist because they have a toy gun or something?

This actually happened with a soldier shooting a kid in Northern Ireland, because all he saw was the end of a realistic toy gun sticking out of a window.

If we're going to leave robots open to making these kinds of mistakes because of loopholes introduced by obfuscation of the laws, then there is absolutely no point in having the laws in the first place.

I assume you've seen the Will Smith film version of I, Robot (not much to do with Asimov at the end of the day, but oh well). Heaping on more rules (and even adding rules of interpretation that are anything but literal) would lead to both the actions of Sonny and those of the central computer (the name of which eludes me at the moment).

I personally think Asimov's laws of Robotics are about as perfect as they can be; they're like the Ten Commandments: simple, direct and to the point, and everyone knows where they stand. Though one of your aforementioned robotics companies would probably want to put a no-stealing law in there.

If we were ever to create an intelligent race, such as robots, we may want to revise laws such as Asimov's that we might embed at a later date, but I think that revision would take the form of removal, rather than anything else.

EDIT:
Quote:

... and about ole Robot John Doe ... you never did hazard a guess at when in particular (as his brain was mechanized) he would lose sentience (if ever).

When they’ve taken away more of his brain’s non-autonomic function than the rest can make up for.

I’d be willing to bet that no matter how well Robo Doe mirrors John Doe’s past personality, if you looked into his/its eyes, they’d be dead.

It would be like watching a recording of someone who died; do we ascribe sentience to the image on the screen? I’ve never heard it argued that we do.
Quote:

If not, then it does seem that sentience truly IS unattainable artificially (except by accident), because it would seem that a prime requirement would be that "to be sentient, we must NOT know the source of what makes it work [in this case .. the puppeteer]."

Totally possible. Though it wouldn’t be the first time we’ve created a technology and even used it without really knowing how it worked. If we created a biological brain unlike any that has existed, through genetic engineering of neurons and building them up, and that brain gained sentience, we would still have created it, yet we still wouldn’t know how it works.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Thursday, January 5, 2006 2:13 AM

KARNEJJ


Quote:


Quote:

Don't we have to if we can't see the strings?

You can't see the strings it's animatronic, and no.


Of course we can see the "strings" if it's animatronic and we open it up. What if you open it up and still can't explain its intelligent behavior? *That* is the question at hand.

Quote:

Originally posted by citizen:
I assume you've seen the Will Smith film version of I, Robot (not much to do with Asimov at the end of the day, but oh well). Heaping on more rules (and even adding rules of interpretation that are anything but literal) would lead to the actions of both Sonny and the central computer (the name of which eludes me at the moment).

I personally think Asimov’s laws of Robotics are about as perfect as they can be; they're like the Ten Commandments: simple, direct, and to the point, so everyone knows where they stand. Though one of your aforementioned robotics companies would probably want to put a no-stealing law in there.



Actually, following Asimov's laws would probably MANDATE the end of human civilization.

Premise: Human conflict is inevitable.
Premise: Human conflict causes harm to humans.
Premise: Humans would be harmed if they must be physically restrained from entering conflicts.
Premise: Sleep is harmless.

Therefore, if a robot is ordered to not allow humans to come to harm through inaction (and this rule supersedes all others), then the only logical development is probably to anesthetize us all and keep us asleep until we die.

Maybe loophole is the wrong term, but the above scenario is a pretty serious deficiency. I attempt to cover this deficiency in my revised laws by adding an unconventional meaning of harm in my Zeroth law (relating to consciousness [as in awake, not referring to sentience]). I think I'll add something to that effect as a new law after the third, though ...
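The argument above can be sketched as a toy program (purely a hypothetical illustration, nothing a real robot would run; the policy names and harm scores are invented for the example): a literal-minded robot ranking options purely by expected human harm, as the First Law's "through inaction, allow no harm" clause would demand.

```python
# Toy sketch of the syllogism: the robot's only criterion is expected harm.
# Scores are invented; lower is "better" under a literal First Law reading.
policies = {
    "do_nothing":      1.0,  # conflict is inevitable, so harm occurs
    "restrain_humans": 0.5,  # restraint itself harms people
    "anesthetize_all": 0.0,  # sleep is "harmless", so zero harm
}

def first_law_choice(policies):
    """Pick the policy minimizing harm -- the only criterion the law gives."""
    return min(policies, key=policies.get)

print(first_law_choice(policies))  # -> anesthetize_all
```

The point of the sketch: nothing in the rule itself penalizes putting everyone to sleep, so the "deficient" outcome falls straight out of the optimization.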


Thursday, January 5, 2006 3:27 AM

CITIZEN


Quote:

Of course we can see the "strings" if it's animatronic and we open it up. What if you open it up and still can't explain its intelligent behavior? *That* is the question at hand.

That's a different question, but if we have a definable originator of the actions, it is reasonable to assume that the puppetry is merely beyond our understanding.

You may find this interesting:
Quote:

The Meta-Law
A robot may not act unless its actions are subject to the Laws of Robotics

Law Zero
A robot may not injure humanity, or, through inaction, allow humanity to come to harm

Law One
A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law

Law Two
A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law
A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law
Law Three
A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law
A robot must protect its own existence as long as such protection does not conflict with a higher-order Law
Law Four
A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law

The Procreation Law
A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics


http://www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html

Understanding that permanently putting people to sleep would count as harming or injuring humanity.
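The strict priority ordering in Clarke's list can be sketched as a toy program (purely illustrative; the law predicates and action attributes are invented for the example): when every option violates *some* law, a robot following the hierarchy should pick the option whose violation sits lowest in the priority order.

```python
# Toy sketch of hierarchical law evaluation, highest priority first.
# The predicates and action attributes are invented for illustration.
LAWS = [
    ("Zero",  lambda a: a["harms_humanity"]),
    ("One",   lambda a: a["harms_human"]),
    ("Two",   lambda a: a["disobeys_order"]),
    ("Three", lambda a: a["endangers_self"]),
]

def first_violation(action):
    """Index of the highest-priority law this action violates (len(LAWS) if none)."""
    for i, (_, forbids) in enumerate(LAWS):
        if forbids(action):
            return i
    return len(LAWS)

def choose(actions):
    """Pick the action whose worst violation is least severe."""
    return max(actions, key=lambda name: first_violation(actions[name]))

# Ordered into danger: obeying endangers only the robot (Law Three);
# refusing disobeys a human (Law Two, higher priority) -- so it must obey.
options = {
    "obey":   {"harms_humanity": False, "harms_human": False,
               "disobeys_order": False, "endangers_self": True},
    "refuse": {"harms_humanity": False, "harms_human": False,
               "disobeys_order": True,  "endangers_self": False},
}
print(choose(options))  # -> obey
```

Which also shows why the Zeroth law is so dangerous: anything flagged as "good for humanity" automatically outranks every protection an individual human has.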





Thursday, January 5, 2006 8:03 AM

KARNEJJ


Quote:

Originally posted by citizen:
You may find this interesting:
http://www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html

Understanding that permanently putting people to sleep would count as harming or injuring humanity.



Well, those aren't strictly Asimov's laws ... the ones you quoted include additions from some Roger Clarke guy. But they are surprisingly similar to mine, so he must be a genius ...

Dang, we're good ... great minds and all ...


Thursday, January 5, 2006 8:13 AM

CITIZEN


Yeah, I got that. The important one, I think, is the Zeroth Law, and that was added by Asimov himself, after the fact.





Thursday, January 5, 2006 10:32 AM

KARNEJJ


Quote:

Originally posted by citizen:
Yeah, I got that , the important one I think is the zero law, and that was added by Asimov himself, after the fact.



I think a "procreation" type law actually is a necessity for any list of laws. What good are the laws if they only apply to the first generation of robots? The second generation just gets to be the ones that plug us into the Matrix.


Thursday, January 5, 2006 10:41 AM

CITIZEN


Nah, just castrate them...

But yeah, you're right.





Thursday, January 5, 2006 11:49 AM

KARNEJJ


This thread ->

So sad

82 pages though .. pretty nice discussion

