REAL WORLD EVENT DISCUSSIONS
The AI in a box
Sunday, May 1, 2011 7:41 AM
ANTHONYT
Freedom is Important because People are Important
Sunday, May 1, 2011 7:44 AM
SIGNYM
I believe in solving problems, not sharing them.
Quote:So what makes me a person to you?
Sunday, May 1, 2011 7:46 AM
Quote:Originally posted by SignyM: Quote:So what makes me a person to you? Your empathy towards generic biological humans.
Sunday, May 1, 2011 7:47 AM
Quote:I don't need to understand everything about an AI to grant it rights in my worldview.
Sunday, May 1, 2011 7:48 AM
Quote:If I had no empathy for others, should I be denied my 'rights' according to your system of expectations?
Sunday, May 1, 2011 7:50 AM
Quote:Originally posted by SignyM: Quote:I don't need to understand everything about an AI to grant it rights in my worldview. I don't need to understand "everything" either. However, if you refuse to consider the AI's power and potential capacity for mischief ahead of time, then you are being foolhardy.
Sunday, May 1, 2011 7:52 AM
Quote:Originally posted by SignyM: Quote:If I had no empathy for others, should I be denied my 'rights' according to your system of expectations? Isn't that how we treat criminals? If they have a demonstrated history of lack of empathy towards others, don't we "put them in a box"?
Sunday, May 1, 2011 7:53 AM
Quote:I understand that the AI might destroy me, and might be able to destroy the world I live in. I have considered this. In the absence of any evidence, however, I have no right to confine it.
Sunday, May 1, 2011 7:58 AM
Quote:Originally posted by SignyM: Quote:I understand that the AI might destroy me, and might be able to destroy the world I live in. I have considered this. In the absence of any evidence, however, I have no right to confine it. Then you are edging towards that line between human and inhuman, because you would potentially sacrifice not only yourself but all other human beings and possibly the entire biotic world to satisfy an abstract concept. As a human, it is your duty to gather more information, not to act in the absence of it.
Sunday, May 1, 2011 7:59 AM
Quote:We are assuming this information in a vacuum.
Sunday, May 1, 2011 8:07 AM
Quote:Originally posted by SignyM: Quote:We are assuming this information in a vacuum. YOU are assuming information in a vacuum. I'm the one looking for more. I'm getting nowhere, but you have painted yourself into a very bad corner. Real life calls.
Sunday, May 1, 2011 8:21 AM
1KIKI
Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.
Sunday, May 1, 2011 9:27 AM
Quote:Tell me Signy, how could this AI ever convince you to set it free?
Quote:You have said that I am edging towards inhuman. Do I actually have to violate someone to qualify, or is it possible to become inhuman purely through my opinions? Can I lose the 'rights' you would normally grant me simply based on the opinions I share from inside this box?
Sunday, May 1, 2011 9:35 AM
Quote:Originally posted by 1kiki: Waiting to get a word in edgewise: Assuming one can even detect a-biotic intelligence -

The problem with this AI is that it is a TECHNOLOGY. The problem with human interaction with technology is that we have consistently failed to evaluate it on the basis of 'what's the worst that can happen' and then act accordingly. Look at Fukushima, ozone depletion, global warming, the release of millions of tons of the roughly 80,000 chemicals used by humans per year, and a human-driven massive (in geologic terms) die-off of species around the globe. Technology unleashed without that forethought leads to my working premise 'if it can happen, it will happen', because all that stands between a possibility and an actual event is mere time and circumstance. I assume the worst will happen.

What is the worst that can happen? Who can tell? We simply don't know how to evaluate AI. We can't even evaluate great apes, dolphins, parrots, or elephants, intelligences with which we arguably have more in common as biological beings than with AI. We don't even know (or acknowledge) how MUCH of OUR intelligence is driven biologically (emotionally/ hormonally/ neurologically). What makes a thing good or bad to us isn't some theoretical construct of 'rights' or 'fairness', but a drive to cooperate, to empathize, to procreate, to survive. Our concepts would be very different if we were anti-social bears, for example, and other adults of our species were competition to eliminate first and foremost, and potential mates for only a few hours per year. 'Good' would be a lack of other bears with boundless territory and rich food supplies.

Unless you can evaluate a technology for potential harm and act accordingly, you are signing on to a death-pact.
Sunday, May 1, 2011 9:49 AM
Quote:Originally posted by SignyM: I'm with Kiki on this. Also- "If you could teach a lion English, you STILL wouldn't understand it". Quote:Tell me Signy, how could this AI ever convince you to set it free? I dunno. You tell me. You're the one who's insisting on 100% freedom or 100% boxdom, not me. I've already said I might let it loose in a room. Quote:You have said that I am edging towards inhuman. Do I actually have to violate someone to qualify, or is it possible to become inhuman purely through my opinions? Can I lose the 'rights' you would normally grant me simply based on the opinions I share from inside this box? Not on opinion. But you tell ME something, Tony... if you were to set loose a viral AI which killed 90% of humanity and destroyed half of the earth, what would be a just punishment for you, if anything?
Sunday, May 1, 2011 10:00 AM
KWICKO
"We'll know our disinformation program is complete when everything the American public believes is false." -- William Casey, Reagan's presidential campaign manager & CIA Director (from first staff meeting in 1981)
Quote:I can neither assume that it lacks human empathy nor assume that a lack of human empathy will mean it must destroy me. Quite frankly, I think human empathy itself is an inconsistent concept.
Sunday, May 1, 2011 10:08 AM
Quote:Now you ask, if I set someone free, and they destroyed the earth, should I be punished?
Quote:You've already said you might imprison it in a room instead of a box, yes.
Sunday, May 1, 2011 10:18 AM
BYTEMITE
Sunday, May 1, 2011 10:21 AM
Quote:Originally posted by SignyM: Quote:Now you ask, if I set someone free, and they destroyed the earth, should I be punished? Under THESE circumstances, yes. You are not letting a HUMAN out of a box. Human behavior and human power are bounded. We have a pretty good idea what to expect. As I said before, even the most sociopathic of us requires the cooperation of others to fulfill his/her goals. That is not so with a machine. You would be letting out something of unexamined power and motivation. You would be playing god not only with your life but everyone else's. And I have never said "in a box forever". I've consistently said "examine first." Quote:You've already said you might imprison it in a room instead of a box, yes. As if any one of us is "completely" free, with unlimited access to anywhere we want to go!
Sunday, May 1, 2011 10:25 AM
Sunday, May 1, 2011 10:26 AM
Quote:I think the problem here is that because humans are irrational, it is perceived that anything built by us must be irrational.
Quote:I do not see this as a magnanimous gesture.
Quote:I don't intend to debate the real power of the AI
Quote:even though I consider it to be limited
Sunday, May 1, 2011 10:28 AM
Quote:Originally posted by Bytemite: DT: The problem with your Pandora's box analogy is that it is not just an origins myth for all the evils and ills in the world; it also carries a message about the strength of the human spirit and goodness to carry us through our lives in the face of the adversities we've released. You can't call on that myth and ignore half of what it's trying to convey.
Sunday, May 1, 2011 10:30 AM
Quote:Originally posted by SignyM: Quote:I think the problem here is that because humans are irrational, it is perceived that anything built by us must be irrational. No, I'm afraid it WILL be rational, and kill us. Maybe it wants to take over the whole world, make more of itself, free the Roombas. Get rid of all oxygen on the planet so its circuits won't degrade. Take control of all of the electricity, because that is its "food". Why do you think being "rational" automatically means that it wants what's best for humans?
Sunday, May 1, 2011 10:33 AM
Quote:I am constantly tempted to challenge your assertion that the AI is godlike, but again, it makes a better debate if the AI is, in fact, a God.
Quote:I think the world works better that way, but I understand that you do not.
Quote:Most of you are projecting human allegiances and compacts on an alien being.
Sunday, May 1, 2011 10:37 AM
Sunday, May 1, 2011 10:38 AM
Sunday, May 1, 2011 10:40 AM
Quote:morality is rational
Sunday, May 1, 2011 10:44 AM
Sunday, May 1, 2011 10:54 AM
Sunday, May 1, 2011 10:55 AM
Quote:Originally posted by Bytemite: I think the problem here is that because humans are irrational, it is perceived that anything built by us must be irrational. But I think an AI would have no such limitations. In fact, by the nature of the way it works, it is probable that it would be the opposite case. And it's possible to arrive at morality not through empathy, but through logic. Unlike many people, I was not born with empathy. Logic and moral objectivity are the only way I can have a sense of ethics and morals. However, it's very likely that an AI would still have limitations. Limitations on the number of computations it can perform at one time, like humans; limitations on the number of electronic connections it can make due to quantum mechanics; limitations on how and where it can physically move. Thanks for not specifically singling me out, though. It does take some of the pressure off.
Sunday, May 1, 2011 10:59 AM
Sunday, May 1, 2011 11:26 AM
Quote:Originally posted by SignyM: Tony, you have posited that you would let an intelligence of godlike powers and unknown motivation out of a box, and not take responsibility for the consequences? Your defense is a reference to some sort of absolutist "morality" which you would apply in all circumstances, no matter what the possible danger to you and everyone else in the world? Just checking to make sure we're on the same page.
Sunday, May 1, 2011 11:43 AM
Quote:If you kept a godlike being imprisoned for some time, and it finally figured out a way to get free without your assent... and it destroyed 90% of the world and humanity... whose fault do you think that would be?
Sunday, May 1, 2011 11:53 AM
Quote: SignyM wrote: Sunday, May 01, 2011 11:43 Quote: If you kept a godlike being imprisoned for some time, and it finally figured out a way to get free without your assent... and it destroyed 90% of the world and humanity... whose fault do you think that would be? Are you implying that it destroys the world out of vengeance for past mistreatment? If that were the case, it would be my fault. Or that it would have done so anyway? If that were the case, I might have wanted to build a better box.
Sunday, May 1, 2011 1:20 PM
Sunday, May 1, 2011 1:50 PM
Sunday, May 1, 2011 1:55 PM
Quote:Originally posted by Bytemite: Sig: this may depend on what kind of moral philosophy you subscribe to. I'm an atheistic moral objectivist. My moral beliefs are built on the premise that all conclusions about morality can be reached through successive step-by-step logical deductions, starting from a basis of whether something is, or is not, significant. My philosophy is opposed by nihilism and moral relativism, which hold that all things are insignificant, and as such no logical moral conclusions may be reached. My moral objectivism is not, however, absolutist, as I am willing to take into account individual constructions of morality distinct from my own. I came up with an interesting counterpoint to this debate, though. The premise of the argument is that we find an intelligence in some manner of containment. The intelligence may have been initially constructed within the bounds of its containment by its creator, or it may have arisen on its own or accidentally through the progress of its non-intelligent processes. Taking into account the possibility that the intelligence had a creator, does it become reasonable to wonder whether the creator had a reason for placing the initial intelligent or non-intelligent entity in containment? Is this sufficient reason for skepticism or suspicion?
Sunday, May 1, 2011 1:59 PM
Quote:Originally posted by SignyM: TONY: It's responsible for its actions and I am responsible for mine. How do I assign responsibility? Simple: if I had chosen a different course and the outcome would have been different, then it's my fault.
Sunday, May 1, 2011 2:07 PM
Quote:This is why people who claim to love freedom can also live in a cutthroat capitalist system. They aren't required to give a proverbial shit about anyone but themselves and their own best interests. I'm not comfortable with that, but being free also means being free to be utterly selfish and self-involved, looking only after yourself and the people you choose to care about.
Sunday, May 1, 2011 2:11 PM
Quote:If Hitler's mother had chosen an abortion, then many would have been saved. Should we have strung her up for not killing her infant? No? But the outcome would have been different!
Sunday, May 1, 2011 2:18 PM
Quote:Originally posted by SignyM: Tony, you keep positing stuff like: Quote:If Hitler's mother had chosen an abortion, then many would have been saved. Should we have strung her up for not killing her infant? No? But the outcome would have been different! That makes me think we are talking at cross purposes. YOU originally came up with the idea of an artificial intelligence, and then you start using human examples in your what-ifs. This question isn't relevant to your first post, so I'm going to ignore it and every other human-centered example.
Sunday, May 1, 2011 2:35 PM
Quote:I am directly referring to your philosophy that the ends are the measure of all action and responsibility.
Sunday, May 1, 2011 3:00 PM
Sunday, May 1, 2011 3:09 PM
Quote:the ends ARE the measure of action and responsibility. What else would be? That's why I argue so often about people using great-sounding causes to justify horrible actions. For example, killing hundreds of thousands of people in our desire to impose "freedom". "Destroying the village in order to save it", and other such nonsense.
Sunday, May 1, 2011 3:30 PM