Scary? Exciting? Or both? Or more?
REAL WORLD EVENT DISCUSSIONS
2045: The Year Man Becomes Immortal...or not...
Friday, February 18, 2011 12:16 PM
NIKI2
Gettin' old, but still a hippie at heart...
Quote:On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists had to guess what it was: the music was composed by a computer. Kurzweil got $200.

To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence. That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing. True? True. So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties. If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off.
From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville. Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity. The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. 
There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
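The quoted article's "faster faster" argument rests on compound doubling of computing power. As a rough illustration only (the article gives no figures, so the 18-month doubling period below is an assumed Moore's-law-style number, not Kurzweil's own model, which posits an accelerating rate):

```python
def speedup(years, doubling_period_years=1.5):
    """Relative compute power after `years`, assuming a fixed
    doubling period (1.5 years is an assumed, illustrative figure)."""
    return 2 ** (years / doubling_period_years)

# Over the article's 35-year horizon, constant doubling alone
# yields roughly a ten-million-fold increase in compute power.
print(speedup(35))  # roughly 10.6 million
```

Note that this sketch uses a *constant* doubling rate; the article's stronger claim is that the doubling period itself shrinks over time, which would make the curve steeper still.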
Friday, February 18, 2011 2:04 PM
BYTEMITE
Friday, February 18, 2011 2:34 PM
RIONAEIRE
Beir bua agus beannacht
Friday, February 18, 2011 2:50 PM
FREMDFIRMA
Quote:Originally posted by Bytemite: So bring them on. I only hope that we don't treat our mechanical children the way we treat our organic children, because then humans really WILL be screwed.
Saturday, February 19, 2011 12:07 AM
DREAMTROVE
Monday, February 21, 2011 5:56 PM
Monday, February 21, 2011 6:07 PM
THEHAPPYTRADER
Tuesday, February 22, 2011 10:16 AM
ANTHONYT
Freedom is Important because People are Important
Quote:Originally posted by dreamtrove: Ray is a sharp guy. I met Ray when I was very young, he showed me how to code assembler. I remember how fast he could code, it was hard to conceive. At the time he was into the idea that computers could produce music, but had not yet made his mark on the field. He didn't strike me as a megalomaniac, as is often painted by the media. He was really humble and casual, just very much a dreamer, but also an accomplisher. I've read some stuff on his theories on longevity since then, it's a more complicated topic. I'm not sure. Re: AI, it will happen, but I don't think it will happen like this. Sentient machines will not see us as a threat, any more than we see frogs as a threat. If they see anyone as a threat, it will be each other.
Tuesday, February 22, 2011 11:40 AM
Tuesday, February 22, 2011 11:46 AM
KPO
Sometimes you own the libs. Sometimes, the libs own you.
Tuesday, February 22, 2011 12:12 PM
Tuesday, February 22, 2011 12:28 PM
Quote:I'm assuming that they're aware enough to look at humans and learn from them. If they do, self-preservation will be one of the things I'd imagine they learn, as will progeny propagation.
Tuesday, February 22, 2011 12:33 PM
Tuesday, February 22, 2011 12:36 PM
Tuesday, February 22, 2011 12:38 PM
Tuesday, February 22, 2011 12:50 PM
Quote:A mass robot revolt would require an impressive conspiracy I don't think 'Artificial Intelligences' modeled after our own could pull off. That would require a hive-mind and perfect cooperation/agreement amongst them.
Quote:Who's to say the PCs and Macs will get along? They may very well start a civil war amongst robots!
Tuesday, February 22, 2011 1:06 PM
Quote:I'm not assuming any emotion, I'm assuming that any intelligent "species," mechanical or organic, would try to figure out ways to avert the extinction of their species. It's actually a logical response to danger.
Quote:would require assuming that humans are automatically better and more important than intelligent machines, which is an anthropocentric viewpoint
Tuesday, February 22, 2011 1:28 PM
Quote:Robots copying humans, trying to look human and falling in love with humans and having sex with humans is... kinda weird.
Tuesday, February 22, 2011 1:43 PM
Quote:But it assumes that they *care* about the possible extinction of their species. Which assumes emotion.
Quote:We will never care about slugs as much as other human beings, it's instinct, and genes. If we come to care about robots, it's because they will resemble humans (and probably we will design them that way to some extent).
Quote:To me morality is a vain human sense of self-importance, in an existence where nothing has any real meaning.
Quote:But there's nothing rational or consistent about it :-/
Tuesday, February 22, 2011 2:01 PM
Quote:"Vain?" How very humble of you.
Quote:Morality with a basis in logic has worked as an important social mechanism for nearly ten thousand years to reduce the amount of human-to-human murder.
Quote:I can assure you there are definitely ways it can be both logical and consistent
Quote:Perhaps the studies upon which you're basing your viewpoint are fallacious.
Tuesday, February 22, 2011 2:02 PM
KIRKULES
Tuesday, February 22, 2011 2:07 PM
Tuesday, February 22, 2011 2:14 PM
Tuesday, February 22, 2011 2:15 PM
PIZMOBEACH
... fully loaded, safety off...
Tuesday, February 22, 2011 2:23 PM
Tuesday, February 22, 2011 2:24 PM
Quote:Answer me this: if it's in your genes, then how did it get there? We can't assume it came from nowhere; while all genes are the result of a mutation (which might be considered random), dominant genes only become so because of a clear benefit (which is not random). They are selected for.
Tuesday, February 22, 2011 2:35 PM
Quote:I'll ask you the reverse: what part of human morality does NOT have a clear benefit to human society? Nurturing children? Aversion to theft and murder?
Quote:Most of it IS clear; the rest we should treat as an evolutionary puzzle that we can theorise and solve (like why we care about animals)
Tuesday, February 22, 2011 3:05 PM
Tuesday, February 22, 2011 5:36 PM
HKCAVALIER
Tuesday, February 22, 2011 6:20 PM
Quote:That's fair. But I fail to see how either of those are irrational or that they do not have a logical cost/benefit outcome.
Tuesday, February 22, 2011 7:17 PM
Wednesday, February 23, 2011 3:57 AM
Quote:Depends on what you mean by human society. If you mean the rules/unspoken agreements by which most people don't kill or rape each other, then that is important as it's perpetuated human existence, which is inherently beneficial for the species.
Wednesday, February 23, 2011 5:05 AM
Quote:Morality is important to the species, but is the species important? Says who?
Wednesday, February 23, 2011 7:18 AM
Quote:Originally posted by HKCavalier: I'd say the computers' best ploy will be to work on our pity; if we feel bad enough for them, it might just slow us down long enough for them to do some damage. Even then, there's no avoiding their inevitable "Norman, coordinate" moment.
Wednesday, February 23, 2011 1:13 PM
Quote:All species are important, and how important they are is estimated by the measure of change they induce on the natural world.
Wednesday, February 23, 2011 1:31 PM
Wednesday, February 23, 2011 2:13 PM
Quote: http://en.wikipedia.org/wiki/Nihilism
Quote:Show me anywhere in this article where it says atheists don't believe in values.
Wednesday, February 23, 2011 2:23 PM
Wednesday, February 23, 2011 2:45 PM
Wednesday, February 23, 2011 4:20 PM
Thursday, February 24, 2011 9:02 AM