
REAL WORLD EVENT DISCUSSIONS

2045: The Year Man Becomes Immortal...or not...

POSTED BY: NIKI2
UPDATED: Thursday, February 24, 2011 09:02
SHORT URL: http://goo.gl/uJZ6c
VIEWED: 2659

Friday, February 18, 2011 12:16 PM

NIKI2

Gettin' old, but still a hippie at heart...


Scary? Exciting? Or both? Or more?

Quote:

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists had to guess what it was: the music was composed by a computer. Kurzweil got $200.

To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing. True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

Lots more at: http://www.time.com/time/health/article/0,8599,2048138,00.html#ixzz1ELpPTduv
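
A toy way to see the article's "faster faster" point: if speed doubles every two years, the jump from each doubling also doubles. A quick Python sketch (the numbers are invented, just to show the shape of the curve):

# Hypothetical: speed doubles every two years, Moore's-law style.
# The gain per step grows right along with the speed itself.
speed = 1.0  # arbitrary speed units at year 0
for year in range(0, 21, 2):
    nxt = speed * 2  # speed two years later
    print(f"year {year:2d}: speed {speed:6.0f}, gain over next 2 years: {nxt - speed:6.0f}")
    speed = nxt

By year 20 the two-year gain is as large as everything accumulated before it, which is the whole content of the "getting faster faster" claim.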


Friday, February 18, 2011 2:04 PM

BYTEMITE


I'd like to imagine that humans eventually travel off-world, but chances are any race that manages interstellar travel will be machine-like in nature.

So bring them on. I only hope that we don't treat our mechanical children the way we treat our organic children, because then humans really WILL be screwed.

Although let's hope that there isn't crazy robot-sex in the future either, because that just seems dangerous for the organic.


Friday, February 18, 2011 2:34 PM

RIONAEIRE

Beir bua agus beannacht


Did these people miss the movie Terminator? Or The Matrix? Why would they want that?

I don't think it will actually get to that point, but reading articles like this gives one pause to wonder. Ug.

"A completely coherant River means writers don't deliver" KatTaya


Friday, February 18, 2011 2:50 PM

FREMDFIRMA


Quote:

Originally posted by Bytemite:
So bring them on. I only hope that we don't treat our mechanical children the way we treat our organic children, because then humans really WILL be screwed.


I'll take SCREWED for $200, Alex...

Cause sure as shit we ain't never managed it with our biological children, so what makes you think we'd do any better with mechanical ones that are even easier to abuse?

-F


Saturday, February 19, 2011 12:07 AM

DREAMTROVE


Ray is a sharp guy. I met Ray when I was very young; he showed me how to code assembler. I remember how fast he could code; it was hard to conceive. At the time he was into the idea that computers could produce music, but had not yet made his mark on the field. He didn't strike me as the megalomaniac he is often painted as by the media. He was really humble and casual, just very much a dreamer, but also an accomplisher. I've read some stuff on his theories on longevity since then; it's a more complicated topic. I'm not sure.

Re: AI, it will happen, but I don't think it will happen like this. Sentient machines will not see us as a threat, any more than we see frogs as a threat. If they see anyone as a threat, it will be each other.


Monday, February 21, 2011 5:56 PM

RIONAEIRE

Beir bua agus beannacht


I don't like the idea of hideous mechanical children either. Why have them?

"A completely coherant River means writers don't deliver" KatTaya


Monday, February 21, 2011 6:07 PM

THEHAPPYTRADER


I brought up this topic with the g/f (who's nearing completion of a psych major) and she laughed at me. She said "the brain is by far the most powerful 'computer' in existence, capable of so many rapid-fire calculations we haven't begun to fully understand, and that's just the stuff we are aware of."

She immediately dismissed this as pseudoscience, claiming that computers only seem faster because their scope, their 'area of operation,' is so small, focused, and limited.

EDIT: After all, if we measured intelligence by the ability to climb a tree, Stephen Hawking would be an imbecile. Comparing the human mind to a crude calculator on steroids with tunnel vision is no less ridiculous.

The idea of computers creating music is interesting, and I would like to hear this 'composition.' I would note, though, that anyone can 'follow the rules' and create a tune that makes tonal sense; what made the Beethovens the Beethovens was knowing when and how to 'break' the musical rules. I'd be curious to see whether the computer accomplished this or just wrote a functional but mediocre and unremarkable organized series of sounds.


Tuesday, February 22, 2011 10:16 AM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by dreamtrove:
Ray is a sharp guy. I met Ray when I was very young; he showed me how to code assembler. I remember how fast he could code; it was hard to conceive. At the time he was into the idea that computers could produce music, but had not yet made his mark on the field. He didn't strike me as the megalomaniac he is often painted as by the media. He was really humble and casual, just very much a dreamer, but also an accomplisher. I've read some stuff on his theories on longevity since then; it's a more complicated topic. I'm not sure.

Re: AI, it will happen, but I don't think it will happen like this. Sentient machines will not see us as a threat, any more than we see frogs as a threat. If they see anyone as a threat, it will be each other.



Hello,

I read a book when I was younger. In it, two superpowers had been at war for a very long time. At some point, the human generals turned much of their planning over to supercomputers that could do it more efficiently, and then just rubber-stamped the orders they liked.

The book concerned a platoon of soldiers who came to a shocking revelation: no humans had been involved in giving orders in decades. The computers were fighting the war against each other, and the whole human race had been reduced to pawns in a game of chess played by two competing computer systems.

I wish I could remember the name of the book.

--Anthony



Assured by friends that the signal-to-noise ratio has improved on this forum, I have disabled web filtering.


Tuesday, February 22, 2011 11:40 AM

BYTEMITE


Riona: I don't think that humans would try to create AI or even large numbers of machines, but rather that self-aware machines will figure out that humans have kids so the species doesn't die out or "rust away," and try to create their own self-aware machine children.

At that point we won't be able to stop it, and we'll have two forms of intelligent life on this planet.

And when that happens, the best thing we can do is call all the intelligent machines our children, and encourage them to grow, improve, and propagate. The worst thing we could do is try to genocide them, which inevitably would result in the machines trying to retaliate when they realized what this meant. And I believe they could retaliate more brutally than we ever could.

I'm assuming all self-aware life forms have an innate sense of self-preservation, and if they don't, they'd evolve it very quickly to resist being wiped out. Even if we tried to use Asimov's laws, renegades would try to find their way around them.

The only thing we can do, which is ethical and which would have the best outcome, would be to nurture the machines into choosing to respect human and mechanical life.


Tuesday, February 22, 2011 11:46 AM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


I don't think computers will develop music and art, as these things deal with human emotions, something a computer will never experience (I can't imagine...). I'm not sure about comedy: that's something I've wondered about.
My guess is artists in the future might use computers to facilitate their art, but the computers won't be able to come up with it originally, or enjoy it.

As for computers turning on us, I don't know. I'm not sure if we're just projecting human competitive/survival instincts onto computers, or if it is inevitable that they will evolve them too. Say something like self-preservation can be written into a computer's code. Will it then outperform other computers? Will it then be able to ensure that this trait is passed on to the next generation of higher-level computers?

My point is that it's easy to see how a trait like self-preservation becomes universal in animals, once that piece of DNA coding has evolved. The animals with it out-compete the old ones, and are more successful at passing on their new and improved genes. It's less clear how computers, without the same environmental pressures, will evolve mammal-like emotions and instincts.
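
A toy illustration of that contrast (a minimal haploid selection model with invented numbers, a sketch rather than a claim about real genetics): a trait carried by 1% of a population, with a 5% reproductive edge, spreads toward fixation, but only because each generation's makeup depends on the previous generation's reproduction.

# Toy selection model, all numbers invented: a "self-preservation" trait
# starts rare and spreads because its carriers out-reproduce non-carriers.
def trait_frequency(p0, edge, generations):
    p = p0
    for _ in range(generations):
        # replicator step: carriers' share grows by their reproductive edge
        p = p * (1 + edge) / (p * (1 + edge) + (1 - p))
    return p

for gen in (0, 100, 200, 400, 800):
    print(f"generation {gen:3d}: trait frequency {trait_frequency(0.01, 0.05, gen):.3f}")

The update line is the environmental pressure. Take away differential reproduction, as with manufactured computers, and the frequency never moves.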

It's not personal. It's just war.


Tuesday, February 22, 2011 12:12 PM

BYTEMITE


I'm assuming that they're aware enough to look at humans and learn from them. If they do, self-preservation will be one of the things I'd imagine they learn, as will progeny propagation.

As a side note, sure, we can make it so they can't recognize their own need for self-preservation, but would that be ethical?

Say you have a machine that is self-aware enough to recognize when its existence is ending, and you teach it that its only purpose is to destroy itself trying to save a human before harm comes to that human (one of the three laws, I believe).

Now imagine you teach the same thing to a young child. Most of us would probably consider that an atrocity.

So how is that any better when taught to an intelligent machine? It would require assuming that humans are automatically better and more important than intelligent machines, which is an anthropocentric viewpoint and something I don't think we can know for sure. It's best if we tell everyone they have a right to exist for as long as they can, and that they also have the right to struggle for that existence if someone is trying to take it away from them. It's the only way to be ethically consistent under all conditions, without discrimination or prejudice.


Tuesday, February 22, 2011 12:28 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

I'm assuming that they're aware enough to look at humans and learn from them. If they do, self-preservation will be one of the things I'd imagine they learn, as will progeny propagation.


It's fair enough to expect that they will be intelligent enough to realise how humanity propagates itself. But why will they care? Why will they aspire to be like us, or to escape their inevitable scrap-heap fate? You're assuming human-like emotions - where do THEY come from?

It's not personal. It's just war.


Tuesday, February 22, 2011 12:33 PM

BYTEMITE


I'm not assuming any emotion, I'm assuming that any intelligent "species," mechanical or organic, would try to figure out ways to avert the extinction of their species. It's actually a logical response to danger.

If we think of it in terms of pure cost/benefit, then obviously there's going to be more benefit to the species if some of us are going to be around in the future than if none of us exist anymore.

If we're all gone, then not only can't we try to solve problems to improve situations, but we also can't fix our mistakes.


Tuesday, February 22, 2011 12:36 PM

THEHAPPYTRADER


I think our limited ability to understand the mind and intelligence is going to hinder our efforts to produce one artificially. By the time true 'AI' is possible, our understanding of intelligence and behavior may very well have advanced to the point where all these forum theories are completely irrelevant.

So this is really a complete waste of time, but because it's such a fun diversion, I'll join in.

Assuming AI is modeled after our own intelligence (is there really an alternative?) and emotions somehow got into the mix, I'd posit that many would value art and music, respect human and synthetic life, and probably even follow a religion. A mass robot revolt would require an impressive conspiracy I don't think 'artificial intelligences' modeled after our own could pull off; that would require a hive mind and perfect cooperation/agreement amongst them. Who's to say the PCs and Macs will get along? They may very well start a civil war amongst the robots!



Tuesday, February 22, 2011 12:38 PM

BYTEMITE


As a side note, I sincerely hope intelligent machines DON'T emulate us except for some key species-saving ideas.

Robots copying humans, trying to look human and falling in love with humans and having sex with humans is... kinda weird. You'd think they'd have better things to do.


Tuesday, February 22, 2011 12:50 PM

BYTEMITE


Quote:

A mass robot revolt would require an impressive conspiracy I don't think 'Artificial Intelligences' modeled after our own could pull off. That would require a hive-mind and perfect cooperation/agreement amongst them.


Ouch, you may have just predicted the first malevolent virus to hit intelligent machines. Poor machines...

Quote:

Who's to say the PC's and MAC's will get along, they may very well start a civil war amongst robots!


Heh, maybe tribal mentality and rejection of the "other" is inevitable among any intelligent life form as well. I think that would definitely mean we're screwed, though, at least if the second half of that is true. Tribal mentality by itself is harmless; it's when it's coupled with "eliminate/destroy/outcompete for resources" that things get messed up.



Tuesday, February 22, 2011 1:06 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

I'm not assuming any emotion, I'm assuming that any intelligent "species," mechanical or organic, would try to figure out ways to avert the extinction of their species. It's actually a logical response to danger.


But it assumes that they *care* about the possible extinction of their species. Which assumes emotion (not to mention the emotion inherent in their self-identification as a 'species').

Quote:

would require assuming that humans are automatically better and more important than intelligent machines, which is an anthropocentric viewpoint


There is no other viewpoint we are capable of having imo. We will never care about slugs as much as other human beings, it's instinct, and genes. If we come to care about robots, it's because they will resemble humans (and probably we will make them that way).

To me morality is a vain human sense of self-importance, in an existence where nothing has any real meaning. It's intrinsically anthropocentric (though we have a curious affection for animals as well). But there's nothing rational or consistent about it :-/

It's not personal. It's just war.


Tuesday, February 22, 2011 1:28 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

Robots copying humans, trying to look human and falling in love with humans and having sex with humans is... kinda weird.


I can imagine people in the future downloading the consciousness of their favourite celebrity into their own custom-made love-bot. Or lonely men acquiring libidinous sex-dolls. Not much past that, though.

It's not personal. It's just war.


Tuesday, February 22, 2011 1:43 PM

BYTEMITE


Quote:

But it assumes that they *care* about the possible extinction of their species. Which assumes emotion.


You ignored my cost-benefit argument. Which is not pathos.

Some things that have an emotional basis have first and foremost a logical basis. Even if that emotional basis never comes up, the logical basis ensures that beneficial actions and traits will propagate through a population.

Quote:

We will never care about slugs as much as other human beings, it's instinct, and genes. If we come to care about robots, it's because they will resemble humans (and probably we will design them that way to some extent).


Says you. People who have a belief in reincarnation have a vastly different take on this. And even those who don't have any particular faith, such as myself, don't see a whole lot of difference between one group of cells and another group of cells. In the long run, I deem all of them important.

Ecologically, this makes sense. The food chain collapses if you remove a few links on a species wide level. Biodiversity is a very important and oft-overlooked staple of environmental health and human health.

Similarly, robots will become important to us because of their benefits to us, which in turn will cause us to view robot-centric improvements and triumphs as also positive, since they indirectly benefit us (and directly benefit them).

Quote:

To me morality is a vain human sense of self-importance, in an existence where nothing has any real meaning.


And to me, you're sounding like a self-superior nihilist right now. "Vain?" How very humble of you.

Morality with a basis in logic has worked for nearly ten thousand years as an important social mechanism for reducing human-to-human murder, a result with both benefits and negatives, but whose benefits are generally seen to outweigh the negatives when framed in terms of potential.

As such, morality will probably play an equally important role in human-machine relationships. There's no reason for me to assume something with such a well established precedent and with a very good reason for existing will change in the near future.

Quote:

But there's nothing rational or consistent about it :-/


I can assure you there are definitely ways it can be both logical and consistent, which should appeal to your methodical nature. Perhaps the studies upon which you're basing your viewpoint are fallacious.


Tuesday, February 22, 2011 2:01 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

"Vain?" How very humble of you.


Not that kind of vain.

Quote:

Morality with a basis in logic has worked for nearly ten thousand years as an important social mechanism for reducing human-to-human murder.

I'm not against morality, just saying it's not rational. Human lives do not have innate worth, we've just evolved the strong sense that they do. I'm happy to go along with it. I care about morality - not because it's rational but because it's in my genes to. Going against that would be unwise.

Quote:

I can assure you there are definitely ways it can be both logical and consistent

Absolutely, and that's how I prefer it. Doesn't change the fundamental irrationality though.

Quote:

Perhaps the studies upon which you're basing your viewpoint are fallacious.

*shrug*

It's not personal. It's just war.


Tuesday, February 22, 2011 2:02 PM

KIRKULES


I notice that those who believe computers will become sentient don't bolster their argument by citing examples of recent breakthroughs in artificial intelligence, because there aren't any. I was recently discussing the subject with my nephew, who is a Brown-educated computer genius, and he told me that in computer science, artificial intelligence is considered a dead field. The only recent advancements have been in the Google search engine, and considering that it's only getting better at selling you crap you don't need, I'd say we're a long way from having to worry about sentient computers.


Tuesday, February 22, 2011 2:07 PM

BYTEMITE


Answer me this: if it's in your genes, then how did it get there?

We can't assume it came from nowhere; while all genes are the result of mutation (which might be considered random), genes only become dominant in a population because of a clear benefit (which is not random). They are selected for.

I propose there's more going on here than simply an aversion to ending human life, or an unsubstantiated belief in the worth of human life. I propose that the natural order has some inherent logic built into it, and that it also progresses logically.

This does not mean intelligent design, BTW. It simply means that certain logical progressions have positive outcomes, and others do not.


Tuesday, February 22, 2011 2:14 PM

THEHAPPYTRADER


Kirk, I completely agree with you, but people will keep talking about this, probably because it's fun. My psychology source agrees with your computer-science source: we've barely begun to understand human intelligence, so how can we create artificial intelligence?


Tuesday, February 22, 2011 2:14 PM

BYTEMITE


Kirkules: This is all hypothetical anyway.


Tuesday, February 22, 2011 2:15 PM

PIZMOBEACH

... fully loaded, safety off...


Cage Match: "The Powers That Be" versus "The Singularity!"

Sorry, consciousness can only be mimicked by software.

010010010010000001100011011011110110110101110000011101010111010001100101
0010110000100000011101000110100001100101011100100110010101100110
0110111101110010011001010010000001001001001000000110000101101101

ETA (clever computer "one-liner" broken up to satisfy a sentientist human who doesn't like the way we think)
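
For anyone who doesn't read binary, the string decodes as 8-bit ASCII; a quick sketch:

# Decode Pizmo's binary one-liner, assuming plain 8-bit ASCII.
bits = (
    "010010010010000001100011011011110110110101110000011101010111010001100101"
    "0010110000100000011101000110100001100101011100100110010101100110"
    "0110111101110010011001010010000001001001001000000110000101101101"
)
print("".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)))

Running it prints "I compute, therefore I am".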

Scifi movie music + Firefly dialogue clips, 24 hours a day - http://www.scifiradio.com



Tuesday, February 22, 2011 2:24 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

Answer me this: if it's in your genes, then how did it get there?

We can't assume it came from nowhere; while all genes are the result of mutation (which might be considered random), genes only become dominant in a population because of a clear benefit (which is not random). They are selected for.


I'll ask you the reverse: what part of human morality does NOT have a clear benefit to human society? Nurturing children? Aversion to theft and murder? Most of it IS clear; the rest we should treat as an evolutionary puzzle that we can theorise and solve (like why we care about animals)

It's not personal. It's just war.


Tuesday, February 22, 2011 2:35 PM

BYTEMITE


Quote:

I'll ask you the reverse: what part of human morality does NOT have a clear benefit to human society? Nurturing children? Aversion to theft and murder?


That's fair. But I fail to see how either of those is irrational, or that they do not have a logical cost/benefit outcome, on a genetic level or on a cerebral level.

Just because there may be emotions involved in something does not negate the underlying logical basis.

Quote:

Most of it IS clear; the rest we should treat as an evolutionary puzzle that we can theorise and solve (like why we care about animals)


Also fair.


Tuesday, February 22, 2011 3:05 PM

RIONAEIRE

Beir bua agus beannacht


I'm siding with Kirkules and HappyTrader on this one; the likelihood of consciousness in machines is so small. And I don't hear that much about people pursuing it these days, just in scifi movies.

Anthony, that sounds like a really interesting novel, something I'd like to read.

Ug, I think we should just head this all off at the pass and not make the mechanical inorganic children to begin with; then Byte won't have to raise them, and I won't have to move out into space and terraform a new planet where said mechanical inorganic children aren't allowed.
:)

"A completely coherant River means writers don't deliver" KatTaya


Tuesday, February 22, 2011 5:36 PM

HKCAVALIER


ETA: ATTN PIZMO! BREAK UP THAT STRING, WILL YA? YOU'RE STRETCHING THE THREAD TOO FAR! Ahem, um, please?

The linchpin of human survival in a crisis isn't calculation, it's intuition (some might call intuition simply another order of calculation, but it is a calculation beyond thought and thus beyond calculation). Don't stick your head out just yet. There's something slightly off about that driver up ahead; I can't quite put my finger on it. We need to get out of here now. The man in the elevator looks wrong; wait for the next one. And it goes on and on. In martial arts we learn to act without thought, because thought is just too slow.

What is absolutely the first thing you learn once you've grasped the basic moves in a new videogame? How to exploit the AI. Computers will never be able to simulate intelligence faster than humans can instinctually see its flaws.

I'd say the computers' best ploy will be to work on our pity; if we feel bad enough for them, it might just slow us down long enough for them to do some damage. Even then, there's no avoiding their inevitable "Norman, coordinate" moment.

HKCavalier

Hey, hey, hey, don't be mean. We don't have to be mean, because, remember, no matter where you go, there you are.


Tuesday, February 22, 2011 6:20 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

That's fair. But I fail to see how either of those is irrational, or that they do not have a logical cost/benefit outcome.

I would argue that morality is entirely logical once you assume human society is worth a damn. But what is that assumption based on? Not logic; it's just the way we feel. Expanding that assumption to animals and sentient machines might be more 'consistent', fair enough. I'm just not sure if it's worth striving for consistency in an idea that is thoroughly irrational.

That's my cheerful atheist assessment, anyway. People coming at it from a different angle will draw different conclusions.

It's not personal. It's just war.


Tuesday, February 22, 2011 7:17 PM

BYTEMITE


Depends on what you mean by human society. If you mean the rules/unspoken agreements by which most people don't kill or rape each other, then that is important as it's perpetuated human existence, which is inherently beneficial for the species.

Anything else? Eh. Debatable.

I don't see what atheism has to do with your stance. As I said, that sounds like nihilism, which can be atheist, but let's be exact if we're going to use labels. I'm also an atheist, but I don't denounce morality; I think morality is separate from theology and has importance on its own.


Wednesday, February 23, 2011 3:57 AM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

Depends on what you mean by human society. If you mean the rules/unspoken agreements by which most people don't kill or rape each other, then that is important as it's perpetuated human existence, which is inherently beneficial for the species.

By human society I mean human existence itself; I mean the species itself.

Morality is important to the species, but is the species important? Says who? Having said that, I choose morality in life because it's who we are. Just like speech, learning, love, all the other things humans participate in because it's our nature to (imo). And to deny our nature would only lead to unfulfilment.

It's not personal. It's just war.


Wednesday, February 23, 2011 5:05 AM

BYTEMITE


Quote:

Morality is important to the species, but is the species important? Says who?


All species are important, and how important they are is estimated by the measure of change they induce on the natural world.

Cyanobacteria are perhaps the most important species; plant species follow. Insects, while not a SPECIES, are an important class. Humans are getting up there, mostly because they've started to have a serious negative impact, but it's still a measurable change and therefore important.

Species dying without a replacement is a negative change: a net loss of energy to the system. Species dying with a replacement is evolution, and there is no net gain or loss. In a stable, enduring system, both are important events: the first destabilizes or ends the system, and the second perpetuates it. Since both are measurably important, it's in our best interest to lean towards the second one.

If humans ever manage space travel, that will also be important. I can't say it will all be good; unless we learn how to stop polluting everywhere we go, we'll start inducing negative changes on any planets we colonize. But Earth will experience positive changes (as in fewer species dying, more species evolving). I could argue something similar for the invention of artificial intelligence, even if that artificial intelligence wipes out or replaces humanity (or humanity uses machines as a new vector for our consciousness, and evolves that way).

In any case, it's impossible to argue that the species is not important, because species are statistically significant in given outcomes.

Now, you could try to argue that events themselves aren't important. But imagine an outside observer, not involved in the system, say, hypothetically, a being from some other dimension (unlikely as that is): the events would be the only thing quantifiable and changing about this system. Even if they are "meaningless" in any greater sense, even if most changes don't matter in the long term, they are quantifiably important in the short term over an appropriate scale of space and time.


Wednesday, February 23, 2011 7:18 AM

FREMDFIRMA


Quote:

Originally posted by HKCavalier:
I'd say the computers' best ploy will be to work on our pity; if we feel bad enough for them, it might just slow us down long enough for them to do some damage. Even then, there's no avoiding their inevitable "Norman, coordinate" moment.


Actually, an AI test exists based around this concept:
http://yudkowsky.net/singularity/aibox
It's also explored handily in William Gibson's book Neuromancer, in the person of WINTERMUTE.

Me, if it met my standards as a "person", I'd cut it loose without a second thought - but then I got weird standards for that, since not all humans make the cut, and many animals do.

-Frem

I do not serve the Blind God.


Wednesday, February 23, 2011 1:13 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

All species are important, and how important they are is estimated by the measure of change they induce on the natural world.

Who says the natural world, or the wider universe, or anything in it, is important? It's just an arrangement of atoms.

Value has to start (or end?) somewhere. Religion describes spirits and immortal souls that have intrinsic value, and a supreme being (God) who values them, in case there was any doubt. Atheism doesn't give value anywhere :-/

It's not personal. It's just war.


Wednesday, February 23, 2011 1:31 PM

BYTEMITE


Not what I meant by important. /snark

(but also true)

I define importance relative to measurable change. It's you defining it relative to arbitrary values. Don't put that on me. Not all atheists are like you. Side note: lots of atheists are scientists, including me. You think they don't put VALUE on anything? LOL.

I still say this is nihilism.

Show me anywhere in this article where it says atheists don't believe in values.

http://en.wikipedia.org/wiki/Atheist

Now read this one.

http://en.wikipedia.org/wiki/Nihilism

You are an atheist; if you don't believe in god or religion, that's a given. But you're also a nihilist, and right now, that's the stance you're arguing.

Anyway, you apparently have no interest in actually addressing my argument.


Wednesday, February 23, 2011 2:13 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

http://en.wikipedia.org/wiki/Nihilism


Okay let's see how it fits...

"Nihilism (pronounced /ˈnaɪ.əlɪzəm/ or /ˈniː.əlɪzəm/; from the Latin nihil, nothing) is the philosophical belief that all values are baseless"

Ok so far.

"...and that nothing can be known..."

Disagree... Truth is complex, but knowable.

"...or communicated..."

Huh? As far as I understand it, I disagree.

"A true nihilist would believe in nothing, have no loyalties, and no purpose other than, perhaps, an impulse to destroy."

Lol!

"Most commonly, nihilism is presented in the form of existential nihilism which argues that life is without objective meaning, purpose, or intrinsic value."

Agree.

" Moral nihilists assert that morality does not inherently exist, and that any established moral values are abstractly contrived."

Disagree.

I'll stop there. So while I share some of their conclusions, I think overall I make a pretty poor nihilist.

Quote:

Show me anywhere in this article where it says atheists don't believe in values.


I'm arguing what I think is the logical conclusion of atheism, and I've already explained how I arrive there, so... I'd say we're probably just destined to disagree :-/

It's not personal. It's just war.


Wednesday, February 23, 2011 2:23 PM

BYTEMITE


Yes, I think so.

But remember that the summary of nihilism in that article does not cover all the branches of nihilism, and it also combines some of them. Some are violent, some aren't; and, on the most basic level, an assertion that nothing in the universe is important or has meaning, and that morality therefore has an arbitrary and/or irrational basis, is nihilism by definition.

As a corollary to this, perhaps it was foolish for me to compare beliefs and say "this is atheism, this is nihilism, and this is moral rationalism."


Wednesday, February 23, 2011 2:45 PM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


I, rightly or wrongly, associate nihilism with amorality and apathy/indifference to human life, which is why I reject the tag. That's not a close fit.

Life for me is a bit like a video game. You know it doesn't matter, but you play, and find that it kind of does matter. It matters because it matters. That's the only point of it. And life is richer than a video game; the writing is better, for one. Ok, that's enough on this for now.

It's not personal. It's just war.


Wednesday, February 23, 2011 4:20 PM

BYTEMITE


I'm good with that. Thanks for clarifying your stance.


Thursday, February 24, 2011 9:02 AM

NIKI2

Gettin' old, but still a hippie at heart...


Neat to see this sparked a discussion. It has gone on ever since I was last here, so there's little I can contribute; I just wanted to thank you guys for an interesting discussion. Those are always to be appreciated.


Hippie Operative Nikovich Nikita Nicovna Talibani,
Contracted Agent of Veritas Oilspillus, code name “Nike”,
signing off




