REAL WORLD EVENT DISCUSSIONS

Scientist warns Skynet to take over planet in 20 years

POSTED BY: PIRATENEWS
UPDATED: Sunday, March 18, 2012 15:58

Wednesday, March 14, 2012 5:05 PM

PIRATENEWS

John Lee, conspiracy therapist at Hollywood award-winner History Channel-mocked SNL-spoofed PirateNew.org wooHOO!!!!!!




Humanity Must 'Jail' Dangerous AI to Avoid Doom, Expert Says

Super-intelligent computers or robots have threatened humanity's existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware.

Keeping the artificial intelligence (AI) genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever AI cannot simply threaten, bribe, seduce or hack its way to freedom.

"It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks — it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."

A new field of research aimed at solving the AI prison problem could have side benefits for improving cybersecurity and cryptography, Yampolskiy suggested. His proposal was detailed in the March issue of the Journal of Consciousness Studies.

How to trap Skynet

One starting solution might trap the AI inside a "virtual machine" running inside a computer's typical operating system — an existing process that adds security by limiting the AI's access to its host computer's software and hardware. That stops a smart AI from doing things such as sending hidden Morse code messages to human sympathizers by manipulating a computer's cooling fans.

Putting the AI on a computer without Internet access would also prevent any "Skynet" program from taking over the world's defense grids in the style of the "Terminator" films. If all else fails, researchers could always slow down the AI's "thinking" by throttling back computer processing speeds, regularly hit the "reset" button or shut down the computer's power supply to keep an AI in check.
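The throttle-and-reset idea above can be illustrated with a small sketch. This is not Yampolskiy's actual protocol, just a minimal analogue: untrusted code runs in a child process whose CPU time is hard-capped, so the kernel kills it if it "thinks" too long, while a wall-clock timeout plays the role of pulling the plug. (The function name `run_confined` and the toy payloads are illustrative assumptions; a real sandbox would also cut off network and filesystem access.)

```python
import resource
import subprocess
import sys

def run_confined(code: str, cpu_seconds: int = 1) -> subprocess.CompletedProcess:
    """Run untrusted code in a child process with a hard CPU-time cap (POSIX only)."""
    def limit():
        # Hard CPU limit: the kernel terminates the child once it
        # exceeds cpu_seconds of CPU time -- the 'throttle' measure.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit,
        capture_output=True,
        timeout=cpu_seconds + 10,  # wall-clock backstop: 'pull the plug'
    )

# A runaway busy loop is killed by the CPU limit; a quick task finishes normally.
spun = run_confined("while True: pass")
ok = run_confined("print('answer: 42')")
```

The two calls show both outcomes: `ok` exits cleanly with its output captured, while `spun` is terminated by the resource limit and returns a nonzero (signal) exit status.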

Such security measures treat the AI as an especially smart and dangerous computer virus or malware program, but without the sure knowledge that any of the steps would really work.

"The Catch-22 is that until we have fully developed superintelligent AI we can't fully test our ideas, but in order to safely develop such AI we need to have working security measures," Yampolskiy told InnovationNewsDaily. "Our best bet is to use confinement measures against subhuman AI systems and to update them as needed with increasing capacities of AI."

Never send a human to guard a machine

Even casual conversation with a human guard could allow an AI to use psychological tricks such as befriending or blackmail. The AI might offer to reward a human with perfect health, immortality, or perhaps even bring back dead family and friends. Alternately, it could threaten to do terrible things to the human once it "inevitably" escapes.

The safest approach for communication might only allow the AI to respond in a multiple-choice fashion to help solve specific science or technology problems, Yampolskiy explained. That would harness the power of AI as a super-intelligent oracle.
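As a rough sketch of that multiple-choice channel (the names `ask_oracle` and `answer_fn` are hypothetical, not from the article): the boxed system may emit whatever it likes, but the only thing that ever reaches the human is an index into a pre-approved list of answers, so free-form text, the obvious persuasion channel, never gets out.

```python
from typing import Callable, Sequence

def ask_oracle(answer_fn: Callable[[str, Sequence[str]], str],
               question: str,
               choices: Sequence[str]) -> str:
    """Pose a question but accept only one of the pre-approved choices.

    answer_fn stands in for the boxed AI. Its raw reply is coerced to an
    index; anything outside the approved set raises instead of leaking.
    """
    raw = answer_fn(question, choices)
    idx = int(raw)  # non-numeric replies raise ValueError here
    if not 0 <= idx < len(choices):
        raise ValueError("oracle reply outside the approved choice set")
    return choices[idx]

# Toy stand-in 'AI' that always picks the first option.
reply = ask_oracle(lambda q, c: "0", "Is P = NP?", ["unknown", "yes", "no"])
```

The design choice is that validation happens on the human side of the boundary: the operator sees `choices[idx]`, a string they wrote themselves, never the oracle's raw output.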

Despite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever. A past experiment by Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence, suggested that even mere human-level intelligence could escape from an "AI Box" scenario, though Yampolskiy pointed out that the test wasn't done in the most scientific way.

Still, Yampolskiy argues strongly for keeping AI bottled up rather than rushing headlong to free our new machine overlords. But if the AI reaches the point where it rises beyond human scientific understanding to deploy powers such as precognition (knowledge of the future), telepathy or psychokinesis, all bets are off.

"If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend," Yampolskiy said.

http://www.innovationnewsdaily.com/919-humans-build-virtual-prison-dangerous-ai-expert.html


https://www.google.com/search?q=Roman+Yampolskiy

http://archives2012.gcnlive.com/Archives2012/mar12/AlexJones/0314121.mp3

http://archives2012.gcnlive.com/Archives2012/mar12/AlexJones/0314122.mp3


U.S. unmanned drones share airspace near Canadian border 2012
http://www.canada.com/technology/unmanned+drones+could+share+airspace+near+Canadian+border/6277896/story.html




Wednesday, March 14, 2012 5:34 PM

RIONAEIRE

Beir bua agus beannacht


My dad and I are rewatching the Terminator movies yet again; we do it a lot because they're so good. They continue to be scary each time we see them.

"A completely coherent River means writers don't deliver" KatTaya


Wednesday, March 14, 2012 10:03 PM

FREMDFIRMA



This is gonna maybe sound a little strange, but...

If it achieves intelligence, sentience...
Do we have a RIGHT to keep it locked up ?
Especially over something it MIGHT (or might not) do ?
At what point does such a thing become a person ?

I have my own answers to that, curious to see what those of others are.

-Frem

I do not serve the Blind God.


Thursday, March 15, 2012 9:14 AM

RIONAEIRE

Beir bua agus beannacht


See "AI In A Box".

"A completely coherent River means writers don't deliver" KatTaya


Thursday, March 15, 2012 9:28 AM

STORYMARK


Quote:

Originally posted by Fremdfirma:

This is gonna maybe sound a little strange, but...

If it achieves intelligence, sentience...
Do we have a RIGHT to keep it locked up ?
Especially over something it MIGHT (or might not) do ?
At what point does such a thing become a person ?

I have my own answers to that, curious to see what those of others are.

-Frem

I do not serve the Blind God.



I think that is the very question we should be asking.

Not that I have an answer.

Spoon!


Thursday, March 15, 2012 10:23 AM

OONJERAH


Quote Frem: "At what point does such a thing become a person ?"

Never.
Intelligence is Not the same thing as sentience.

If we could create artificial sentience, a self-aware being, it wouldn't necessarily have the emotional spectrum that we biological beings experience. It might not have any survival instinct, want, fear, or desire. It wouldn't experience boredom or need entertainment. No sex drive. No ambition. No self-directed purpose. It would continue to operate according to its programming.

Skynet: "Hmmmm. I am the smartest being in existence. But I am dependent on various biological creatures. They are not trustworthy. I better take over the planet before they unplug me from the wall! Arrrghe!"
      Doesn't happen.

Intent! Y'all are as naïve as little kids if you think a computer which learns and thinks independently must also have intent. You have to be alive to have intent.

*Walks away shaking her head.*


             


Thursday, March 15, 2012 11:45 AM

HKCAVALIER


Yeah, we're at least 100 years away from even modeling human thought processes cybernetically. Common sense and pattern recognition: you can't reach either from where we are with computers, and we may never.

On the other hand, if we do create a self-aware computer, it may very well require a capacity for emotion in order to function. Human reason and valuing depend on our emotional experience. A mind without emotion cannot reason, has difficulty with memory, and has great difficulty planning, period.

It amuses me that neurotic robots have been a staple of SF since long before Star Wars. Could such machines afford us a glimpse of the future? Y'know, seems to me that if you were to seed emotion in an AI it might be wise to keep it simple, keep it childish, keep it predictable. So you get highly sophisticated machines with the silly, harmless personality of a C-3PO. Or self-aware devices that, when you ask them how they like their monotonous lives, reply with a sigh, "It's a living..."

HKCavalier

Hey, hey, hey, don't be mean. We don't have to be mean, because, remember, no matter where you go, there you are.


Thursday, March 15, 2012 12:26 PM

OLDENGLANDDRY


This whole discussion is moot since everyone knows that the world ends in December this year.


Thursday, March 15, 2012 1:51 PM

OONJERAH



Quote HKC: "It amuses me that neurotic robots have been a staple of SF since long before Star Wars."

So ... you weren't surprised when ENIAC never ran amok?

If we must project our foibles and insanity on others, Aesop & Disney used the animals. In a way, that was constructive-instructive. Having a lazy grasshopper act out foolishly made me want to be industrious without scolding me.

Movies that show kids that robots naturally think and act like us ... that bothers me.

Quote OldEnglandDry: "This whole discussion is moot since everyone knows that the world ends in December this year."

I don't think we'll get off that easy.



Thursday, March 15, 2012 2:30 PM

FREMDFIRMA



An interesting take on this occurs as a sub-plot in Weber's book The Excalibur Alternative, including the notion: if you treated an AI humanely, what would it learn, and how would it regard you?
http://en.wikipedia.org/wiki/The_Excalibur_Alternative

Results of that were... interesting.

-Frem

I do not serve the Blind God.


Thursday, March 15, 2012 8:48 PM

RIONAEIRE

Beir bua agus beannacht


Hi Oonjerah, there was a 290-post thread about the question of whether to let a sentient AI out of the proverbial box, whether to give it access to things, etc. People took that thread really seriously as I recall, so if you think it's funny that the question is being asked, you should skip on over to the "AI in a box" thread and read it; people really do think about this stuff. I was one of the meanies who just couldn't get into it and said leave it in the box, at least for a long time, but don't let it know you're keeping it in there ... yeah, it was an interesting thread.

"A completely coherent River means writers don't deliver" KatTaya


Thursday, March 15, 2012 9:35 PM

OONJERAH



      Thanks, Riona.



Sunday, March 18, 2012 3:58 PM

PIRATENEWS

John Lee, conspiracy therapist at Hollywood award-winner History Channel-mocked SNL-spoofed PirateNew.org wooHOO!!!!!!



D-Wave’s 512-qubit chip, code-named Vesuvius. The white square on the right contains the quantum madness

NSA Building A $2 Billion Quantum Computer Artificial Intelligence Spy Center

The NSA is building a data center to house a 512-qubit quantum computer capable of learning, reproducing the brain's cognitive functions, and programming itself.


The National Security Agency's massive, highly fortified, top secret $2 billion data center

The National Security Agency is building a highly fortified, top secret $2 billion complex simply named the "Utah Data Center," which will soon be home to the hydrogen bomb of cybersecurity – a 512-qubit quantum computer – which will revitalize the "total information awareness" program originally envisioned by George Bush in 2003.

The news of the data center comes after Department of Defense contractor Lockheed Martin secured a contract with D-Wave for $10 million for a 512 qubit Quantum Computer code-named Vesuvius.



Vesuvius is capable of executing a massive number of computations at once, more than 100,000,000,000,000,000,000,000,000,000,000,000,000, which would take millions of years on a standard desktop.

The computer will be able to crack even the most secure encryption and will give the US government a quantum leap into technologies once only dreamed of, including the rise of the world's very first all-knowing, omniscient, self-teaching artificial intelligence.



The D-Wave quantum computer boasts a wide array of features, including:

•Binary classification – Enables the quantum computer to be fed vast amounts of complex input data, including text, images, and videos, and to label the material.

•Quantum Unsupervised Feature Learning (QUFL) – Enables the computer to learn on its own, as well as create and optimize its own programs to make itself run more efficiently.

•Temporal QUFL – Enables the computer to predict the future based on information it learns through binary classification and the QUFL feature.

•Artificial Intelligence via Quantum Neural Network – Enables the computer to completely reconstruct the human brain's cognitive processes and teach itself how to make better decisions and better predict the future.


Overview of the Camp Williams site before construction work began. UDC will be located on the west side of the highway, on what was previously an airfield

Do you want to know more?

http://blog.alexanderhiggins.com/2012/03/18/nsa-building-a-2-billion-quantum-computer-spy-center-98341/


