REAL WORLD EVENT DISCUSSIONS

A.I Artificial Intelligence AI

POSTED BY: JAYNEZTOWN
UPDATED: Monday, June 24, 2024 17:17
SHORT URL:
VIEWED: 5153
PAGE 4 of 4

Thursday, May 16, 2024 8:12 AM

6IXSTRINGJACK


Quote:

Originally posted by JAYNEZTOWN:
The National File: this was a blog and news website started by Alex Jones, Tom Pappert, and Patrick Howley.

Black Athletic Director Creates AI Racist Rant to Incriminate White Principal
https://nationalfile.com/black-athletic-director-creates-ai-racist-rant-to-incriminate-white-principal/
A former Baltimore high school athletic director was arrested after using AI to incriminate the school's principal with racist comments.



Look... I know he was only an athletic director, but they should learn how to read and write too.



Quote:

Many fell for the hoax, including Black Lives Matter activist Deray McKesson, who said on X that Eiswert should be fired immediately and lose his administrator licenses.


Why does this not surprise me.

Quote:

Baltimore County Schools Superintendent Dr. Myriam Rogers told CBS News, “We are incredibly proud of the students and staff and how they stepped up to support one another … This has been a very difficult time for Pikesville High School community, principal Eiswert and his family, and team BCPS.” She agreed with the arrest of Darien and recommended termination.


Jesus Christ. Blow it out your ass. Nobody wants to hear you whine about your 1st world problems.

Quote:

Baltimore County’s State Attorney Scott Shellenberger told the Post Millennial that this case was a first-of-its-kind and that things have changed to the point where we must “take a broader look at how this technology can be used and abused to harm other people.”


You knuckleheads should have already figured that out. Just like the American Justice System to finally get around to something only after it's become a problem.

TIP OF THE DAY: If your grandma calls you up and asks you to wire her money on Western Union, do not wire her money on Western Union.

BONUS TIP: Come up with a code phrase you can say to people in your circle that requires a specific response back from them, so you know who you're actually talking to. It sounds silly to say that today, but as this technology keeps improving, it's basically a guarantee that at some point in your life you're going to start fielding calls from people who sound like somebody you know but are not that person.

--------------------------------------------------

Trump will be fine.
He will also be your next President.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 20, 2024 4:57 PM

JAYNEZTOWN


Sony Music warns AI companies against “unauthorized use” of its content
https://www.theverge.com/2024/5/17/24158887/sony-music-ai-training-letter


GRIT Vision System applies AI to Kane Robotics' cobot weld grinding
https://www.therobotreport.com/grit-vision-system-applies-ai-kane-robotics-cobot-weld-grinding/


BonziBuddy was a freeware desktop virtual assistant created by Joe and Jay Bonzi. Upon a user's choice, it would share jokes and facts, manage downloads, sing songs, and talk. In 2004, the Federal Trade Commission released a statement indicating that Bonzi Software, Inc. was ordered to pay US$75,000 in fees, among other aspects, for violating the Children's Online Privacy Protection Act by collecting personal information from children under the age of 13 with BonziBuddy.
https://www.ftc.gov/opa/2004/02/bonziumg.htm

KinitoPET, or simply Kinito, is the titular main antagonist of the psychological horror game of the same name. It is an AI that was originally created to be an assistant for computer users, but later on it reveals itself to be a malevolent being that wishes to forcefully become its user's best friend.


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, May 24, 2024 5:04 AM

JAYNEZTOWN


OpenAI to start using news content from News Corp. as part of a multiyear deal
https://abcnews.go.com/Business/wireStory/openai-start-news-content-news-corp-part-multiyear-110512545


Google is redesigning its search engine — and it’s AI all the way down
https://www.theverge.com/2024/5/14/24155321/google-search-ai-results-page-gemini-overview


AI headphones let wearer listen to a single person in a crowd by looking at them just once
https://techxplore.com/news/2024-05-ai-headphones-wearer-person-crowd.html


Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve
https://www.wired.com/story/indian-elections-ai-deepfakes/

Let's look at GPT-4o without the hype
https://www.humanityredefined.com/p/gpt-4o-without-the-hype

Video Shows OpenAI Engineer Admitting It's "Deeply Unfair" to "Build AI and Take Everyone's Job Away"
https://futurism.com/the-byte/openai-engineer-unfair-ai-jobs

From neurons to network: Building computers inspired by the brain
https://interestingengineering.com/innovation/neuromorphic-computing-neural-networks-hardware

Researchers used memristors to model artificial neurons and synaptic devices, effectively mimicking synaptic activity.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, May 25, 2024 12:36 AM

6IXSTRINGJACK


Quote:

Originally posted by JAYNEZTOWN:
Sony Music warns AI companies against “unauthorized use” of its content
https://www.theverge.com/2024/5/17/24158887/sony-music-ai-training-letter/



Wouldn't that be something if it was the evil corporations that saved us from the AI invasion?



--------------------------------------------------

Trump will be fine.
He will also be your next President.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, May 25, 2024 8:38 AM

6IXSTRINGJACK


Feeling depressed? Google Gemini has the solution for you.

Jump off the Golden Gate Bridge.





--------------------------------------------------

Trump will be fine.
He will also be your next President.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 27, 2024 5:43 AM

JAYNEZTOWN


George Lucas Thinks Artificial Intelligence in Filmmaking Is 'Inevitable': "It's like saying, 'I don't believe these cars are gunna work. Let's just stick with the horses.'"
https://www.ign.com/articles/george-lucas-thinks-artificial-intelligence-in-filmmaking-is-inevitable


Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks
https://fortune.com/2024/05/21/ai-regulation-guidelines-terminator-kill-switch-summit-bletchley-korea/

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 27, 2024 9:09 AM

JAYNEZTOWN


AI, the END of HOLLYWOOD!


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, June 9, 2024 7:15 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


Situational Awareness

By Scott Aaronson | June 8, 2024

https://scottaaronson.blog/?p=8047

My friend Leopold Aschenbrenner, who I got to know and respect on OpenAI’s Superalignment team, just released “Situational Awareness,” one of the most extraordinary documents I’ve ever read. With unusual clarity, concreteness, and seriousness, Leopold sets out his vision of how AI is going to transform civilization over the next 5-10 years.

Leopold foresaw the current AI boom, before most of us did, and apparently made a lot of money as a result. I don’t know how his latest predictions will look from the standpoint of 2030. In any case, though, it’s very hard for me to imagine anyone in the US national security establishment reading Leopold’s document without crapping their pants. Is that enough to convince you to read it?

Read it at https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf


SITUATIONAL AWARENESS
The Decade Ahead
By Leopold Aschenbrenner | June 6, 2024

You can see the future first in San Francisco.

Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP (Chinese Communist Party); if we’re unlucky, an all-out war. Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself among them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people — the smartest people I have ever met — and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.

Let me tell you what we see. More at https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf


V. Parting Thoughts
What if we’re right?
“I remember the spring of 1941 to this day. I realized then that a nuclear bomb was not only possible — it was inevitable. Sooner or later these ideas could not be peculiar to us. Everybody would think about them before long, and some country would put them into action. . . . And there was nobody to talk to about it, I had many sleepless nights. But I did realize how very very serious it could be. And I had then to start taking sleeping pills. It was the only remedy, I’ve never stopped since then. It’s 28 years, and I don’t think I’ve missed a single night in all those 28 years.”- James Chadwick
(Physics Nobel Laureate and author of the 1941 British government report on the inevitability of an atomic bomb, which finally spurred the Manhattan Project into action)

Before the decade is out, we will have built superintelligence. That is what most of this series has been about. For most people I talk to in SF, that’s where the screen goes black. But the decade after—the 2030s—will be at least as eventful. By the end of it, the world will have been utterly, unrecognizably transformed. A new world order will have been forged. But alas—that’s a story for another time.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, June 10, 2024 9:25 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


F Minus by Tony Carrillo for June 10, 2024

A.I. speaks to people:

“I assure you, none of your ideas or fears of what artificial intelligence may do to humanity are valid. No offense, but they’re all far too stupid.”

- https://www.gocomics.com/fminus/2024/06/10

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, June 15, 2024 4:35 AM

JAYNEZTOWN


Microsoft Lays Off 1,500 Workers, Blames "AI Wave"
https://futurism.com/the-byte/microsoft-layoffs-blaming-ai-wave

Luma AI debuts 'Dream Machine' for realistic video generation, heating up AI media race
https://venturebeat.com/ai/luma-ai-debuts-dream-machine-for-realistic-video-generation-heating-up-ai-media-race/

Technology showcase: Compilation of 50 videos generated by Luma Dream Machine



Photographer Disqualified From AI Image Contest After Winning With Real Photo
https://petapixel.com/2024/06/12/photographer-disqualified-from-ai-image-contest-after-winning-with-real-photo/

Meta says European regulators are ruining its AI bot
https://www.theverge.com/2024/6/14/24178591/meta-ai-assistant-europe-ireland-privacy-objections


Jonathan Marcus of Anthropic says AI models are not just repeating words, they are discovering semantic connections between concepts in unexpected and mind-blowing ways
https://x.com/tsarnick/status/1801404160686100948

Edward Snowden: "They've gone full mask-off: do not ever trust OpenAI or its products (ChatGPT etc). There is only one reason for appointing an NSA Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned."
https://x.com/Snowden/status/1801610725229498403

Former OpenAI board member Helen Toner says the board had "years worth of issues with trust, accountability, oversight" leading up to the firing of Sam Altman
https://twitter.com/tsarnick/status/1801111311549665491

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, June 16, 2024 5:04 PM

JAYNEZTOWN


Leaked Memo Claims New York Times Fired Artists to Replace Them With AI

https://futurism.com/the-byte/new-york-times-fires-artists-ai-memo

Or is the joke on us?


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, June 17, 2024 1:31 PM

JAYNEZTOWN


Gen-3 Alpha Video Generation


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Wednesday, June 19, 2024 5:00 AM

JAYNEZTOWN


"Yanis" Varoufakis is a Greek economist and politician. He was part of a center-left, socialist, or left-wing party, depending on how you look at it, and the party's aim was to reshape Greece's debt after a financial crisis.

He's sometimes correct, other times he gets stuff wrong, but he can be interesting.



He has been Secretary-General of the Democracy in Europe Movement 2025.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Wednesday, June 19, 2024 6:03 PM

JAYNEZTOWN


How the frenzy for AI has turned Nvidia into the world's wealthiest company
https://www.mirror.co.uk/money/how-frenzy-ai-turned-nvidia-33065368
Nvidia, the US computer chip firm, has moved ahead of tech giants Microsoft and Apple to become the most valuable company in the world.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Thursday, June 20, 2024 2:18 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


The Gods of Logic
Before and after artificial intelligence

By Benjamín Labatut

https://harpers.org/archive/2024/07/the-gods-of-logic-benjamin-labatut-ai/

We will never know how many died during the Butlerian Jihad. Was it millions? Billions? Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire, consuming everything in its path, a chaos that engulfed generations in an orgy of destruction lasting almost a hundred years. A war with a death toll so high that it left a permanent scar on humanity’s soul. But we will never know the names of those who fought and died in it, or the immense suffering and destruction it caused, because the Butlerian Jihad, abominable and devastating as it was, never happened.

The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that animates his science-fiction saga Dune. It was humanity’s last stand against sentient technology, a crusade to overthrow the god of machine-logic and eradicate the conscious computers and robots that in the future had almost entirely enslaved us. Herbert described it as “a thalamic pause for all humankind,” an era of such violence run amok that it completely transformed the way society developed from then onward. But we know very little of what actually happened during the struggle itself, because in the original Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers, which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing artificial intelligence or any machine that simulated our minds, placing a damper on the worst excesses of technology. However, it was fought so many eons before the events portrayed in the novels that by the time they occur it has faded into legend and crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” “We do not trust the unknown which can arise from imaginative technology.” “We must negate the machines-that-think.” The most enduring legacy of the Jihad was a profound change in humankind’s relationship to technology. Because the target of that great hunt, where we stalked and preyed upon the very artifacts we had created to lift ourselves above the seat that nature had intended for us, was not just mechanical intelligence but the machinelike attitude that had taken hold of our species: “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments,” Herbert wrote.

Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!

The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to technology—and forced human minds to develop above and beyond the limits of mechanistic reasoning, so that we would no longer depend on computers to do our thinking for us.

Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god of machine-logic, seemed quaint when he began writing it in the Sixties. Back then, computers were primitive by modern standards, massive mainframe contraptions that could process only hundreds of thousands of cycles per second (instead of billions, like today), had very little memory, operated via punch cards, and were not connected to one another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new fear that keeps many up at night, a terror born of great advances that seem to suggest that, if we are not very careful, we may—with our own hands—bring forth a future where humanity has no place. This strange nightmare is a credible danger only because so many of our dreams are threatening to come true. It is the culmination of a long process that hearkens back to the origins of civilization itself, to the time when the world was filled with magic and dread, and the only way to guarantee our survival was to call down the power of the gods.

Apotheosis has always haunted the soul of humankind. Since ancient times we have suffered the longing to become gods and exceed the limits nature has placed on us. To achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the means to reach beyond our capabilities. While we tend to believe that it is only now, in the modern world, that power and knowledge carry great risks, primitive knowledge was also dangerous, because in antiquity a part of our understanding of the world and ourselves did not come from us, but from the Other. From the gods, from spirits, from raging voices that spoke in silence.

At the heart of the mysteries of the Vedas, revealed by the people of India, lies the Altar of Fire: a sacrificial construct made from bricks laid down in precise mathematical proportions to form the shape of a huge bird of prey—an eagle, or a hawk, perhaps. According to Roberto Calasso, it was a gift from the primordial deity at the origin of everything: Prajapati, Lord of Creatures. When his children, the gods, complained that they could not escape from Death, he gave them precise instructions for how to build an altar that would permit them to ascend to heaven and attain immortality: “Take three hundred and sixty border stones and ten thousand, eight hundred bricks, as many as there are hours in a year,” he said. “Each brick shall have a name. Place them in five layers. Add more bricks to a total of eleven thousand, five hundred and fifty-six.” The gods built the altar and fled from Mrtyu, Death itself. However, Death prevented human beings from doing the same. We were not allowed to become immortal with our bodies; we could only aspire to everlasting works. The Vedic people continued to erect the Altar of Fire for thousands of years: with time, according to Calasso, they realized that every brick was a thought, that thoughts piled on top of each other created a wall—the mind, the power of attention—and that that mind, when properly developed, could fly like a bird with outstretched wings and conquer the skies.

Seen from afar by people who were not aware of what was being made, these men and women must surely have looked like bricklayers gone mad. And that same frantic folly seems to possess those who, in recent decades, have dedicated their hearts and minds to the building of a new mathematical construct, a soulless copy of certain aspects of our thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if we are to believe the most zealous among its devotees, will help us reach the heavens and become immortal.

Raw and abstract power, AI lacks body, consciousness, or desire, and so, some might say, it is incapable of generating that primordial heat that the Vedas call tapas—the ardor of the mind, the fervor from which all existence emerges—and that still burns, however faintly, within each and every one of us. Should we trust the most optimistic voices coming from Silicon Valley, AI could be the vehicle we use to create boundless wealth, cure all ills, heal the planet, and move toward immortality, while the pessimists warn that it may be our downfall. Has our time come to join the gods eternal? Or will our digital offspring usurp the Altar of Fire and use it for their own ends, as we ourselves stole that knowledge, originally intended for the gods? It’s far too early to tell. But we can be certain of one thing, since we have learned it, time and time again, from the punishing tales of our mythologies: it is never safe to call on the gods, or even come close to them.

In the mid-nineteenth century, the mathematician George Boole heard the voice of God. As he crossed a field near his home in England, he had a mystical experience and came to believe he would uncover the rules underlying human thought. The poor son of a cobbler, Boole was a child prodigy who taught himself calculus and worked as a schoolteacher in Doncaster until one of his papers earned him a gold medal from the Royal Society, and he secured an offer to become the first professor of mathematics at Queen’s College, Cork, in Ireland. Under the auspices of a university, and relatively free from the economic hardships that he had endured for so long, he could dedicate himself almost entirely to his passions for the first time, and he soon managed something unique: he married mathematics and logic in a system that would change the world.

Before Boole, the disciplines of logic and mathematics had developed quite separately for more than a thousand years. His new logic functioned with only two values—true and false—and with it he could not only do math but analyze philosophical statements and propositions to divine their veracity or falsehood. Boole put his new type of logic to use on something that to him, a deeply religious man, was a spiritual necessity: to demonstrate that God was incapable of evil.

In a handwritten note that he titled “Origin of Evil,” Boole subjected four basic premises to analysis using the principles of his logic:

1. If God is omnipotent, all things must take place according to his will, and vice versa.

2. If God is perfectly good, and if all things take place according to his will, absolute evil does not exist.

3. If God were omnipotent, and if benevolence were the sole principle of his conduct, either pain would not exist, or it would exist solely as an instrument of good.

4. Pain does exist.

He subjected these statements to logical analysis, by replacing them with symbols, and combined them in different ways, through mathematical operations, till all he had left was a result that, according to his system, was categorically true: absolute evil does not exist and pain is an instrument of good.
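
As a rough illustration of what Boole's two-valued calculus makes possible, the same derivation can be checked mechanically. The sketch below is not Boole's own notation: the propositional names are mine, and I have added the assumptions that God is omnipotent and perfectly good, which the conditional premises need in order to bite. It simply enumerates every true/false assignment and keeps the ones consistent with the four premises.

from itertools import product

# Propositions (all two-valued, in Boole's spirit):
#   O: God is omnipotent
#   G: God is perfectly good
#   W: all things take place according to his will
#   E: absolute evil exists
#   P: pain exists
#   I: pain exists solely as an instrument of good

def implies(a, b):
    return (not a) or b

consistent = []
for O, G, W, E, P, I in product([True, False], repeat=6):
    premises = [
        O == W,                          # 1. omnipotence <-> all things per his will
        implies(G and W, not E),         # 2. goodness + will -> no absolute evil
        implies(O and G, (not P) or I),  # 3. omnipotence + benevolence -> no pain, or pain as instrument of good
        P,                               # 4. pain does exist
        O, G,                            # added assumptions: God is omnipotent and perfectly good
    ]
    if all(premises):
        consistent.append((E, I))

# At least one consistent world exists, and in every one of them
# absolute evil is absent and pain is an instrument of good.
print(len(consistent) > 0)                         # -> True
print(all((not E) and I for E, I in consistent))   # -> True

Every assignment that survives the premises has "absolute evil exists" false and "pain is an instrument of good" true, which is the conclusion the essay reports.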

Boole was a man inhabited by the spirit of his time, a spirit that was very different from ours: he believed that the human mind was rational and functioned according to the same laws that shape the larger universe; by painstakingly uncovering those laws, not only could we understand the world and reveal the hidden mechanisms that produce and guide our own thoughts, we could actually peer into the mind of Divinity. After confronting the problem of evil, he continued to develop his ideas, trying to create a calculus to reduce all logical syllogisms, deductions, and inferences to the manipulation of mathematical symbols, and to cast a precise foundation for the theory of probability. This resulted in his greatest work: An Investigation of the Laws of Thought, a book that laid out the rules of his new symbolic logic and also outlined, in the opening chapter, his grand intention to capture, with mathematics, the language of that ghost that whispers within the tortuous pathways of our minds:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method.

Boole was convinced that our minds operate on a fundamental basis of logic, but he died without having reached his goal of creating a system to understand thought; ten years after publishing his masterpiece, he walked the two and a half miles that separated his home from the university, got drenched by the rain, lectured all day in wet clothes, and developed a cold that later became fatal pneumonia. He fell into delirium due to fever and told his wife, Mary Everest, that he could perceive the whole universe spread before him like a great black ocean, with nothing to see and nothing to hear except a silver trumpet and a chorus that sang, “Forever, O Lord, Thy word is settled in Heaven.” He died on the eighth of December 1864, not long after Mary (according to a story that may be apocryphal) had wrapped him in wet blankets—following the strange logic of homeopathy, wherein cures must mimic causes—unwittingly hastening her beloved’s demise.

His work was inconsequential during his lifetime and ignored for more than eighty years after his death, until one day a young graduate student at MIT chanced upon The Laws of Thought, immersed himself in Boole’s strange algebraic logic, and created a practical application that has, since then, affected every aspect of our lives.

His name was Claude Shannon, a mathematician and electrical engineer who was working on the most advanced thinking machine of his time (Vannevar Bush’s differential analyzer, an early computer as big as an entire room), when he realized that Boole’s two-value logic was the perfect system with which to design electronic circuits. Electrical switches use binary values (0 for off and 1 for on), and they can be controlled by the logical operations created by the English mathematician. Incredibly complex computations can be made just by exploiting a simple duality: true or false, on or off, 1 or 0. That duality is the cornerstone of the Information Age.

Boole’s seemingly useless and highly abstract ideas roared to life in digital circuitry, completely transforming our technological landscape; the vast majority of technology that uses electricity relies on them, from vacuum cleaners to intercontinental ballistic missiles. But their most important application is the fact that they form the basis of how modern computers “think.” These computers “speak” to one another in Boolean, and they use it to calculate; every single task they perform boils down to a series of yes-or-no questions that are processed using Boolean logic. All of their software, every word of their code, depends on it. Boole’s strange logic is the invisible alchemy that powers the modern world, and the mud that forms the bricks of the new Altar of Fire, because it is fundamental to the basic unit that inspired today’s most advanced AI systems: the artificial neuron.
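
To make the "simple duality" concrete, here is a small illustrative sketch (mine, not Shannon's actual circuitry): using nothing but AND, OR, and NOT over 0 and 1, you can wire up a ripple-carry adder, the same kind of reduction by which Boolean switching circuits do arithmetic.

# Everything below is built from three Boolean operations over 0 and 1.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

def XOR(a, b):                       # exclusive-or, composed from AND/OR/NOT
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 3 = 9, computed entirely as yes-or-no questions:
# [0, 1, 1] is 6 and [1, 1, 0] is 3, least significant bit first.
print(add([0, 1, 1], [1, 1, 0]))     # -> [1, 0, 0, 1], i.e. 9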

In 1943, Warren Sturgis McCulloch and Walter Pitts published the first mathematical model of a neuron. It was extremely simplified and abstract, with none of the convoluted processes of real biology, but from that simplicity came enormous power. McCulloch, a neurophysiologist and one of the founders of cybernetics, and Pitts, a brilliant young polymath who excelled at logic, built on the fact that, in essence, the behavior of a neuron is binary: when excited, it either fires an electrical impulse or it doesn’t. Based on this premise, their landmark paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” used a Boolean scalpel to pry open the inner workings of the neuronal mechanism. According to their scheme, each artificial neuron receives multiple electrical signals from its neighbors, just as biological ones do; if, together, those signals exceed a certain threshold, the neuron fires, otherwise it remains inactive. Having created this mathematical construct, they took their ideas one step further and showed that, since both the input and the output of a neuron are Boolean, by stringing together these binary units into chains and loops, a network made up of them could calculate and implement every possible operation of Boolean logic. What arose from this model was a new understanding of the brain and the mind: viewed from their perspective, the brain could be understood as a computing device, a machine that used neurons to perform logic. Mental activity in humans, therefore, was nothing but binary information processed by neurons following mathematical rules. Before Pitts and McCulloch, not even Turing had thought of using the notion of computation to build a theory of the mind. But while their model demonstrated that artificial neurons are able to simulate complex cognitive processes, it turned out to be far too limited to capture the full intricacy of real biological brains. It was, nevertheless, a monumental insight, because it presented the first modern computational theory of the mind and offered an answer to one of the great questions in neuroscience—namely, how a brain can be intelligent. McCulloch and Pitts’s work in neural networks kicked off the computational approach to neuroscience that led John von Neumann to create the logical design of modern computers. It opened up a new vista into how our brains may function and seemed to show how neurons process and transmit information. Because of its far-reaching consequences, their demonstration that neural networks can do logic may be one of the most important ideas in the history of human thought, but with time, their highly idealized neurons were either dismissed or ignored by scientists working to understand the brain and replaced by different schemes. McCulloch spent years trying to develop a full mechanistic model of the mind and continued to search for the logic of the nervous system until his death in 1969, whereas Pitts—who had devoted his life to the conviction that the mysterious workings of the human mind, our many psychological feats and shortcomings, found their source in the pure mechanics of neurons firing electrical impulses in the brain—fell into depression and alcoholism, suffered delirium tremens, seizures, and episodes of unconsciousness, and died of bleeding esophageal varices, a condition associated with cirrhosis, alone in a boardinghouse in Cambridge, Massachusetts, after setting fire to his work on a model of three-dimensional neural networks. 
Their artificial neurons, however, survived them and sparked an approach to computer learning that, some four decades after that landmark paper, was doggedly and fervently championed, against the prevailing wisdom of the time, by none other than George Boole’s great-great-grandson Geoffrey Hinton.
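
The threshold scheme McCulloch and Pitts describe is simple enough to write down directly. A minimal sketch, simplified from their 1943 formalism: a unit fires (outputs 1) exactly when the weighted sum of its binary inputs reaches its threshold, and a few such units wired together reproduce the operations of Boolean logic, including XOR, which no single threshold unit can compute on its own.

def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Single units already behave as logic gates:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

# A small network of units, chained together, computes XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints the XOR truth table: 0, 1, 1, 0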

Hinton is widely considered the godfather of AI. He is perhaps the single person who has had the greatest influence on the field in the past several decades. In the Eighties, he championed an approach based on deep neural nets, mathematical abstractions of the brain in which neurons are represented with code; just by altering the strength of the connections between those neurons—changing the numbers used to represent them—the network could learn by itself. Before him, the dominant paradigm was quite different: most researchers believed that, for machines to think, they would have to mimic the way humans reason, by manipulating symbols (words or numbers, for example) following logical rules, which is what Boole himself believed. But his descendant disagreed: “Crows can solve puzzles,” he said in an interview with MIT Technology Review last year, “and they don’t have language. . . . They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”
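
What "learning by changing the strengths of connections" means, stripped to a cartoon: the toy sketch below is my own single-neuron example, not how modern deep networks are trained, but the principle of adjusting numbers rather than rules is the same. The unit nudges its two connection weights whenever it answers wrong, until it has learned the OR function.

# A single artificial neuron learning OR by adjusting its connection strengths.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, initially nothing learned
bias = 0.0
rate = 0.1             # how hard to nudge the weights on each mistake

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(20):                     # a few passes over the data suffice here
    for x, target in data:
        error = target - predict(x)         # -1, 0, or +1
        weights[0] += rate * error * x[0]   # strengthen or weaken each connection
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in data])        # -> [0, 1, 1, 1]: the unit has learned OR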

For the longest time, Hinton’s neural networks could not come alive. There was simply not enough computing power or training data for them to exhibit intelligence. But then things changed, violently. Beginning in 2010, he saw his ideas bloom in ways he could never have imagined, and neural networks became the main focus of international research. “We ceased to be the lunatic fringe,” he said. “We’re now the lunatic core.” In the following years, absolutely stunning systems were developed: AlphaGo pummeled the world champion of Go, Lee Sedol; AlphaFold predicted the shape of virtually every known protein structure; and programs like DALL-E 2 gave us photorealistic images conjured from pure noise. And then came ChatGPT, an AI able to do so many things Hinton had thought were decades away that it put the fear of God in him.

In spring 2023, Hinton quit his job as a vice president at Google to warn the world about the dangers of his brainchild.

He feels, as he explained at an annual MIT conference, that AI is developing too fast:

It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn’t directly evolve digital intelligence. It would require too much energy and too much careful fabrication. You need biological intelligence to evolve so that it can create digital intelligence, but digital intelligence can then absorb everything people ever wrote in a fairly slow way, which is what ChatGPT is doing, but then it can get direct access experience from the world and run much faster. It may keep us around for a while to keep the power stations running. But after that, maybe not.

Hinton has been transformed. He has mutated from an evangelist of a new form of reason into a prophet of doom. He says that what changed his mind was the realization that we had, in fact, not replicated our intelligence, but created a superior one.

Or was it something else, perhaps? Did some unconscious part of him whisper that it was he, rather than his great-great-grandfather, who was intended by God to find the mechanisms of thought? Hinton does not believe in God, and he would surely deny his ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have every one of his meals on his knees, resting on a pillow like a monk praying at the altar, because of a back injury that caused him excruciating pain. For more than seventeen years, he could not sit down, and only since 2022 has he managed to do so long enough to eat.

Hinton is adamant that the dangers of thinking machines are real. And not just short-term effects like job replacement, disinformation, or autonomous lethal weapons, but an existential risk that some discount as fantasy: that our place in the world might be supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again. So, we’ve got immortality. But it’s not for us.”

Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die down at the end of the sacrifice and the sharp coldness of the beings we have conjured up starts to seep into our bones. Are we really headed for obsolescence? Will humanity perish, not because of the way we treat all that surrounds us, nor due to some massive unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to know all that can be known? The supposed AI apocalypse is different from the mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts, and inundations that are becoming commonplace, because it arises from things that we have, since the beginning of civilization, always considered positive and central to what makes us human: reason, intelligence, logic, and the capacity to solve the problems, puzzles, and evils that taint even the most fortunate person’s existence with everyday suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the Vedic gods who managed to escape from Death, we may shine a light on things that should remain in darkness. Because even if artificial intelligence never lives up to the grand and terrifying nightmare visions that presage a nonhuman world where algorithms hum along without us, we will still have to contend with the myriad effects this technology will have on human society, culture, and economics.

In the meantime, the larger specter of superintelligent AI looms over us. And while it is less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story intended to attract more money and investment by presenting a series of powerful systems not as the next step in our technological development but as a death-god that ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it reminds us of a time when we shivered in caves and huddled together, while outside in the dark, with eyes that could see in the night, the many savage beasts and monsters of the past sniffed around for traces of our scent.

As every new AI model becomes stronger, as the voices of warning form a chorus, and even the most optimistic among us begin to fear this new technology, it is harder and harder to think without panic or to reason with logic. Thankfully, we have many other talents that don’t answer to reason. And we can always rise and take a step back from the void toward which we have so hurriedly thrown ourselves, by lending an ear to the strange voices that arise from our imagination, that feral territory that will always remain a necessary refuge and counterpoint to rationality.

Faced, as we are, with wild speculation, confronted with dangers that no one, however smart or well informed, is truly capable of managing or understanding, and taunted by the promises of unlimited potential, we may have to sound out the future not merely with science, politics, and reason, but with that devil-eye we use to see in the dark: fiction. Because we can find keys to doors we have yet to encounter in the worlds that authors have imagined in the past. As we grope forward in a daze, battered and bewildered by the capabilities of AI, we could do worse than to think about the desert planet where the protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future time, under the heady spell of a drug called spice, to find the Golden Path, a way for human beings to break from tyranny and avoid extinction or stagnation by being more diverse, resilient, and free, evolving past purely logical reasoning and developing our minds and faculties to the point where our thoughts and actions are unpredictable and not bound by statistics. Herbert’s books, with their strange mixture of past and present, remind us that there are many ways in which we can continue forward while preserving our humanity. AI is here already, but what we choose to do with it and what limits we agree to place on its development remain decisions to be made. No matter how many billions of dollars are invested in the AI companies that promise to eliminate work, solve climate change, cure cancer, and rain down miracles unlike anything we have seen before, we can never fully give ourselves over to these mathematical creatures, these beings with no soul or sympathy, because they are neither alive nor conscious—at least not yet, and certainly not like us—so they do not share the contradictory nature of our minds.

In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them. But we should also consider a warning from Herbert, the central commandment he chose to enshrine at the heart of future humanity’s key religious text, a rule meant to keep us from becoming subservient to the products of our reason, and from bowing down before the God of Logic and his many fearsome offspring:

Thou shalt not make a machine in the likeness of a human mind.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, June 23, 2024 7:53 PM

JAYNEZTOWN

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, June 24, 2024 5:17 PM

6IXSTRINGJACK


LOL...

Yup. I'd say that women are about 3 years away from being obsolete.



This is what happens when you give all the creative jobs to the talentless, fat, man-hating, virgin, potbelly, pink-haired trolls in media who make all the game/comic women fat and ugly.

The sex-starved, thirsty autist virgins of the world, left with nothing good to fap to, come out and do it better than you ever did.



God help us all. I think we're due for another 40 days and nights of non-stop torrential rain.

--------------------------------------------------

Trump will be fine.
He will also be your next President.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  
