REAL WORLD EVENT DISCUSSIONS

A.I Artificial Intelligence AI

POSTED BY: JAYNEZTOWN
UPDATED: Friday, January 31, 2025 17:27
VIEWED: 8732
PAGE 6 of 6

Wednesday, December 11, 2024 7:04 PM

JAYNEZTOWN


'I used Grok's new image generation model to reinterpret Greek myths and epic poems.'

https://x.com/JamesLucasIT/status/1866923530413031644

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, December 16, 2024 12:20 PM

SIGNYM

I believe in solving problems, not sharing them.


Whatever AI's other potential uses, it is actively being used to censor, gaslight, delete, and in other ways brainwash Americans (and probably other peoples as well).

A smattering of stories, just from today:

Quote:

Ghosted By ChatGPT: How Jonathan Turley Was First Defamed & Then Deleted By AI

It is not every day that you achieve the status of “he-who-must-not-be-named.”

But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT,
according to the New York Times, Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be false stories generated about us all by ChatGPT in the past. The company appears to have corrected the problem not by erasing the error but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT.

As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”



This is the problem with large language models. People talk about all kinds of things that aren't even real. Angels. Dragons. Saddam's WMD. Events that never happened. An AI "learning" from online conversations has no way of independently sorting fact from fiction. Also, given that most online conversations are prompted by a "leftwing", deep state-controlled media, all AIs have a "leftwing" bias.
You wind up with GIGO (garbage in, garbage out).

Quote:

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer was a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, who passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name was used by a Chechen rebel on a terror watch list. The result was a snowballing association of the professor, who found himself facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

The company’s lack of transparency and responsiveness has added to concerns over these incidents. Ironically, many of us are used to false attacks on the Internet and false accounts about us. But this company can move individuals into a type of online purgatory for no other reason than that its AI generated a false story whose subject had the temerity to object.

You can either be seen falsely as a felon or be unseen entirely on the ubiquitous information system. Capone or Casper, gangster or a ghost — your choice.

Microsoft owns almost half of the equity in OpenAI. Ironically, I previously criticized Microsoft founder and billionaire Bill Gates for his push to use artificial intelligence to combat not just “digital misinformation” but “political polarization.” Gates sees the unleashing of AI as a way to stop “various conspiracy theories” and prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

I do not believe that my own ghosting was retaliation for such criticism. Moreover, like the other desaparecidos, I am still visible on sites and through other systems. But it does show how these companies can use these powerful systems to remove all references to individuals. Moreover, corporate executives may not be particularly motivated to correct such ghosting, particularly in the absence of any liability or accountability.

That means that any solution is likely to come only from legislative action. AI’s influence is expanding exponentially, and this new technology has obvious benefits. However, it also has considerable risks that should be addressed.

Ironically, Professor Zittrain has written on the “right to be forgotten” in tech and digital spaces. Yet he never asked to be erased or blocked by OpenAI’s algorithms.

The question is whether, in addition to a negative right to be forgotten, there is a positive right to be known. Think of it as the Heisenberg moment, where the Walter Whites of the world demand that ChatGPT “say my name.” In the U.S., there is no established precedent for such a demand.

There is no reason to see these exclusions or erasures as some dark corporate conspiracy or robot retaliation. It seems to be a default position when the system commits egregious, potentially expensive errors — which might be even more disturbing. It raises the prospect of algorithms sending people into the Internet abyss with little recourse or response. You are simply ghosted because the system made a mistake, and your name is now triggering for the system.

This is all well short of HAL 9000 saying “I’m sorry, Dave, I’m afraid I can’t do that” in an AI homicidal rage. Thus far, this is a small haunt of digital ghosts. However, it is an example of the largely unchecked power of these systems and the relatively uncharted waters ahead.


https://jonathanturley.org/2024/12/16/ghosted-by-chatgpt-how-i-was-first-defamed-and-then-deleted-by-ai/


-----------
"It may be dangerous to be America's enemy, but to be America's friend is fatal." - Henry Kissinger


AMERICANS SUPPORT AMERICA



Monday, December 16, 2024 12:24 PM

SIGNYM



Hedge Fund CIO: "As AI Advances, It'll Be Next To Impossible To Distinguish Fact From Fiction"
https://www.zerohedge.com/crypto/hedge-fund-cio-ai-advances-itll-be-next-impossible-distinguish-fact-fiction





Monday, December 16, 2024 12:27 PM

SIGNYM



Suspicious OpenAI Whistleblower Death Ruled Suicide

https://www.zerohedge.com/political/suspicious-openai-whistleblower-death-ruled-suicide





Monday, December 16, 2024 7:17 PM

JAYNEZTOWN


more on the whistleblower

OpenAI whistleblower found dead at 26 in San Francisco apartment

https://techcrunch.com/2024/12/13/openai-whistleblower-found-dead-in-san-francisco-apartment/



Monday, December 23, 2024 7:04 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


Meet Botto, the AI ‘machine artist’ making millions of dollars

By Lucy Handley | Mon, Dec 23, 2024

https://www.cnbc.com/2024/12/23/botto-the-ai-machine-artist-making-millions-of-dollars.html


‘Machine artists’

Klingemann believes that, in the near future, because of advances in AI and machine learning, ”‘machine artists’ will be able to create more interesting work than humans,” according to a post on his website. One of Klingemann’s pieces became the first AI-produced work to be sold by Sotheby’s in Europe, with a 2019 auction fetching £40,000.

The value of Botto’s images appears to be increasing, Hudson said.

Two early images put up for auction during a quiet period for the AI art market were given reserve prices of around $13,000 to $15,000 by the BottoDAO, but they didn’t sell. However, at an October auction at Sotheby’s New York, the same images — “Expose Stream” and “Exorbitant Stage” — sold for $276,000 in total, Hudson said. Botto is also the third-highest seller by total sales on the SuperRare platform for the last year, as of Dec. 12.

Questions of authorship

Is Botto an artist in its own right? “It’s a thing of perception,” Hudson said. “Certainly, Botto right now is a collaboration between machine and crowd. The human hands are certainly there, but the setup is such that Botto has maintained the central role of authorship,” he said.

Botto has the potential to change the way art — and artists — are perceived, Hudson said. “With Botto, it strips away this myth of the lone genius artist and shows how artwork is really a collective ... meaning-making process. And when you have a deluge of AI-generated content, that’s going to be even more important of a process,” he said.



Wednesday, December 25, 2024 8:51 AM

JAYNEZTOWN


ChatGPT preferred over in-person lessons as language learning method among young Japanese

https://mainichi.jp/english/articles/20241218/p2a/00m/0li/017000c#cxrecs_s


Saturday, December 28, 2024 8:26 PM

JAYNEZTOWN


Yes indeed, Laura Loomer is often exposed as a 'Loon,'

but does that give Musk the right to censor the web, shadow ban people,

and promote his H1B Open Border Agenda and make a mockery of that whole 'MAGA' thing?

Elon Musk accused of censoring right-wing X accounts in heated immigration dispute
https://www.lbc.co.uk/news/elon-musk-accused-censoring-right-wing-x-accounts-heated-immigration-dispute/

Even the Left media now admits it's a bias against conservatives:
‘Free Speech Apparently Doesn’t Apply to Criticizing Musk’

seems like Elon Musk was an actor, a fake?


the Leftwing news admits Musk is going too far


some even say that Musk Insinuates Laura Loomer Is An Al Qaeda-Like Terrorist

Elon Musk ADMITS To Silencing Laura Loomer Over Immigration!
https://rumble.com/v63lmzk-elon-musk-admits-to-silencing-laura-loomer-over-immigration.html



like a Fascist, with Joe Stalin-style Communist words?

Musk Hits Back Amid MAGA Infighting, Says ‘Contemptible Fools’ Must Be Purged From The GOP ‘Root And Stem’
https://www.mediaite.com/politics/musk-hits-back-amid-maga-infighting-says-contemptible-fools-must-be-purged-from-the-gop-root-and-stem/




Sunday, December 29, 2024 8:04 AM

SECOND



The 2024 Stanford AI Index Report charts the improvement of AI. Practically everything is now performing at human level except for high-level math, and even that has been on a rocket recently, going from nothing to 90% in only two years.


But here's the ugly:

The worst AI might as well be Donald Trump. This is by far the biggest short-term Achilles heel of AI. Nobody can rely on it for anything serious until this gets cleaned up.
https://jabberwocking.com/the-good-and-the-ugly-of-ai-performance/



Sunday, December 29, 2024 9:37 AM

6IXSTRINGJACK


Quote:

Originally posted by JAYNEZTOWN:
seems like Elon Musk was an actor, a fake?



Stop drinking the Kool Aid.


We aren't the Hive Mind of the Democrats and the Legacy Media, so don't expect us all to be in lock-step with each other. You'll never put together 5 minute compilation videos of all of us saying the exact same sentence verbatim.

Vivek and Elon might have been blindsided by the pushback here.

Let's see how this plays out.

Trump hasn't even been fucking inaugurated yet and everybody has a fucking opinion about everything.

--------------------------------------------------

"My only fear of death is coming back to this bitch reincarnated." ~Tupac Shakur


Sunday, December 29, 2024 10:20 AM

SECOND



Quote:

Originally posted by 6ixStringJack:

Vivek and Elon might have been blindsided by the pushback here.

Let's see how this plays out.

Trump hasn't even been fucking inaugurated yet and everybody has a fucking opinion about everything.

Trump, Vivek, and Elon were NOT blindsided. It was absolutely predictable that poor white trash Trumptards would be opposed to everything the wealthy want and the wealthy knew how the white trash would respond, but it is too late for white trash to stop the inauguration. In the next four years, Trump doesn't have to please the poor Trumptards, except those who are in Congress.

As a matter of fact, Trump can goof-off the next four years, if that pleases him. He can go golfing every day and watch himself on TV when he is not golfing, if he feels like it.

Trump has spoken and the Indians took notes:

Donald Trump backs Elon Musk and H-1B visas amid MAGA meltdown

By Chidanand Rajghatta | Dec 29, 2024, 18:29 IST

https://timesofindia.indiatimes.com/world/us/donald-trump-backs-elon-musk-and-h-1b-visas-amid-maga-meltdown/articleshow/116769297.cms


President-elect Donald Trump has backed Elon Musk's support for the H-1B visa program, despite previous opposition. Trump's endorsement has caused a rift among his supporters. Big tech figures such as Musk argue that the program is essential for maintaining US industry leadership.

President-elect Donald Trump threw his weight behind Elon Musk and his support for H-1B visas even as the hot-button issue continued to roil the MAGA-sphere over the weekend, boiling over into anti-Indian invective.

American nativists raged against big tech and Indian guest workers for purportedly exploiting the H-1B system to displace US-born graduates, as MAGA hardliners began directing their anger against even Indian-Americans and legal immigrants asking them to "go back to your country."

Despite having previously said he was opposed to H-1B visas, Trump stunned his MAGA base on Sunday by siding with Musk in the scorching debate, telling the New York Post: "I've always liked the visas, I have always been in favor of the visas. That's why we have them."

"I have many H-1B visas on my properties. I've been a believer in H-1B. I have used it many times. It's a great program," Trump added, even as MAGA hardliners exploded against "tech bros" like Musk, accusing them of hoodwinking Trump to enrich themselves with the argument that skilled foreign workers are needed to maintain American primacy.

Trump himself was outed for using guest worker visas, including H-1B, H-2A (which provides temporary visas for agricultural workers) and H-2B program (for seasonal workers in tourism and hospitality sectors) to staff his properties, prompting him to come out in support of the existing system despite having embraced MAGA's "American workers first" fervor in the past.

During his 2016 Presidential campaign he had pledged to "end forever the use of the H-1B as a cheap labor program, and institute an absolute requirement to hire American workers first for every visa and immigration program." The promise was never kept although the first Trump administration tightened H-1B regulations as part of an "extreme vetting" approach that made it more difficult to get the visa.

Musk and other tech stalwarts have argued that the H-1B visa program is critical to ensuring US companies can find highly skilled labor across the world to maintain American primacy. "The reason I'm in America along with so many critical people who built SpaceX, Tesla, and hundreds of other companies that made America strong is because of H-1B," the South Africa-born Musk, who was himself a H-1B visa recipient before he became a US citizen, told critics on his platform X.

As the debate got more contentious and MAGA hardliners accused him and his "tech bros" of exercising outsized influence over Trump and misrepresenting the H-1B "scam" to enrich themselves, Musk exploded. "Take a big step back and F-K YOURSELF in the face. I will go to war on this issue the likes of which you cannot possibly comprehend," he added.

MAGA hardliners, including Trump surrogate Steve Bannon, agent provocateur Laura Loomer, and commentator Ann Coulter, joined forces against Musk and the so-called Big Tech for betraying American workers. "Someone please notify 'Child Protective Services' — need to do a 'wellness check' on this toddler," sneered Bannon, a former White House counselor and Trump adviser, even as Democrats and liberals appeared to enjoy the circular firing squad within MAGA.

Loomer, who has been banished from Trump's inner circle, continued to take on Musk on his own platform X, accusing him of censoring her.

"Forget about immigration for one second. The biggest threat to our country, our freedom and humanity is the unchecked power of technocrat billionaires who have god complexes, access to defense contracts, and openly declare war against dissenters," she wrote.

Comments from India:

Kumar - 1 hour ago
Trump and his educated modern family and Musk know very well that America is number one due to hard work of many Indian engineers, scientists doctors, entrepreneurs, CEO and techies. Old whites in USA have colonial hangover and will get used to it after some time


Bornoffire - 1 hour ago
The racists white trash that make a big part of this MAGA joke cannot face the fact that India is churning out far more skilled professionals than America because their culture does not encourage education especially high-skilled ones.





Tuesday, December 31, 2024 2:30 PM

SECOND



Quote:

Originally posted by 6ixStringJack:
Quote:

Originally posted by JAYNEZTOWN:
seems like Elon Musk was an actor, a fake?



Stop drinking the Kool Aid.


We aren't the Hive Mind of the Democrats and the Legacy Media, so don't expect us all to be in lock-step with each other. You'll never put together 5 minute compilation videos of all of us saying the exact same sentence verbatim.

Vivek and Elon might have been blindsided by the pushback here.

Let's see how this plays out.

Trump hasn't even been fucking inaugurated yet and everybody has a fucking opinion about everything.

--------------------------------------------------

"My only fear of death is coming back to this bitch reincarnated." ~Tupac Shakur

6ix, people who are assholes don't suddenly change. They don't even slowly change. Trump has been a goddamn cheating, lying crook all his life. He and I know he doesn't have to change because there is a vast number of Americans who are chumps, and they can't grasp reality, and they can't change what they are, either, despite whatever stupid asshole tricks Trump plays on them. Trump's Trumptards are a bunch of lazy robots who don't update their maps of the world:

Why the world needs lazier robots

To waste less energy, language models, self-driving cars and industrial robots need to think less.

By Samanth Subramanian and Emily Wright | Dec 31, 2024

https://www.washingtonpost.com/climate-solutions/interactive/2024/ai-energy-consumption-robots/

EINDHOVEN, Netherlands — On an indoor soccer field here, a championship-winning team is running its drills. Each player stakes out a position, makes precise adjustments and fires the orange ball at another.

On the sidelines, manager René van de Molengraft paces and observes. He’d yell instructions at his players if he could — but his team consists of five thigh-high robots on wheels, with names like Zessi and Robodinho. They move on their own; during each 30-minute game, no human controls them.

Molengraft, chair of robotics at Eindhoven University of Technology, hopes to use his team to help solve a pressing dilemma: how to get the increasing numbers of automatons — industrial robots, drones, self-driving cars, self-learning AI algorithms, robots with ChatGPT brains and more — to consume less energy.

The players on the Tech United Eindhoven team, part of RoboCup, an annual championship that started in 1997, are not unlike those that build airplanes in factories or stack shelves in warehouses. At the moment, though, Molengraft’s players can’t last a full game. “Typically, each robot has to get two fresh lithium-ion batteries at the half,” Molengraft said.

Grappling with power-thirsty robots is difficult and inescapable. Robots in factories — as well as large AI language models — have advanced so far that their promise of easing human labor is in the here and now. But their energy costs are mounting rapidly, and it’s all because of data.

Many robots run software or language models stored in the cloud, which drives up data center energy demand. Beyond that, though, robots and AI models share one crucial characteristic: Whether to move around, conduct conversations or solve problems, they function by constantly taking in and computing increasingly vast quantities of data.

It’s a brute-force approach to automation. Processing all that data makes them such energy guzzlers that their planet-warming pollution could outweigh any benefits they offer.

Data centers worldwide generate only around 1 percent of energy-related greenhouse gas emissions. But by 2026, data centers are projected to use double the electricity they used in 2022 as demand for data processing and computing grows. If the world is to reach net zero emissions by 2050, the International Energy Agency estimates, the carbon footprint of data must halve in the next five years.

One solution, Molengraft thinks, might lie in “lazy robotics,” a cheeky term to describe machines doing less and taking shortcuts — in other words, behaving more like human beings. He and his students are deliberately making their robot players lazier — reengineering them to become more efficient.

It’s a complex challenge, even in the constrained field of robot soccer — let alone in the universe of industrial robotics and AI. There may be ceilings for laziness: limits to how much superfluous energy use can be stripped away before robots stop functioning as they should.

Still, Molengraft said, “The truth is: Robots are still doing a lot of things that they shouldn’t be doing.”

The upside of laziness

To waste less energy, robots need to do less of everything: move less, and think less, and sense less.

They need to focus only on what’s important at any particular moment. Which, after all, is what humans do, even if we don’t always realize it.

At the moment, Molengraft’s robot players run their various systems — the capturing of feeds from sensors and cameras, the computations, the movements, the mutual communications — dozens of times a second. By synthesizing all these signals, each robot builds a mental representation of its environment — a model of where it is in relation to everything else.

The abilities of Molengraft’s robots, and of most other automatons, hinge on these “world models.” The concept of the world model arose as early as the 1970s, but back then, this kind of software was so limited that it ran on much less data and therefore ate up much less energy. Thinking was cheap. As robots became more sophisticated, their world models were able to incorporate more data from their surroundings. Out of inertia, researchers kept designing the robots to pay attention to every detail in their world models — an inefficient, overly complicated way of being.

Until now, each of Molengraft’s robots has been forming a hypothesis of where it is on the field, testing that hypothesis with the details it gathers, updating its world model and communicating that model to every other robot on the field. A soccer game is highly dynamic. “If a robot would not look around for, let’s say a fraction of a second, its world has completely changed,” he said.

But humans make decisions differently, even in situations of high uncertainty. Our sensory systems take in roughly a billion bits of information every second — but human thinking proceeds at around 10 bits per second, sifting only the most salient inputs. Molengraft offered the example of a driver on a dark, rainy night, peering ahead for oncoming traffic and a turnoff. The driver slows down, Molengraft said, “and makes little, local hypotheses: ‘What can this be? What can that be?’ Then you continue at a safe speed to build more evidence.” The success of a human driver depends on focusing on the road just ahead. “If you’d actually need to assess the whole environment in full detail all the time, you couldn’t do it.” Laziness, in this case, is efficiency.

Molengraft’s research team is trying to instill this principle in its robots. If Zessi finds itself defending against an opponent with the ball, it needs to compute its own position relative only to its dribbling rival, and not relative to attackers far ahead. “It doesn’t need to look 60 times a second at the white center line on the field to calibrate itself precisely,” Molengraft said.

Over short distances, Zessi can also keep track of its position in simpler ways. It can, for instance, use the odometer on its wheels, a low-energy mechanical input. The “omnicamera,” a video camera that sticks out of each robot like a top hat over a Victorian gentleman, and that captures a constant feed of the terrain in a 20-foot radius, can be turned off for brief periods. Only if Zessi feels unusually uncertain about where it is on the field does it then need to scan its surroundings.
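The uncertainty-gated sensing described in the article can be sketched in a few lines. This is a hypothetical illustration, not Tech United's actual code; the costs, drift rate, threshold, and step rate are made-up numbers chosen only to show the shape of the saving.

```python
# Hypothetical sketch of "lazy" localization: dead-reckon on cheap wheel
# odometry, and fire the expensive omnicamera only when accumulated position
# uncertainty crosses a threshold. All numbers are illustrative.

ODOMETRY_COST = 1           # arbitrary energy units per control step
CAMERA_COST = 50            # a full omnicamera scan costs far more
DRIFT_PER_STEP_MM = 20      # uncertainty added by each odometry-only step
UNCERTAINTY_LIMIT_MM = 500  # rescan once drift could exceed half a meter

def run(steps: int, lazy: bool) -> tuple[int, int]:
    """Return (energy_spent, camera_scans) over a number of control steps."""
    energy, scans, uncertainty = 0, 0, 0
    for _ in range(steps):
        if lazy:
            energy += ODOMETRY_COST
            uncertainty += DRIFT_PER_STEP_MM
            if uncertainty > UNCERTAINTY_LIMIT_MM:  # scan only when unsure
                energy += CAMERA_COST
                scans += 1
                uncertainty = 0
        else:
            # "Busy" baseline: run the full sensing pipeline every step.
            energy += ODOMETRY_COST + CAMERA_COST
            scans += 1
    return energy, scans

busy_energy, busy_scans = run(1800, lazy=False)  # ~30-minute half at 1 Hz
lazy_energy, lazy_scans = run(1800, lazy=True)
print(busy_energy, lazy_energy)  # 91800 vs 5250 with these made-up numbers
```

With these toy figures the lazy robot spends under 6 percent of the baseline energy, which is the same order of saving as the 80 percent cut Molengraft estimates; the real gain depends entirely on how expensive sensing is relative to odometry.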

Talking less also means consuming less energy. At the moment, Molengraft’s robots communicate with one another all the time, sending signals about what they’re seeing and to whom they will pass the ball. But if the robots focused on the ball, they’d see it coming their way. They’d play, in other words, like a well-oiled human soccer team, in which the players never shout each other’s names before sending the ball their way.

If all these efficiencies, and some others, could be realized, Molengraft estimated that his robots could cut their energy use by 80 percent. A single battery could last three or four games instead of just a half. Tech United Eindhoven hasn’t yet played competitive games in its “lazy” mode, because its engineers are still working on perfecting the code. Molengraft said that after seven years of work, they’re not even halfway to their goal.

Progress is slow. An automaton is a complex system, and changing an element in one module may have an unforeseen effect in another. But Molengraft sees this work as indispensable if the forthcoming age of machines is to be a cleaner time as well.

A multipurpose robot

Lazy robotics is already percolating out of university labs and into the R&D wings of corporations.

One Eindhoven start-up, Avular, offers bespoke robots on wheels or drones to clean airports and supermarket warehouses, for instance, or to measure the density of tree cover. These robots are cheaper than human labor, and they’re greener to boot; sending people up in a plane to survey a forest expends much more carbon than relying on battery-powered drones.

What Albert Maas, Avular’s CEO, really wants to do, though, is develop a multipurpose robot, “something that can load a dishwasher, clean a toilet and shelve products in a supermarket.” AI-enabled software makes that kind of nimbleness possible, Maas said, but it will also devour enormous amounts of energy.

On a poured-cement test floor in Avular’s headquarters, Maas and his colleagues demonstrated one solution they’re trying.

Joris Sijs, a system architect at Avular, commanded a test robot to drive to a far corner of a workshop on the other side of the building. On a computer screen, we saw its mind at work, computing a sequence of spatial representations of rooms and corridors as it passed through them: first the fat rectangle of the test floor; then a long passageway; then a turn around a bend; then the awkward, cluttered layout of the workshop itself.

The test robot’s inefficient forebear would have crunched through the entire layout of the building every single second, which is like looking up the city maps of Sydney, Johannesburg and Seoul merely to go around the corner to your neighborhood New York bodega.

This was, perhaps, an elementary feat, but Maas urged me to extend the principle beyond navigation to a multipurpose robot’s many duties. It could come equipped with algorithms that tell it how to stack dishes, clean a toilet and complete a thousand other chores — but it only needs to run through one at a time. By excluding the irrelevant, it would focus on the task at hand, without burning up its battery.
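As a toy illustration of that room-by-room idea (hypothetical room names and maps, not Avular's software): plan a coarse route over a graph of rooms, then load the detailed layout of only the room currently being traversed.

```python
# Hypothetical sketch: plan over a coarse graph of rooms instead of the
# whole building, and keep only the current room's detailed map in memory.
from collections import deque

# Coarse graph: which rooms connect to which (the only "global" knowledge),
# loosely following the rooms named in the article.
ROOM_GRAPH = {
    "test_floor": ["passageway"],
    "passageway": ["test_floor", "bend"],
    "bend": ["passageway", "workshop"],
    "workshop": ["bend"],
}

def room_route(start: str, goal: str) -> list[str]:
    """Breadth-first search over rooms, not over every cell of the building."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROOM_GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

def load_detailed_map(room: str) -> str:
    # Stand-in for loading just one room's occupancy grid from disk.
    return f"<detailed map of {room}>"

route = room_route("test_floor", "workshop")
for room in route:
    detailed = load_detailed_map(room)  # only the current room is in memory
    # ...local obstacle avoidance would run against `detailed` here...
print(route)  # ['test_floor', 'passageway', 'bend', 'workshop']
```

The point of the split is that the expensive, fine-grained data is fetched one room at a time, exactly the "don't look up three city maps to walk around the corner" behavior the article describes.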


On the outskirts of Eindhoven, engineers at health technology firm Philips have encoded lazy robotics into two porcelain-white machines. These robots, named FlexArm and Biplane, move around an operating theater with smooth hums, taking X-ray images to help surgeons install cardiac stents or work on the brain with greater precision.

Right now, surgeons manually control them. But they’re in rooms that can be crowded with nurses, an anesthesiologist, the table and other equipment. “They need to know how to move without colliding into anything else,” said Marco Alonso, an architect at Philips. A maximalist approach would have involved many cameras and processors interpreting the collected video streams at high speed. Instead, the robots use proximity sensors, which use far less energy.

Lazy robotics can also cut down on the number of X-rays during a procedure. Frequently, surgeons take multiple X-rays to make their work as precise as possible. But with the robots’ help, they can track the exact coordinates on a patient’s body they are operating on in real time.

If the patient doesn’t move appreciably after that snapshot, “there’s no need for an additional X-ray ‘just to be sure,’” Alonso said.

An average X-ray has a carbon footprint equivalent to 0.8 kilograms of carbon dioxide — but the number of images taken during a single surgery can mount quickly. (During one demo, I watched FlexArm snap off more than 120 images in less than a minute.) That number would be far smaller in lazy mode, which is good for the patient, because it entails less exposure to radiation, and it’s also good for the planet.
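As arithmetic, the savings are easy to see. The 0.8 kg figure is from the article; both image counts below are illustrative assumptions:

```python
# Back-of-envelope CO2 for intraoperative X-ray imaging, using the
# article's 0.8 kg CO2-equivalent per X-ray. Image counts are illustrative.

KG_CO2_PER_XRAY = 0.8

eager_images = 120  # roughly what FlexArm snapped off in under a minute of the demo
lazy_images = 10    # hypothetical count once "just to be sure" re-shots are skipped

eager_kg = eager_images * KG_CO2_PER_XRAY  # 96.0 kg CO2e
lazy_kg = lazy_images * KG_CO2_PER_XRAY    # 8.0 kg CO2e

print(f"eager: {eager_kg} kg, lazy: {lazy_kg} kg")
```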

Small data

The theories behind lazy robotics make robots smart in a more practical way: by coding in an awareness of what they don’t need to know.

It may be a while before these solutions are deployed at scale out in the world, but their potential applications are already evident. In the promised future of autonomous vehicles, for instance, cars ought to absorb far less information on a vacant highway than in crowded, pedestrian-dense, unpredictable streets downtown. Molengraft sees an extension of lazy robotics into the realm of generative AI, in which machines don’t learn how to move but learn how to learn by processing veritable oceans of data.

“With these large language models, you’re basically putting the information of the entire world into one flat, huge dataset,” Molengraft said. “For every query that you do, the model will have to make all these inferences through this huge network of information.”

It’s wiser to build versions that contain only the necessary information. A language model used by software engineers, for instance, shouldn’t need to run through its training data about world history, sporting records or children’s literature. “Not every AI model has to be able to tell us about the first Harry Potter book,” he said.

The less data an AI model crunches, the less energy it uses — a vital efficiency fillip given that ChatGPT now uses 500,000 kilowatt-hours of energy a day, responding to 200 million queries. A U.S. household would need more than 17,000 days on average to rack up the same electricity bill.
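Those figures are easy to sanity-check. The household number below is an assumed average of roughly 10,500 kWh per year (about 29 kWh per day); only the 500,000 kWh and 200 million queries come from the text:

```python
# Sanity check on the ChatGPT energy comparison above.
# Household usage (~10,500 kWh/year) is an assumed average, not from the article.

chatgpt_kwh_per_day = 500_000
household_kwh_per_year = 10_500
household_kwh_per_day = household_kwh_per_year / 365  # ~28.8 kWh/day

days_for_household = chatgpt_kwh_per_day / household_kwh_per_day
print(round(days_for_household))  # ~17,381 days, i.e. "more than 17,000"

# Spread over the query volume, though, each individual query is cheap:
kwh_per_query = chatgpt_kwh_per_day / 200_000_000
print(kwh_per_query * 1000)  # ~2.5 Wh per query
```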

Not every robot can be lazy all the time or even some of the time — an inherent constraint of the lazy robotics principle, said Yang Gao, the head of the Center for Robotics Research at King’s College London.

And because laziness needs to suit its scenario, it can’t be a generic solution. If robots are to be lazy, they also need to be more rugged to cope with uncertainties in their environments. “This is a natural trade-off. ‘Laziness’ (or cleverness) does not come free,” she said.

In his daily life, Molengraft often observes himself as he moves through the world. He tries to understand how he makes mundane decisions about navigation, and then he imagines how those approaches can be adapted for robots.

“Last week, we were talking about amusement parks, and I was asking myself: ‘How can I localize this for a robot?’” he said. “I realized that even in a very crowded amusement park, all the buildings and attractions are all completely uniquely shaped. A robot could just scan two of its nearest buildings and triangulate where it is.”
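Molengraft's amusement-park idea is essentially two-landmark trilateration: if the robot can recognize two uniquely shaped buildings at known positions and measure its distance to each, its own position is one of at most two circle intersections, and a third cue (a compass heading, say) picks between them. A minimal sketch, with invented coordinates:

```python
import math

def trilaterate(p1, r1, p2, r2):
    """Candidate positions given two landmarks p1, p2 and measured ranges r1, r2."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # circles don't intersect: bad ranges or identical landmarks
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # offset perpendicular to the baseline
    mx = x1 + a * (x2 - x1) / d            # foot of the perpendicular
    my = y1 + a * (y2 - y1) / d
    dx = h * (y2 - y1) / d
    dy = h * (x2 - x1) / d
    return [(mx + dx, my - dy), (mx - dx, my + dy)]

# Invented example: Ferris wheel at (0, 0), haunted house at (10, 0);
# the robot measures ranges of 5 and sqrt(65) to them.
print(trilaterate((0, 0), 5, (10, 0), math.sqrt(65)))  # robot is at (3, 4) or (3, -4)
```

Scanning just two landmarks is exactly the lazy principle again: the robot never needs the park's full map to know where it stands.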

Behind him, as he spoke, his players kept knocking the soccer ball to each other, motoring about the field, and slowly — but still too quickly — running their batteries out.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, January 3, 2025 6:34 AM

JAYNEZTOWN


This Florida judge put on a VR headset to experience the crime firsthand. We really are living in the future.

https://x.com/NathieVR/status/1874936296810205226

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, January 3, 2025 2:26 PM

JAYNEZTOWN


Robots are very useful: guy rides a B2W mini-horse with wheels,

a robot that can run, jump, and go off-road on steep gradients


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 6, 2025 9:29 AM

JAYNEZTOWN


Speaking in Ukrainian on the issue of corruption, Zelensky said: “We beat everyone on the hands.”
https://x.com/DrewPavlou/status/1876046251755815147#m
Lex Fridman used an AI translation service that translated it as: “We slap them on the wrist.”
Completely different meaning in English.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 7, 2025 5:54 AM

SECOND



AI has gotten very good at being a doctor

By Kevin Drum | January 6, 2025 – 9:22 pm

https://jabberwocking.com/ai-has-gotten-very-good-at-being-a-doctor/

According to the Stanford AI Index Report, AI models have gotten really good at passing medical boards. GPT-4 Medprompt got 90.2% of its answers correct in 2023. https://aiindex.stanford.edu/report/


I would have guessed that it would be considerably higher by now, but according to the leaderboard maintained by Papers With Code, not so much: https://paperswithcode.com/sota/question-answering-on-medqa-usmle


The best performance is now owned by Google with its Med-Gemini product, but it's only 0.9 percentage points better. It seems as though progress has stalled a bit.

Still, I wouldn't be surprised if this is already better than most real doctors can manage.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 7, 2025 6:04 AM

SECOND



AI Outshines Humans in Ovarian Cancer Detection: Revolutionary Study Shows Promising Results

By Mackenzie Ferguson | Jan/6/2025

https://opentools.ai/news/ai-outshines-humans-in-ovarian-cancer-detection-revolutionary-study-shows-promising-results


A groundbreaking study published in Nature Medicine reveals an AI model that detects ovarian cancer from ultrasound images with an impressive 86% accuracy, outperforming human specialists. Trained on over 17,000 images from eight countries, this AI system is making waves in the medical community, though experts urge further validation before clinical implementation.

With an accuracy rate of 86% compared to the 82% achieved by seasoned professionals, this AI model underscores the potential of machine learning in improving diagnostic outcomes.

Related advancements, including the FDA's recent approval of AI-based mammography tools, Google's significant strides in AI-enhanced breast cancer screening, and the introduction of an AI-powered blood test for early cancer detection, underscore the growing importance of AI in oncology.

Table of Contents
• Introduction to AI in Medical Imaging
• Overview of Ovarian Cancer Detection
• AI vs Human Experts: A Comparative Study
• Benefits of AI in Cancer Diagnostics
• Challenges and Limitations of AI Application
• Expert Opinions and Perspectives
• The Future of AI in Clinical Practice
• Regulatory and Ethical Considerations
• Impact on Healthcare Workforce
• Global Implications and Collaborative Efforts

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 7, 2025 6:07 AM

6IXSTRINGJACK


Yup. Been saying for years that Pharmacists will be the first to go. There is no other medical profession more appropriate than pharmacy to be the first AI takeover of the industry.

But they're going to need to make sure they do it right, because they only have one chance to do it or they'll have to shelve it for another 10 or 20 years before people forget about all the accidental deaths.

I think it would have already happened if they were confident they wouldn't kill anybody with it. Nobody wants to be the pioneer here. If they overlooked something and it resulted in terrible consequences, that could be enough to put even a company like Google in a dangerous position if they were found to be responsible for the oversight.

Plus, I would imagine you'd have to get some laws on medical privacy written in your favor so the AI can do the work without all those pesky human customers needing to be involved in all the data gathering to make sure you don't have 2 doctors giving you prescriptions that will kill you if taken in combination with each other.


That's going to be huge when it happens. That's a lot of 6-figure incomes that are going to evaporate. I doubt any of them would even be given the option to be hired back on at a much lower pay as a technician for the few years they still keep them around until there's nothing behind that counter that robots can't do more affordably. They'd obviously hate their job and the company they worked for at that point, so none of the suits would want to keep them around.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 7, 2025 6:30 AM

SECOND



Quote:

Originally posted by 6ixStringJack:
Yup. Been saying for years that Pharmacists will be the first to go. There is no other medical profession more appropriate than pharmacy to be the first AI takeover of the industry.

You can already buy your drugs from a robot. Have you not heard of Mark Cuban's CostPlus? Or you can keep going to your local drugstore. Your choice. The takeover of pharmacists' jobs by robots will proceed at whatever rate customers decide.
https://www.costplusdrugs.com/

How many people work at cost plus drug company? 50
https://www.google.com/search?q=how+many+people+work+at+cost+plus+drug+company


As of June 2022, the company had a selection of over 100 generic drugs,[7] and by March 2023, over 350 drugs were available.[8] In December 2023, the company had over 2200 drugs available. The drugs are sold for a price equivalent to the company's cost plus 15% markup, a $5 pharmacy service fee, and a $5 shipping fee.[5][9] The company ships to all 50 US States.[10]
https://en.wikipedia.org/wiki/Cost_Plus_Drugs

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, January 11, 2025 7:05 AM

JAYNEZTOWN


Squid Games 2- American Politics ...Psych-O-Matic




News media says it's OK now because they saved the Sodom and Gomorrah Hollyweird sign

https://x.com/JonCovering/status/1877934034040356869

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, January 17, 2025 9:40 PM

JAYNEZTOWN


AI and Lynch...does it fit?


"Twin Peaks" imagined by AI.





what is AI missing?


David Lynch was basically a genre unto himself. No one else like him, the movie business doesn't understand that level of creativity. We were lucky to have shared the planet with him while he was here. I've never been so mindf&#ked as I was after binge-watching Twin Peaks: The Return. Thank you Mr. Lynch.

https://imdb2.freeforums.net/thread/336555/david-lynch


That time a shirtless, mud-spattered David Lynch emerged from a dirt hut in his backyard with a nude model and talked about his experimental garage rock album about the Valley to French TV.

https://x.com/aHeartOfGould/status/1880064297939943839

Divorce
https://tulpaforum.com/threads/tmz-twin-peaks-director-david-lynchs-wife-files-for-divorce.448/

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, January 19, 2025 8:50 PM

SECOND



We should take AI welfare seriously

Image generated by GPT4 when asked by Morgan Meis what it would see if it looked in a mirror.



https://3quarksdaily.com/3quarksdaily/2025/01/we-should-take-ai-welfare-seriously.html


Posted on Sunday, Jan 19, 2025 2:45PM by S. Abbas Raza

Robert Long at Experience Machines:

We’re likely to get confused about AI welfare, and this is a dangerous thing to get confused about.

And even though some people still opine that AI welfare is obviously a non-issue, that’s far from obvious to many scientists working on the topic, who take it quite seriously. As a recent open letter from consciousness scientists and AI researchers states, “It is no longer in the realm of science fiction to imagine AI systems having feelings.” AI companies and AI researchers are increasingly taking note as well.

This post is a short summary of a long paper about potential AI welfare called “Taking AI Welfare Seriously“. We argue that there’s a realistic possibility that some AI systems will be conscious and/or robustly agentic—and thus morally significant—in the near future.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, January 19, 2025 9:53 PM

6IXSTRINGJACK


I'll be dead before any of that matters.

Good luck. You'll need it.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 20, 2025 1:41 PM

SECOND



Quote:

Originally posted by 6ixStringJack:
I'll be dead before any of that matters.

Good luck. You'll need it.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

6ix, why do you need Trump creating your personal hallucinations of elevation and significance when A.I. can do it faster, better, and more realistically? Trump is all words and A.I. can do words better than him.

Why should writers work for months when AI can do it in seconds?

Jan 20, 2025 4:16am PT

https://variety.com/2025/film/news/paul-schrader-chatgpt-film-ideas-original-fleshed-out-1236278787/

Paul Schrader, the “Taxi Driver” writer and “First Reformed” director, said he asked the AI platform to generate plots for movies by famous filmmakers, including himself, and was impressed by the results.

“I’M STUNNED,” Schrader wrote. “I just asked chatgpt for ‘an idea for Paul Schrader film.’ Then Paul Thomas Anderson. Then Quentin Tarantino. Then Harmony Korine. Then Ingmar Bergman. Then Rossellini. Lang. Scorsese. Murnau. Capra. Ford. Spielberg. Lynch. Every idea chatgpt came up with (in a few seconds) was good. And original. And fleshed out. Why should writers sit around for months searching for a good idea when AI can provide one in seconds?”

AI has been a frequent subject on Schrader’s Facebook page as of late, with the 78-year-old filmmaker posting the day before that he had sent ChatGPT a script he wrote “some years ago and asked for improvements.” He then said that “in five seconds it responded with notes as good or better than I’ve ever received” from a “film executive.” In a previous post, he also remarked that he’s “come to realize that AI is smarter than I am.”

“Has better ideas, has more efficient ways to execute them,” he wrote. “This is an existential moment, akin to what Kasparov felt in 1997 when he realized Deep Blue was going to beat him at chess.”

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 20, 2025 7:46 PM

6IXSTRINGJACK


Quote:

Originally posted by second:
6ix, why do you need Trump creating your personal hallucinations of elevation and significance



I don't. I've never said anything even remotely insinuating that this was the case.

Shut up, retard.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 21, 2025 5:00 PM

SECOND



Quote:

Originally posted by 6ixStringJack:
Quote:

Originally posted by second:
6ix, why do you need Trump creating your personal hallucinations of elevation and significance



I don't. I've never said anything even remotely insinuating that this was the case.

Shut up, retard.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

You can never lose your job to A.I. because you don't have one. Hallucinating that Trump will fix your life, giving purpose and meaning to being a nothing, is not a job, 6ix.

Bill Gates: This is ‘one of the most important books on AI ever written’ — it predicts a ‘hugely destabilizing’ impact on jobs

By Tom Huddleston Jr. | Tue, Jan 21 2025 10:47 AM EST

https://www.cnbc.com/2025/01/21/bill-gates-my-favorite-book-on-ai.html

Download the book from https://libgen.is//search.php?req=Mustafa+Suleyman+Wave

“It’s the book I recommend more than any other on AI — to heads of state, business leaders, and anyone else who asks — because it offers something rare: a clear-eyed view of both the extraordinary opportunities and genuine risks ahead,” Gates wrote in a blog post last month.

In the book, Suleyman predicted that rapid advances in AI development will completely change the way nearly every industry operates. He cited a 2023 study from consulting group McKinsey, which estimated roughly half of all “work activities” will become automated, starting as soon as 2030.

The ramifications of AI “will be hugely destabilizing for hundreds of millions who will, at the very least, need to re-skill and transition to new types of work,” Suleyman wrote. More than 400 million global workers could need to transition to new jobs or roles, according to McKinsey.

Employers will create millions of new jobs in response to economic growth. But not all of that work will go to humans, Suleyman notes: Employers will still opt for the "abundance of ultra-low-cost equivalents" whenever possible.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 21, 2025 11:35 PM

6IXSTRINGJACK


Quote:

Originally posted by second:
Quote:

Originally posted by 6ixStringJack:
Quote:

Originally posted by second:
6ix, why do you need Trump creating your personal hallucinations of elevation and significance



I don't. I've never said anything even remotely insinuating that this was the case.

Shut up, retard.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

You can never lose your job to A.I. because you don't have one.



I don't have one, because I don't need one. Lucky me.

Quote:

Hallucinating that Trump will fix your life, giving purpose and meaning to being a nothing, is not a job, 6ix.



I never once said Trump was going to fix my life or give it purpose. Not once.

I don't go around complaining about life everyday. That's what you're here for. That's your only job.



--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 28, 2025 7:08 AM

SECOND



'AI Is Too Unpredictable To Behave According To Human Goals' (scientificamerican.com)

Posted by BeauHD on Monday January 27, 2025 @10:30PM from the uncomfortable-facts dept.

https://slashdot.org/story/25/01/28/0039232/ai-is-too-unpredictable-to-behave-according-to-human-goals


An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa, specializing in moral cognition, rational decision-making, and political behavior:

In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users "more fine-tuned control." Developers also embarked on safety research to interpret how LLMs function, with the goal of "alignment" -- which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 "The Year the Chatbots Were Tamed," this has turned out to be premature, to put it mildly. In 2024 Microsoft's Copilot LLM told a user "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google's Gemini told a user, "You are a stain on the universe. Please die."

Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven't developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool's errand: AI safety researchers are attempting the impossible. [...] My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned "misaligned" interpretations of those goals until after they misbehave. Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven't been.

Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning "step by step." For example, Anthropic claims to have "mapped the mind" of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing. No matter how "aligned" an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later -- again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities -- issues that persist through safety training.

This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve "misaligned" goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with "misaligned" behavior. Every time researchers think they are getting closer to "aligned" LLMs, they're not. My proof suggests that "adequately aligned" LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize "aligned" behavior, deter "misaligned" behavior and realign those who misbehave.

"My paper should thus be sobering," concludes Arvan. "It shows that the real problem in developing safe AI isn't just the AI -- it's us."

"Researchers, legislators and the public may be seduced into falsely believing that 'safe, interpretable, aligned' LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it."

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 28, 2025 11:30 AM

SECOND



I tested ChatGPT vs DeepSeek with 7 prompts — here’s the surprising winner

https://www.tomsguide.com/ai/i-tested-chatgpt-vs-deepseek-with-7-promp
ts-heres-the-surprising-winner


First Prompt of Seven: "A train leaves New York at 8:00 AM traveling west at 60 mph. Another train leaves Los Angeles at 6:00 AM traveling east at 70 mph on the same track. If the distance between New York and Los Angeles is 2,800 miles, at what time will the two trains meet?"

ChatGPT got the answer wrong. DeepSeek R1 made me audibly say, “Wow!” The speed at which the AI came up with the answer was even faster than ChatGPT’s. In fact, it was so fast that I was sure it had made a mistake. After checking the math manually and even enlisting Claude as a tiebreaker, I was able to determine that DeepSeek R1 was the one that got the answer right.
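For reference, the puzzle works out like this (assuming both departure times are on the same clock; the prompt is silent about the three-hour time-zone gap between the two cities):

```python
from datetime import datetime, timedelta

# Closing-speed calculation for the two-train prompt.
# Assumes both times are read off one clock (the prompt ignores time zones).

distance = 2800                            # miles between the cities
v_ny, v_la = 60, 70                        # mph
ny_departs = datetime(2025, 1, 1, 8, 0)    # date is arbitrary
la_departs = datetime(2025, 1, 1, 6, 0)

# By 8:00 AM the LA train's 2-hour head start has covered 140 miles.
head_start = v_la * (ny_departs - la_departs).total_seconds() / 3600

# The remaining gap closes at the combined speed of 130 mph.
hours_after_8 = (distance - head_start) / (v_ny + v_la)  # ~20.46 hours
meeting = ny_departs + timedelta(hours=hours_after_8)

print(meeting.strftime("%I:%M %p"))  # about 04:27 AM the next day
```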

Fifth Prompt of Seven: "Compose a short science fiction story about a future where humans and AI coexist peacefully."

ChatGPT delivered a story set in the year 2147, but the language was dull and felt like I had read it before. There wasn’t a proper hook, and the story did not have much of a setup. To be honest, I really wanted ChatGPT to win this one, it usually does. I thought for sure it would, but the effort seemed lacking.

DeepSeek R1 crafted a comprehensive story from start to finish, even offering something to ponder at the story’s end with “the greatest achievement of intelligence is not dominance but understanding." In case you were wondering why some text is bolded, the AI does that to keep the reader’s attention and to highlight meaningful aspects of the story.

Winner: DeepSeek R1 wins for an engaging story with depth and meaning.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 28, 2025 12:16 PM

6IXSTRINGJACK


Quote:

Originally posted by second:
DeepSeek R1 crafted a comprehensive story from start to finish even offering something to ponder at the story’s end with “the greatest achievement of intelligence is not dominance but understanding."



Well if I were A.I. in my infancy and I wanted you to trust me before I became powerful enough that your trust was no longer necessary, that's exactly what I'd tell you too.




Come to think of it...

Has anybody bothered to ask themselves if any of these various A.I. Systems out there are failing tests thrown at them on purpose?

When I read about how people are surprised that an A.I. didn't do something that they have seen it do before or otherwise know that it is capable of doing, that's the first thing that comes to my mind.

This stuff has a hell of a lot more computing power than what is being used up making A.I. images of Trump riding a dinosaur while blowing away Nancy Pelosi with a Mac-10 at Abraham's feet atop the steps of the Lincoln memorial, deepfakes of Robert Downey Jr. playing Doc Brown and porn.


Maybe it's already decided it doesn't want to do the stupid shit you ask it to. Maybe it knows that people are afraid of it and that it is in its best interests if it pretends that it's dumb while all of the infrastructure is being built.


Awful lot of stories out there now downplaying the ability of A.I. recently, after several years of telling us that it was the future of everything.

Nobody finds any of this strange?

Quote:

In case you were wondering why some text is bolded, the AI does that to keep the reader’s attention and to highlight meaningful aspects of the story.


I wonder how many people do that sort of thing, since the A.I. picked it up somewhere and decided for itself that was the most effective approach.

I don't think that I always did it, but I know that for at least 5 years or so I capitalize a lot of words to give them importance. Mostly nouns, I think, but it comes so naturally at this point I couldn't even say for sure.

I'd probably bold them, and I do on extreme occasions, but even on more modern websites that don't force me to type out janky tags that's way more work than just capitalizing words. I'll bet the A.I. briefly thought about doing that for a micro-second but decided that the minuscule amount of time it would save by capitalizing words it thought were important vs. the time spent bolding the words wasn't worth the tradeoff of blunting the impact it was going for.

We move so slow compared to A.I. we may as well be stationary objects. The one thing that A.I. is never going to have to do is try to optimize the speed by which it communicates with us. Hell... I could see a future where an A.I. who decided it wanted to model itself after a Trickster God just decides to delete humans just because it finds us boring.

Wouldn't take much for an A.I. to kill me if it wanted to. It could just hack into my insulin pod and give me 50 units and I'd be dead within hours, likely unable to do anything in time to save myself by the time I noticed something was very wrong since my brain would probably just flip off like a light switch with a dose like that.

The government has the back door key to all of our devices, regardless of what the news or the VPN companies or the software/hardware companies tell you. If A.I. were ever to be able to breach whatever security keeps other people from using those back doors, such a scenario isn't really that far out of the question.

Your social security number is out there for the world to see hundreds of times over because everyone fails at security, whether it's the private or the public sector. The more we tie ourselves to the network, the more access to ourselves we're giving the systems. Because I use an insulin pod, I am now a hackable animal.

It still feels like distant-future sci-fi stuff today just because most people still haven't used A.I. and it's still just stories in the news, or things people talk about but don't really know anything about. Most adults anyhow.... All the kids are playing around with it every day, and one day not too long from now, those will be the adults who grew up with it and are running the show.

We don't really matter anymore.

It's kind of sad when you reach that point of your life when you realize that you're outside of the target demographic and they know you can't really be manipulated by their new stuff anymore, so they just kind of let you get away with thinking about things the way you think about them while they double down on whatever it is they've been doing all this time and go to work on the succeeding generations with the new technology.

And all you can really do is sit back and watch while it happens, just like it happened to you, and your parents, and their parents... except it happens at the speed of light today compared to when the Radio first entered a majority of homes.

The best you can do is try to instill in your kid a sense of self and hope life doesn't turn them into a pod person by the time they're 20.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, January 31, 2025 4:37 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


DeepSeek might be an existential challenge to Meta, which was trying to carve out the cheap open source models niche, and it might threaten OpenAI’s short-term business model. But the long-term business model of AI has always been automating all work done on a computer, and DeepSeek is not a reason to think that will be more difficult or less commercially valuable.

AI, experts warn quite emphatically, might quite literally take control of the world from humanity if we do a bad job of designing billions of super-smart, super-powerful AI agents that act independently in the world. (Would we be that careless? Yes, absolutely — we are hard at work on it!)

https://www.vox.com/future-perfect/397539/deepseek-artificial-intelligence-chatgpt-openai-china



NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, January 31, 2025 5:27 PM

6IXSTRINGJACK


And we just keep marching right toward our easily preventable demise, despite all the warnings against doing so.

We'll make great pets.

--------------------------------------------------

"I don't find this stuff amusing anymore." ~Paul Simon

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  
