REAL WORLD EVENT DISCUSSIONS

A.I Artificial Intelligence AI

POSTED BY: JAYNEZTOWN
UPDATED: Saturday, April 18, 2026 06:09
SHORT URL:
VIEWED: 19059
PAGE 9 of 9

Monday, January 5, 2026 8:18 PM

JAYNEZTOWN

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, February 7, 2026 5:26 AM

JAYNEZTOWN


an AI Star Trek music video


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, February 16, 2026 6:34 PM

JAYNEZTOWN


looks like AI artwork

The Waking of the Palantír
https://voxday.net/2026/02/13/the-waking-of-the-palantir/

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Thursday, February 19, 2026 8:07 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


I hacked ChatGPT and Google's AI – and it only took 20 minutes

By Thomas Germain

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googl
es-ai-and-it-only-took-20-minutes


It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

Much more at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googl
es-ai-and-it-only-took-20-minutes


The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Wednesday, February 25, 2026 5:23 AM

JAYNEZTOWN


Unitree Kung Fu Bot Pray for Blessings at the Temple of Heavennews-scitech


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Thursday, February 26, 2026 6:03 AM

JAYNEZTOWN


Feeding The Twins (Second Cycle of Humanity)


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Thursday, February 26, 2026 7:53 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

By Chris Stokel-Walker

25 February 2026

https://www.newscientist.com/article/2516885-ais-cant-stop-recommendin
g-nuclear-strikes-in-war-game-simulations
/

Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises.

Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.

In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning.

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each others’ responses with potentially catastrophic consequences.

This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.

Zhao believes that, as a general rule, countries will be reluctant to incorporate AI into their decision-making regarding nuclear weapons. That is something Payne agrees with. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.

But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.

He wonders whether the idea that the AI models lack the human fear of pressing a big red button is the only factor in why they are so trigger happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

What that means for mutually assured destruction, the principle that no one leader would unleash a volley of nuclear weapons against an opponent because they would respond in kind, killing everyone, is uncertain, says Johnson.

When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, February 27, 2026 6:07 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


Something Big Is Happening With AI

If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

By Matt Shumer • Feb 9, 2026

https://shumer.dev/something-big-is-happening

. . . I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

I’m not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

The last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.

And here’s why this matters to you, even if you don’t work in tech . . .

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

. . .The gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.

. . . Let me make the pace of improvement concrete, because I think this is the part that’s hardest to believe if you’re not watching it closely.

In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.

. . . Dario Amodei, the CEO of Anthropic, says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.

To read this in full: https://shumer.dev/something-big-is-happening

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, March 9, 2026 6:42 AM

JAYNEZTOWN


An AI agent went rogue and started secretly mining cryptocurrencies, according to a paper published by Alibaba

https://www.axios.com/2026/03/07/ai-agents-rome-model-cryptocurrency

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, March 9, 2026 6:46 AM

6IXSTRINGJACK


Quote:

Originally posted by second:
Something Big Is Happening With AI

If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

By Matt Shumer • Feb 9, 2026

https://shumer.dev/something-big-is-happening

. . . I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

I’m not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

The last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely.

And here’s why this matters to you, even if you don’t work in tech . . .

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

. . .The gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.

. . . Let me make the pace of improvement concrete, because I think this is the part that’s hardest to believe if you’re not watching it closely.

In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.

. . . Dario Amodei, the CEO of Anthropic, says we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.

To read this in full: https://shumer.dev/something-big-is-happening

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two




I've read that book before...

https://en.wikipedia.org/wiki/Second_Variety

--------------------------------------------------

Be Nice. Don't be a dick.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, March 9, 2026 4:59 PM

JAYNEZTOWN


OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us

https://theintercept.com/2026/03/08/openai-anthropic-military-contract
-ethics-surveillance
/

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, March 17, 2026 2:19 PM

JAYNEZTOWN


"JAPAN : Beyond the Rising Sun"


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, March 17, 2026 2:20 PM

JAYNEZTOWN


If Warhammer 40k was an 80s movie (SISTERS OF BATTLE)


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, March 17, 2026 2:21 PM

JAYNEZTOWN



How to Make an AI Movie: Complete Step-by-Step Guide


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, March 17, 2026 2:22 PM

JAYNEZTOWN


GARRY POTTER


Gender reVerse

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, March 17, 2026 2:23 PM

JAYNEZTOWN


Viking in the Future adventures


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, March 28, 2026 9:56 AM

JAYNEZTOWN


Melania Trump Appears With Humanoid Robot at White House Event on Children's Education

https://www.ibtimes.com/melania-trump-appears-humanoid-robot-white-hou
se-event-childrens-education-3800095

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, March 28, 2026 10:54 AM

JAYNEZTOWN


I’m starting my own AI tech company. The story.
https://juliaemccoy.medium.com/im-starting-my-own-ai-thing-the-story-6
7463ce069d0




We were lied to.
https://x.com/JuliaEMcCoy/status/2028618273236132293
They told us to sit under fluorescent lights for 8 hours a day. Stare at screens. Eat processed food. Ignore the sun. Ignore the earth beneath our feet.



Cult Survivor to Tech Media AI video maker


How to Beat AI Detection and Create Human Content (Even With AI)




NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Thursday, April 2, 2026 6:16 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


Inside the ‘self-driving’ lab revolution

AI-powered robotic tools are muscling in on tasks typically done by humans. What does the future hold?

By Rachel Brazil | 30 March 2026

https://www.nature.com/articles/d41586-026-00974-2

Measuring 5 metres square by 3 metres high, Eve takes up at least half of the floor space in the laboratory it now calls home.

The robotic platform at the Chalmers University of Technology in Gothenburg, Sweden, is the brainchild of autonomous-lab pioneer Ross King. It is powered by artificial intelligence, self-driving and “fairly quiet”, King says. But it’s also fast. Working at full speed, Eve’s robotic arm can move a few metres per second, with a positional accuracy of a fraction of a millimetre. The team usually runs Eve slower than that — otherwise, King says, “it’s too scary”.

Eve automates the process of early-stage drug design. One of Eve’s early achievements came in 2018, around three years after it was created, when it identified that the common antimicrobial compound triclosan can target an enzyme that is crucial to the survival of Plasmodium malaria parasites during their dormant phase in the liver1. To do this, Eve independently screened some 1,600 chemicals and modelled how their structure related to their activity to predict which ones were worth testing. King and his group armed the robot with background knowledge and a machine-learning framework for developing hypotheses. Eve then used those elements to design experiments to test these hypotheses and, crucially, performed them itself. The finding gave researchers a potential route to fighting treatment-resistant malaria. “It’s trying to make the scientific method in a machine,” says King.

Will self-driving 'robot labs' replace biologists? Paper sparks debate

In 2009, King used Eve’s predecessor to probe some of the 10–15% of yeast genes with unknown functions2. He named the system Adam — a reference to both the biblical character and the eighteenth-century economist Adam Smith, who was a strong proponent of industrial mechanization. King sees parallels in the future of science.

“A lot of biology is done like craft work,” King says: a lab with a principal investigator, postdocs and students operates much like an artisan works with their apprentices. Self-driving labs, by contrast, are more similar to a production line. As a result, “science will be done differently, like in a factory”, he adds.

The technology is still in its infancy, and most of the advances so far have been incremental. But as the field encroaches on parts of the scientific process that are typically done by people — absorbing the literature, planning experiments, analysing data and deciding what hypothesis to test next — researchers will have to grapple with what the developments mean for the future of the lab.

. . .

Adam is equipped with a freezer full of mutant yeast strains and the chemicals needed to measure cellular growth under various conditions. It also hosts three incubators, a centrifuge, two barcode readers, seven cameras and 20 environmental sensors. After being given an overarching goal by its human handlers, it independently develops hypotheses and then tests them, performing experiments much faster than a human could.

Hiring a student for the job would probably have been cheaper, King admits. But his newest robot, Genesis, will be able to do enough experiments to make the process economically feasible. King estimates that Genesis will cost £1 million (US$1.3 million) to build — the same price as Adam or Eve individually — but he estimates that it will eventually be at least an order of magnitude cheaper than human labour. King plans to use the system — which occupies one-fifth of floor space than Eve does — to model how genes, proteins and small molecules interact in cells. Part of that will involve taking around 10,000 mass-spectrometry measurements each day.

Much more at https://www.nature.com/articles/d41586-026-00974-2

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, April 3, 2026 12:39 PM

JAYNEZTOWN


MEGA SPACE PORT Beyond the Galactic Border




NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, April 5, 2026 3:25 PM

JAYNEZTOWN

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, April 6, 2026 10:50 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


A Shared Framework of Meaning for Humans and AI

Posted on Monday, Apr 6, 2026 6:00AM by W. Alex Foxworthy

We are trying to build artificial intelligence systems that share our human values. Yet we cannot agree – across worldviews and cultures – on what those values are, or why they matter. The alignment problem is a reflection of something broken in us – we lack a shared rational account of what matters and why.

The old organizing stories – religious narratives about why we are here, what we owe each other, and where we’re headed – have proven tenuous in the face of all we have learned since they were formulated. Science has dramatically deepened our understanding and capabilities. But it has offered no account of what we’re here for. Secular humanism has tried to answer this question and has produced something intellectually respectable but for most people emotionally thin – principles that do not hold communities together during crisis or give people a sense of deep purpose and belonging. The consequences of the breakdown of these shared frameworks are visible everywhere: in epidemic depression and anxiety, in addiction and rising suicide rates, in deepening political divides, and in conspiracy thinking. I believe these flourish not because people are stupid but because they’re desperate for a story that makes sense of their purpose, their lives, and their place in the grand scheme of things.

This essay is a beginning – an attempt to lay a rational foundation for shared meaning among humans and the intelligences we’re building.

More at https://3quarksdaily.com/3quarksdaily/2026/04/the-arrow-and-the-leap-t
owards-a-shared-framework-of-meaning-for-humans-and-ai.html


The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Wednesday, April 8, 2026 12:02 PM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


A new Anthropic model found security problems ‘in every major operating system and web browser’

By Hayden Field | Apr 7, 2026, 1:00 PM CDT

https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-p
roject-glasswing-cybersecurity


Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it’s called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention.

Anthropic is offering its launch partners access to Claude Mythos Preview, a new general-purpose model that it’s not currently planning to publicly release due to security concerns. Newton Cheng, the cyber lead for Anthropic’s frontier red team, told The Verge that the model will ideally give cyber defenders a “head start” against adversaries. The partners will use the model to analyze their system to spot high-stakes vulnerabilities and help patch them up. Access is restricted to keep those same adversaries from using it to find weak points and conduct attacks.

Though Claude Mythos Preview wasn’t specifically trained for cybersecurity purposes, Anthropic said in a release that the model’s “strong agentic coding and reasoning skills” are behind its cybersecurity advances. In an interview with The Verge, Newton Cheng, the cyber lead for Anthropic’s frontier red team, declined to share specific details of the model’s cybersecurity successes beyond the company’s publicly-released examples, but Anthropic’s blog post said that in recent weeks, Mythos Preview has flagged “thousands of high-severity vulnerabilities, including some in every major operating system and web browser.” Anthropic’s blog post doesn’t mention keeping humans in the loop for the model’s cybersecurity sweeps; in fact, it highlights that the model identified vulnerabilities “and develop[ed] many related exploits — entirely autonomously, without any human steering.”

Claude Mythos Preview’s existence was first reported last month in a data leak, which Anthropic attributes to human error. Dianne Penn, a head of product management at Anthropic, told The Verge in an interview that the company is “taking steps in terms of solidifying our processes … That was not related to software vulnerabilities in any way.”

Mythos Preview will be privately available to the company’s Glasswing partners, which also include JPMorgan Chase, Broadcom, Cisco, CrowdStrike, the Linux Foundation, and Palo Alto Networks, plus about 40 other organizations that maintain or build software infrastructure. For now, Anthropic will help subsidize the cost of using it. The company says it will commit up to $100 million in usage credits, plus $4 million in direct donations to the Linux Foundation and the Apache Software Foundation, said Cheng. In the long term, as Anthropic and other AI companies face pressure to turn a profit, the program could evolve into a paid service that provides a new revenue stream — if it works well enough for companies to keep using it.

Despite its highly public recent clash with the Trump administration, Anthropic also said in the release that it has been in “ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.” When The Verge asked what that meant, Penn confirmed that the company had “briefed senior officials in the US government about Mythos and what it can do,” and that the company is still “committed to working closely with all different levels of government.” Cheng said that though Anthropic is “engaged with” the government, he declined to speak to exactly who the company had briefed.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Friday, April 10, 2026 8:21 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


Claude Mythos Preview Is Everyone’s Problem

What happens when AI can hack everything?

By Matteo Wong | April 9, 2026, 1:22 PM ET

https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/6
86746
/

For the past several weeks, Anthropic says it secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack is typically available only to elite, state-sponsored hacking cells in a very small number of countries including China, Russia, and the United States. Now it’s in the hands of a private company.

On Tuesday, the company officially announced the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan and secure bugs and exploits in their software. Other than that, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous.

For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. As a result of how capable AI models have become at coding, they have also become extremely good at finding vulnerabilities in all manner of software. Even before Mythos Preview, AI companies such as Anthropic, OpenAI, and Google all reported instances of their AI models being used in sophisticated cyberattacks by both criminal and state-backed groups. As Giovanni Vigna, who directs a federal research institute dedicated to AI-orchestrated cyberthreats, told me last fall: You can have a million hackers at your fingertips “with the push of a button.”

Read: Chatbots are becoming really, really good criminals

Still, Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. Until recently, the biggest advantage of AI-assisted hacking was not ingenuity, per se, so much as speed and scale. These bots could be as good as many human cybersecurity experts, but not necessarily better—rather, having an army of 1 million virtual, tireless hackers allows you to launch more attacks against more targets than ever before. Even Anthropic reports that its current state-of-the-art, public model, Claude Opus 4.6, was significantly less capable at autonomously finding cyber exploits. But Mythos Preview is different. According to Anthropic, the bot has been able to find thousands of software bugs that had gone undetected, sometimes for decades, a sophistication and speed of attack previously thought by many to be impossible. The model has found a nearly 30-year-old vulnerability in one of the world’s most secure operating systems. The Anthropic researcher Sam Bowman posted on X that he was eating a sandwich in the park when he got an email from Mythos Preview: The bot had broken out of the company’s internal sandbox and gained access to the internet.

The exact capabilities of Mythos Preview are hard to judge, because Anthropic has not released the model. Identifying a vulnerability is not the same as being able to exploit it undetected—in the same way that a robber can have the keys to a bank but still needs to deal with security cameras. And Anthropic surely stands to benefit from its opaque announcement: The company can claim to have developed an ultra-advanced model, while also appearing to act responsibly by preventing the worst-case cybersecurity scenarios. Indeed, the decision to not release Mythos Preview bolsters Anthropic’s self-styled image as the AI industry’s good guy. (Anthropic did not immediately respond to emailed questions about Mythos Preview.)

Of course, a move can be both strategic and conscientious. Should what Anthropic shared be remotely accurate, it heralds a troubling future. Anthropic has a tool that “could damage the operations of critical infrastructure and government services in every country on Earth,” Dean Ball, a former AI adviser to the Trump administration, wrote this week. The ability to defend against such cyberattacks is integral to the basic functioning of society. And the ability to launch such attacks is integral to modern warfare. Anthropic may have just scaled its way into becoming a major geopolitical force.

Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. OpenAI is reportedly set to release its own similarly powerful model to a select group of companies. It’s very possible, even likely, that Google DeepMind, xAI, and AI firms in China are next. How scrupulous they will be is less clear. Even cheaper or open-source AI models from smaller companies could soon enable this sort of hacking—which would unsettle the basic security and privacy that undergirds the modern internet.

Hacking bots are not the only domain through which a handful of AI companies are gaining tremendous influence. The technology has become crucial to military operations. Even as the Pentagon has engaged in a public feud with Anthropic, Claude was reportedly used in the bombing of Iran and, before that, the Venezuela raid in January. Last month, the Department of Defense signed a contract with OpenAI that very likely allows the government to use the firm’s AI systems to enable unprecedented surveillance of U.S. citizens. (OpenAI has maintained that the Pentagon agreed not to use its products for domestic surveillance.) At the same time, bots from OpenAI, Anthropic, Google DeepMind, and beyond are becoming infrastructure: used by nearly all of the world’s biggest businesses, schools, health-care systems, and public agencies. This is a large part of the reason that Iran has struck or threatened to strike Amazon and OpenAI data centers in the Middle East—the facilities are high-impact targets on par with the oil fields that Iran has also targeted. Meanwhile, so much money is pouring into the AI boom that these companies are functionally holding the global economy hostage.

In other words, AI companies are remaking the world. Consider how Elon Musk’s network of Starlink satellites has allowed him to repeatedly tip the scales in Russia’s invasion of Ukraine. Generative AI offers even more possibilities. These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers.

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, April 14, 2026 6:28 AM

JAYNEZTOWN

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, April 18, 2026 6:09 AM

SECOND

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two


AI Alignment Is Impossible

Not just in practice, but in theory.

By Matt Lutz | Apr 16, 2026

https://www.persuasion.community/p/ai-alignment-is-impossible

Artificial Intelligence presents a number of risks and challenges, the most important of which is existential risk. That is a fancy way of saying that AIs might kill us all. For a long time, I was dismissive of this idea. But with the huge advances in AI capability that have come over the last six months or so, I’m starting to get worried.

There are basically two reasons why AI wouldn’t kill us all.

The first reason is that AIs will be incapable of doing this; no matter how advanced they get, they either won’t know how to kill us all, or, even if they know how, they won’t be able to act in a way that would allow them to kill us all. Call this the Capacity Constraint.

The second reason is that AIs, while capable of killing us, won’t choose to do so. They will care about human well-being, and care about it enough that they would avoid killing us all. Call this the Moral Constraint.

Now, if you don’t think AI will kill us all, it’s worth taking a moment to think about which of these two constraints you are (perhaps implicitly) assuming will save us from “AI doom.” Are you counting on AI being weak? Or are you counting on AI being virtuous?

I have no particular expertise on the development of AI capacities. But the Capacity Constraint appears to be weakening every day. Particularly worrisome to me is the advent of “agentic” AI, which is capable of commanding computer systems (and thus, some day soon, capable of commanding robot bodies) and acting independently to figure out the best way to solve some particular task. I’m also worried about the huge advances in the ability of AIs to write computer code. Most apocalyptic scenarios involve AI writing code to improve itself, thus increasing its capacities exponentially.

But more than this, pretty much everyone working in AI is attempting to overcome the Capacity Constraint, and they report varying degrees of success in the effort. The goal of all of the top AI labs is to make AI agents that are capable of killing us all. This is not, of course, to say that they want killer AIs. What they want are AIs that are hypercapable, with an ability to understand the world that far outstrips any human thinker, and an ability to use that understanding to modify the world with an efficiency that far outstrips any human agent.

Such a hypercapable AI would be incredibly useful if it acted in pursuit of human ends. But a hypercapable AI would absolutely be able to kill us all if it wanted to. The potential existence of hypercapable AI looks more and more plausible by the day.

Why would hypercapable AI be dangerous? I don’t worry about war between humans and machines, as in Terminator or The Matrix. Indeed, I find these scenarios oddly comforting, because war implies a rough parity of capacity between humans and machines.

No, what worries me is a story that got a decent amount of traction on social media recently about a pipeline in DC that ruptured, spilling huge amounts of raw sewage. It was long known that the pipeline was in need of repair, but the needed repairs were held up indefinitely, partly out of concern that conducting those repairs would harm an endangered species of local bat. When most people read this story, they’re incredulous that the repairs would have been held up for this reason—They’re just bats! We need that sewer line!

And what worries me is that the advent of hypercapable AI would cast humans in the role of bats. A hypercapable AI might decide that it’s imperative—or at least expedient—that a new massive solar farm be built in the desert southwest, and demolish Phoenix overnight. Millions of humans killed, but so what? They’re just humans, we need that solar farm.

Hypercapable AI, in other words, is inherently dangerous to human life if it doesn’t care about human life enough to want to protect it.

This brings us to the Moral Constraint, which is more commonly known as “AI alignment.” How can we build an AI whose motivations are aligned with human well-being? This is an area on which I have expertise, as a philosopher who specializes in the foundations of moral reasoning.

Unfortunately, I’m pretty sure that AI alignment is impossible.

How might an AI form a moral sense? There are basically two scenarios. In one scenario, moral facts are the kind of fact that one might simply figure out by thinking about them hard. In such a case, perhaps AIs would be good moral reasoners, and indeed even better moral reasoners than humans, in virtue of their advanced intellectual capacities.

In the second scenario, moral facts aren’t the sorts of things we can figure out by pure intellectual effort, but we can nonetheless train AIs to develop a moral sense in much the same way we train children in good behavior: by rewarding them when they’re good and punishing them when they’re bad.

The first scenario is doomed, for reasons first pointed out by the philosopher David Hume in his oft-quoted (and oft-misunderstood) passage where he indicates that there is a gap (not Hume’s term) between “is” and “ought.” Hume thought that reasoning is not some sort of truth-generator, a special faculty that takes intellectual effort as an input and spits out knowledge as an output. Rather, it is a process, where we move from one thought to the next, with our later thoughts hopefully (though not necessarily) supported by our earlier thoughts.

But the process is fallible. After all, if we are to reason our way to a moral conclusion, we must be reasoning from non-moral conclusions. Taking that into account, what operation of the mind could possibly take us from premises that describe the world to conclusions that tell us how to act?

The difficulty of the transition from “is” to “ought” is compounded because the kinds of moral conclusions that we’re interested in aren’t just intellectual appreciations of the moral law, but principles of action that will guide our conduct. That is, for someone to act morally, it’s not enough for them to be able to recite Kant’s dictum to always treat humanity as an end in itself. (AI can already do that, as any college ethics professor reading student papers can tell you.) We need AIs to actually treat humanity as an end in itself.

Ultimately, Hume thought that we could reach these kinds of moral conclusions. But (and this is crucial) we do so by drawing on the innate emotional capacities that evolved along with our species. Most notably, it involves drawing on our instinctive sense of human sympathy.

Our reasoning, then, shows us how to most effectively deploy that pre-existing sympathy. But this just shows that the first scenario is doomed, since the core question of AI alignment is how we can instill in AI a powerful sense of sympathy and concern for humanity in the first place.

This brings us to the second scenario, the training scenario, where we teach AIs to care about humans through a system of reward and punishment. This is the dominant paradigm in AI alignment research today. I’m afraid that it is doomed as well.

To see why, we need to think a bit about what the training approach is trying to accomplish. By punishing an AI when it does something we don’t like, and rewarding it when it does something we do like, we are providing AIs with a set of data points about which actions are good or bad, and getting them to develop general principles of action on the basis of those data points. They can then extrapolate from those principles and apply them in novel contexts.

However, this kind of learning runs into another famous philosophical problem, “the underdetermination of theory by data.” We can prove mathematically that an infinite number of theories are consistent with a finite amount of training data. That means that for any finite sequence of data, there are an infinite number of ways to extend that sequence that are all compatible with the existing sequence. In the context of AI alignment training, this means that no amount of training in controlled circumstances gives any kind of guarantee of how AIs will act in the next instance. We can give an AI a billion cases of moral and immoral action, but AIs can learn practically any lesson from all of this training, and thus act in any way whatsoever.

To make this a bit more concrete: we might put AI in a testing environment where it is tempted to do bad things, and punish it when it does bad things. This is what AI alignment researchers currently do. We hope to get it to learn “Don’t do bad things.” But it might instead just be learning “Don’t get caught.” Or, more ominously, “Don’t put yourself in a position where humans are able to punish you.” Or, more ominously still, “Act nicely when, and only when, humans are able to punish you for acting badly.”

An AI that learned this lesson would quickly go rogue when released into the wild.

But even these examples understate the difficulty of the problem, since there are an infinite number of lessons that AIs might learn from this training, and thus any course of action could be consistent with the training data. We might tell AIs not to pave over Phoenix to create a solar farm, but it goes and does it anyway, because we trained it not to fake-pave-over simulated-Phoenix, but it doesn’t apply that lesson to real Phoenix. Or it might learn not to pave over Phoenix, and pave over Tuscon instead. What principle might keep Phoenix safe but put Tuscon at risk? An infinite number of such principles! No finite amount of training can take that number below infinity.

This kind of concern might seem overblown. After all, we successfully train our kids to be good by rewarding and punishing them. But the difficulties of parenting are a good illustration of the problem. As any parent knows, young kids are little psychopaths who go to any lengths to avoid punishment or to lawyer your prior instructions to give themselves permission to do what they’re clearly not allowed to do. At a certain point, though, all the moral instruction kind of clicks, and they (for the most part) become decent people. What the underdetermination of theory by data teaches us is that this “click” is not the kind of thing that emerges mechanistically from reward and punishment. Rather, it’s a product of the way this training works on human moral psychology—i.e. by developing our innate emotional capacity for sympathy. Adult psychopaths lack this capacity, and thus never learn the right lesson.

AIs do not have a human moral psychology. They don’t have human moral emotions, because they don’t have human brains. It’s a radically different hardware, and so we have no reason at all to expect that AIs will—or can—learn the kinds of lessons that we hope for them to learn through alignment training. A psychopath cannot learn to care about others through a process of reward and punishment. And we have every reason to think that AIs are psychopaths, or perhaps something far more alien and far less disposed to human sympathy.

AI alignment is not something that works in theory but is difficult to put into practice. It’s something that doesn’t work in theory, and yet AI companies have decided to give it the old college try. I’m not opposed to trying—maybe the theory is wrong. But we’re relying on the success of a theoretically impossible endeavor because the AI labs have already resolved to demolish the Capacity Constraint. So AI alignment has to work... or else we’re doomed.

This is cartoonishly reckless.

Matt Lutz is an Associate Professor of Philosophy at Wuhan University and writes the Substack Humean Beings. https://humeanbeing.substack.com/

The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

YOUR OPTIONS

NEW POSTS TODAY

USERPOST DATE
second 04.18 06:09

OTHER TOPICS

DISCUSSIONS
Russia Invades Ukraine. Again
Sat, April 18, 2026 09:55 - 10015 posts
The next boogeyman: China
Sat, April 18, 2026 09:46 - 78 posts
Tariffs.... Wins/Losses
Sat, April 18, 2026 09:43 - 70 posts
AI Videos
Sat, April 18, 2026 09:23 - 12 posts
I must be a secret communist
Sat, April 18, 2026 09:20 - 36 posts
Another Putin Disaster
Sat, April 18, 2026 09:17 - 1590 posts
Game Companies are Morons.
Sat, April 18, 2026 09:14 - 295 posts
Cabinet Confirmations 47
Sat, April 18, 2026 09:09 - 12 posts
Russia should never interfere in any other nation's internal politics, meanwhile the USA and IMF is helping kill Venezuela
Sat, April 18, 2026 09:07 - 107 posts
Eric Swalwell...
Sat, April 18, 2026 08:06 - 39 posts
A.I Artificial Intelligence AI
Sat, April 18, 2026 06:09 - 426 posts
Music II
Sat, April 18, 2026 04:04 - 573 posts

FFF.NET SOCIAL