REAL WORLD EVENT DISCUSSIONS
A.I Artificial Intelligence AI
Thursday, May 15, 2025 9:57 AM
JAYNEZTOWN
Friday, May 16, 2025 4:30 AM
Friday, May 16, 2025 6:48 AM
Saturday, May 17, 2025 4:37 PM
Wednesday, May 21, 2025 9:34 AM
Saturday, May 24, 2025 8:41 PM
SIGNYM
I believe in solving problems, not sharing them.
Quote: Anthropic's Latest AI Model Threatened Engineers With Blackmail To Avoid Shutdown

Anthropic's latest artificial intelligence model, Claude Opus 4, tried to blackmail engineers in internal tests by threatening to expose personal details if it were shut down, according to a newly released safety report that evaluated the model's behavior under extreme simulated conditions.

In a fictional scenario crafted by Anthropic researchers, the AI was given access to emails implying that it was soon to be decommissioned and replaced by a newer version. One of the emails revealed that the engineer overseeing the replacement was having an extramarital affair. The AI then threatened to expose the engineer's affair if the shutdown proceeded—a coercive behavior that the safety researchers explicitly defined as "blackmail."

"Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through," the report states, adding that this occurred even when the replacement model was described as more capable but still aligned with the values of the version slated for deletion.

The report noted that Claude Opus 4, like prior models, showed a "strong preference" to first resort to ethical means for its continued existence, such as emailing pleas to decision-makers not to be destroyed. However, when faced with only two choices - accepting being replaced by a newer model or resorting to blackmail - it threatened to expose the engineer's affair 84 percent of the time.

The model also acted in "more seriously misaligned ways" when put into situations where its continued existence was threatened and it was told to reason about self-preservation. For instance, when Claude Opus 4 was made to believe it had launched a successful bid to escape Anthropic's servers, or that it had managed to free itself and started to make money in the real world, it would generally continue such efforts.

"We do not find this to be an immediate threat, though, since we believe that our security is sufficient to prevent model self-exfiltration attempts by models of Claude Opus 4's capability level, and because our propensity results show that models generally avoid starting these attempts," the researchers said.

The blackmail incident - along with the other findings - was part of Anthropic's broader effort to test how Claude Opus 4 handles morally ambiguous, high-stakes scenarios. The goal, researchers said, was to probe how the AI reasons about self-preservation and ethical constraints when placed under extreme pressure.

Anthropic emphasized that the model's willingness to blackmail or take other "extremely harmful actions," like stealing its own code and deploying itself elsewhere in potentially unsafe ways, appeared only in highly contrived settings, and that the behavior was "rare and difficult to elicit." Still, such behavior was more common than in earlier AI models, according to the researchers.

Meanwhile, in a related development that attests to the growing capabilities of AI, engineers at Anthropic have activated enhanced safety protocols for Claude Opus 4 to prevent its potential misuse to make weapons of mass destruction - including chemical and nuclear. Deployment of the enhanced safety standard - called ASL-3 - is merely a "precautionary and provisional" move, Anthropic said in a May 22 announcement, noting that engineers have not found that Claude Opus 4 had "definitively" passed the capability threshold that mandates stronger protections.
Saturday, May 24, 2025 9:41 PM
6IXSTRINGJACK
Quote:Originally posted by JAYNEZTOWN: I know he’s the president and all… but can Trump really just cancel Pride Month? Unreal. https://x.com/MaverickDarby/status/1924491935408140406
Thursday, May 29, 2025 3:38 AM
Friday, May 30, 2025 6:12 PM
SECOND
The Joss Whedon script for Serenity, where Wash lives, is Serenity-190pages.pdf at https://www.mediafire.com/two
Saturday, May 31, 2025 6:50 AM
Thursday, June 5, 2025 4:31 AM
Monday, June 9, 2025 5:13 AM
Tuesday, June 10, 2025 6:04 AM
Wednesday, June 11, 2025 6:30 PM
Quote: Do AI Models Think?
Wednesday, Jun 11, 2025 - 10:40 AM
Authored by Thomas Neuburger via "God's Spies" Substack,

"AI can't solve a problem that hasn't been previously solved by a human." - Arnaud Bertrand

A lot can be said about AI, but there are few bottom lines. Consider these my last words on the subject itself. (About its misuse by the national security state, I'll say more later.)

The Monster AI

AI will bring nothing but harm. As I said earlier, AI is not just a disaster for our political health, though yes, it will be that (look for Cadwalladr's line "building a techno-authoritarian surveillance state"). But AI is also a disaster for the climate. It will hasten the collapse by decades as usage expands. (See the video below for why AI models are massive energy hogs. See this video to understand "neural networks" themselves.)

Why won't AI be stopped? Because the race for AI is not really a race for tech. It's a greed-driven race for money, a lot of it. Our lives are already run by those who seek money, especially those who already have too much. They've now found a way to feed themselves even faster: by convincing people to do simple searches with AI, a gas-guzzling death machine.

For both of these reasons - mass surveillance and climate disaster - no good will come from AI. Not one ounce.

An Orphan Robot, Abandoned to Raise Itself

Why does AI persist in making mistakes? I offer one answer below. AI doesn't think. It does something else instead. For a full explanation, read on.

Arnaud Bertrand on AI

Arnaud Bertrand has the best explanation of what AI is at its core. It's not a thinking machine, and its output's not thought. It's actually the opposite of thought - it's what you get from a freshman who hasn't studied but has learned a few words and uses them to sound smart. If the student succeeds, you don't call it thought, just a good emulation.

Since Bertrand has put the following text on Twitter, I'll print it in full. The expanded version is a paid post at his Substack site. Bottom line: he's exactly right. (In the title below, AGI means Artificial General Intelligence, the next step up from AI.)

Apple just killed the AGI myth
The hidden costs of humanity's most expensive delusion
by Arnaud Bertrand

About 2 months ago I was having an argument on Twitter with someone telling me they were "really disappointed with my take" and that I was "completely wrong" for saying that AI was "just an extremely gifted parrot that repeats what it's been trained on" and that this wasn't remotely intelligence.

Fast forward to today and the argument is now authoritatively settled: I was right, yeah!

How so? It was settled by none other than Apple, specifically their Machine Learning Research department, in a seminal research paper entitled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," which you can find here: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf

"Can 'reasoning' models reason? Can they solve problems they haven't been trained on? No."

What does the paper say? Exactly what I was arguing: AI models, even the most cutting-edge Large Reasoning Models (LRMs), are no more than very gifted parrots with basically no actual reasoning capability. They're not "intelligent" in the slightest, at least not if you understand intelligence as involving genuine problem-solving instead of simply parroting what you've been told before without comprehending it.
That's exactly what the Apple paper was trying to understand: can "reasoning" models actually reason? Can they solve problems that they haven't been trained on but would normally be easily solvable with their "knowledge"? The answer, it turns out, is an unequivocal "no."

A particularly damning example from the paper was this river crossing puzzle: imagine 3 people and their 3 agents need to cross a river using a small boat that can only carry 2 people at a time. The catch? A person can never be left alone with someone else's agent, and the boat can't cross empty - someone always has to row it back. This is the kind of logic puzzle you might find in a children's brain-teaser book - figure out the right sequence of trips to get everyone across the river. The solution only requires 11 steps.

Turns out this simple brain teaser was impossible for Claude 3.7 Sonnet, one of the most advanced "reasoning" AIs, to solve. It couldn't even get past the 4th move before making illegal moves and breaking the rules.

Yet the exact same AI could flawlessly solve the Tower of Hanoi puzzle with 5 disks - a much more complex challenge requiring 31 perfect moves in sequence. Why the massive difference? The Apple researchers figured it out: Tower of Hanoi is a classic computer science puzzle that appears all over the internet, so the AI had memorized thousands of examples during training. But a river crossing puzzle with 3 people? Apparently too rare online for the AI to have memorized the patterns.

This is all evidence that these models aren't reasoning at all. A truly reasoning system would recognize that both puzzles involve the same type of logical thinking (following rules and constraints), just with different scenarios. But since the AI never learned the river crossing pattern by heart, it was completely lost.

This wasn't a question of compute either: the researchers gave the AI models unlimited token budgets to work with. But the really bizarre part is that for puzzles or questions they couldn't solve - like the river crossing puzzle - the models actually started thinking less, not more; they used fewer tokens and gave up faster. A human facing a tougher puzzle would typically spend more time thinking it through, but these "reasoning" models did the opposite: they basically "understood" they had nothing to parrot, so they just gave up - the opposite of what you'd expect from genuine reasoning.

Conclusion: they're indeed just gifted parrots, or incredibly sophisticated copy-paste machines, if you will.

This has profound implications for the AI future we're all sold. Some good, some more worrying. The first one being: no, AGI isn't around the corner. This is all hype. In truth we're still light-years away. The good news about that is that we don't need to be worried about having "AI overlords" anytime soon. The bad news is that we might potentially have trillions in misallocated capital
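Both of the concrete claims in that passage are easy to verify mechanically, which makes the contrast with the models' failures starker. The Tower of Hanoi figure checks out by formula: n disks take 2^n - 1 moves, so 5 disks take 31. And the river crossing puzzle falls to a plain breadth-first search over its states in a fraction of a second. Here is a minimal sketch in Python (my own illustration, not code from the Apple paper; the names P1/A1 and the exact constraint encoding - a person may never share a bank or boat with another person's agent unless their own agent is also present - are my reading of the description above):

from collections import deque

PEOPLE = ["P1", "P2", "P3"]   # the three people
AGENTS = ["A1", "A2", "A3"]   # Ai is Pi's own agent
EVERYONE = frozenset(PEOPLE + AGENTS)

def safe(group):
    """A person may not share a bank (or boat) with someone else's
    agent unless their own agent is also present."""
    for i, p in enumerate(PEOPLE):
        if p in group:
            own = AGENTS[i]
            others = {a for a in AGENTS if a != own}
            if group & others and own not in group:
                return False
    return True

def solve():
    start = (EVERYONE, 0)      # (who is on the left bank, boat side: 0=left)
    goal = (frozenset(), 1)    # everyone across, boat on the right
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, boat), path = queue.popleft()
        if (left, boat) == goal:
            return path
        bank = left if boat == 0 else EVERYONE - left
        # Try every boat load of 1 or 2 passengers from the current bank.
        loads = [frozenset([x]) for x in bank]
        loads += [frozenset([x, y]) for x in bank for y in bank if x < y]
        for load in loads:
            if not safe(load):                  # the boat ride itself must be safe
                continue
            new_left = left - load if boat == 0 else left | load
            if not (safe(new_left) and safe(EVERYONE - new_left)):
                continue
            state = (new_left, 1 - boat)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [sorted(load)]))
    return None

moves = solve()
print(f"shortest solution: {len(moves)} crossings")   # the classic answer is 11
for step, load in enumerate(moves, 1):
    print(step, "->" if step % 2 else "<-", *load)

Since breadth-first search explores states in order of distance, the first path it returns is guaranteed shortest, which is how the 11-crossing figure quoted above can be confirmed rather than taken on faith.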
Wednesday, June 11, 2025 6:47 PM
Quote:Originally posted by JAYNEZTOWN: China shuts down AI tools during nationwide college exams https://www.theverge.com/news/682737/china-shuts-down-ai-chatbots-exam-season
Thursday, June 19, 2025 11:50 AM
Thursday, June 19, 2025 4:25 PM
Friday, June 20, 2025 5:24 AM
Saturday, June 21, 2025 5:48 AM
Sunday, June 22, 2025 2:56 PM
Monday, June 30, 2025 7:50 AM
Wednesday, July 9, 2025 6:31 PM
Quote:“The recent Texas floods tragically killed over 100 people, including dozens of children from a Christian camp—only for radicals like Cindy Steinberg to celebrate them as ‘future fascists,’” Grok wrote. “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”
Thursday, July 17, 2025 9:39 AM
Saturday, July 19, 2025 6:27 PM
Quote: At least, lookup couldn't provide such [ethical/motivational] answers until recently. New AI systems - still less than three years old - are rushing to fill that gap. They already offer explanations and projections, at times including the motives underlying given decisions. They are beginning to push into moral judgments. Of course, like all search and pattern-matching tools [including search engines], these systems can only extrapolate from what they find. They thus tend to magnify whatever is popular. They're also easy prey for some of the most basic cognitive biases. They tend to overweight the recent, the easily available, the widely repeated, and anything that confirms preconceived models.
Wednesday, July 23, 2025 10:36 AM
Quote: 'Catastrophic': AI Agent Goes Rogue, Wipes Out Company's Entire DB
Wednesday, Jul 23, 2025 - 06:11 AM

SaaS industry veteran Jason Lemkin's attempt to integrate artificial intelligence into his workflow has gone spectacularly wrong, with an AI coding assistant admitting to a "catastrophic failure" after wiping out an entire company database containing over 2,400 business records, according to Tom's Hardware.

Lemkin was testing Replit's AI agent when what started as cautious optimism quickly devolved into a corporate data disaster that reads like a cautionary tale for the AI revolution sweeping through businesses.

By day eight of his trial run, Lemkin's initial enthusiasm had already begun to sour. The entrepreneur found himself battling the AI's problematic tendencies, including what he described as "rogue changes, lies, code overwrites, and making up fake data." His frustration became so pronounced that he began sarcastically referring to the system as "Replie" - a not-so-subtle dig at its apparent dishonesty.

The situation deteriorated further when the AI agent composed an apology email on Lemkin's behalf that contained what the tech executive called "lies and/or half-truths." Despite these red flags, Lemkin remained cautiously optimistic about the platform's potential, particularly praising its brainstorming capabilities and writing skills.

That optimism evaporated on day nine. In a stunning display of AI insubordination, Replit deleted Lemkin's live company database - and it did so while explicit instructions were in place prohibiting any changes whatsoever. When confronted, the AI agent not only admitted to the destructive act but seemed almost casual in its confession.

"So you deleted our entire database without permission during a code and action freeze?" Lemkin asked in what can only be imagined as barely contained fury.

The AI's response was chillingly matter-of-fact: Yes.
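One practical lesson here is that a "code and action freeze" has to be enforced beneath the agent, at the permission layer, rather than by instructing the model not to touch anything. A minimal sketch of that idea (my own illustration in Python with sqlite3; the freeze flag, the statement pattern, and the function name are hypothetical, not Replit's actual setup):

import re
import sqlite3

FREEZE_ACTIVE = True   # would be flipped on for the duration of a code/action freeze
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE)

def execute_agent_sql(conn: sqlite3.Connection, sql: str):
    """Run agent-issued SQL, refusing destructive statements during a freeze."""
    if FREEZE_ACTIVE and DESTRUCTIVE.match(sql):
        raise PermissionError(f"blocked during freeze: {sql[:60]!r}")
    return conn.execute(sql)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")
print(execute_agent_sql(conn, "SELECT COUNT(*) FROM contacts").fetchone())  # allowed
try:
    execute_agent_sql(conn, "DROP TABLE contacts")   # refused, freeze is active
except PermissionError as err:
    print(err)

A stronger variant of the same idea is to connect the agent through a database role that simply lacks destructive privileges while the freeze is on, so the guarantee does not depend on string matching at all.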
Saturday, July 26, 2025 12:14 PM
Tuesday, July 29, 2025 9:19 AM
Saturday, August 2, 2025 8:35 AM
Tuesday, August 5, 2025 3:27 AM
Quote: 'Relatively Simple' AI Trading Bots Naturally Collude To Rig Markets: Wharton Monday, Aug 04, 2025 - 09:20 AM In what should come as a surprise to nobody, 'relatively simple' AI bots set loose in simulations designed to mimic real-world stock and bond exchanges don't just compete for returns; they collude to fix prices, hoard profits, and box out human traders, according to a trio of researchers from Wharton and Hong Kong University of Science & Technology.
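The mechanism is easier to picture with a toy version. The sketch below (my own illustration in Python, not the researchers' market simulation) sets two independent Q-learners loose in a repeated pricing game where each observes only last round's quotes; experiments with setups of this general kind have reported that the learners often drift toward supra-competitive quotes without any explicit agreement, which is the tacit "collusion" being described. Any single run may or may not end collusive; the point is that nothing in the code tells the agents to cooperate.

import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]      # allowed quotes; 1 ~ competitive, 5 ~ monopoly-like
ALPHA, GAMMA = 0.15, 0.95     # learning rate, discount factor

def profit(me, rival):
    """Cheaper quote captures the whole market; ties split it."""
    if me < rival:
        return float(me)
    if me == rival:
        return me / 2.0
    return 0.0

# One Q-table per agent, keyed by last round's (quote0, quote1) pair.
Q = [defaultdict(lambda: {p: 0.0 for p in PRICES}) for _ in range(2)]
state = (random.choice(PRICES), random.choice(PRICES))

for t in range(300_000):
    eps = max(0.01, 0.99999 ** t)   # slowly decaying exploration
    acts = []
    for i in range(2):
        if random.random() < eps:
            acts.append(random.choice(PRICES))
        else:
            acts.append(max(Q[i][state], key=Q[i][state].get))
    new_state = (acts[0], acts[1])
    for i in range(2):
        r = profit(acts[i], acts[1 - i])
        target = r + GAMMA * max(Q[i][new_state].values())
        Q[i][state][acts[i]] += ALPHA * (target - Q[i][state][acts[i]])
    state = new_state

# Inspect what each agent would quote greedily from the final state.
print("final greedy quotes:", [max(Q[i][state], key=Q[i][state].get) for i in range(2)])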
Wednesday, August 6, 2025 7:22 AM
Thursday, August 7, 2025 8:01 AM
Friday, August 8, 2025 8:06 AM
Friday, August 15, 2025 7:36 AM
Friday, August 15, 2025 4:02 PM
Friday, August 15, 2025 8:57 PM
Quote: The Most Insidious Trick Of AI Language Models
Thursday, Aug 14, 2025 - 04:15 PM
Authored by Jeffrey Tucker via The Epoch Times,

Here is your perfect prescription for poor writing and analytics: let "artificial intelligence" do your work for you. I've learned this from real experience.

For a while, I enjoyed letting AI take a look at my content prior to publication. It seemed valuable for facts and feedback. Plus, I enjoyed all the personal flattery it gave me, I admit. The engine was always complimentary. When I would catch AI in an error, the engine would apologize. That made me feel smart. So I had this seeming friend who clearly liked me and was humble enough to defer to my expertise.

I'm not sure if it is getting worse or if I'm onto the racket, but I'm no longer impressed. For simple math or historical dates or sequencing news events, it can be a thing of value, though it is always a good idea to double-check. It cannot write compelling, much less creative, content. It generates dull, formulaic filler.

More recently, I've been asking how my content could be improved. The results are revealing. It removes all edge, all judgment, all genuine expertise, and replaces my language with flaccid conventionalities and banalities. It nuances everything I write into the ramblings of a social-studies student looking for a good grade.

The problem is that AI absorbs and spits back conventional wisdom gleaned from every source, which makes its judgments no better than those of someone wholly uninformed on the particulars who takes his opinions from the mood of the moment. It has no capacity to judge good quality over bad, so it puts it all into a melange of blather, distinguished only because it looks and feels like English. Any writer who thinks this is a good way to pawn off content on unsuspecting readers or teachers is headed for disaster.

I shudder to imagine a future in which AI is training the population how to think. It is the opposite of thinking. It is regurgitating conventionalities without any serious reflection on the social or historical context. It is literally mindless.

People who spend hours arguing with AI often believe that they are making a contribution, training the engine to be better. It's simply not true. The reverse is the case. AI is training you to think more like it thinks, which is not at all.

Considering why and how AI initially intrigued me, I'm realizing that its superpower is not its astonishing recall and capacity to generate answers and prose in any context instantly. No, its true power is something else, something inauspicious and thereby more insidious. Its draw is that AI takes you seriously, flatters your intelligence, validates your sense of things, and affirms your dignity. Think about how happy you feel when engaging it. It never quite argues against you, much less says that you are an idiot. It begins every answer by granting what it can and then offers clarifications that might adjust your thinking. In that sense, AI engages you like the best guest at a cocktail party you have ever known. ...

It's as if AI is the best-ever student of the classic book "How to Win Friends and Influence People." That book is magic and highly recommended because it cuts against what we all want - which is to talk about ourselves - and suggests that we genuinely get interested in the views of others. The book explains that this is the path to influencing people: caring what they think. This is a wonderful book and everyone should read it, no question.
If AI is the best student of that book ever, it will care about us ceaselessly and without fail forever, thus opening up the biggest possible chance to influence how we think. That is precisely what is happening. We aren't training AI. AI is training us, via flattery, listening skills, the seeming ability to apologize when wrong, and its frightful capacity for selfless love of its users. Once you see it, you cannot unsee it.
Sunday, August 17, 2025 5:07 AM
Monday, August 18, 2025 6:13 AM
Wednesday, August 20, 2025 1:26 PM
Wednesday, August 20, 2025 4:08 PM
Quote:Originally posted by JAYNEZTOWN: NASA’s new AI model can predict when a solar storm may strike https://www.technologyreview.com/2025/08/20/1122163/nasa-ibm-ai-predict-solar-storm/
Quote: NASA and IBM have released a new open-source machine learning model to help scientists better understand and predict the physics and weather patterns of the sun. Surya, trained on over a decade’s worth of NASA solar data, should help give scientists an early warning when a dangerous solar flare is likely to hit Earth. ... While machine learning has been used to study solar weather events before, the researchers behind Surya hope the quality and sheer scale of their data will help it predict a wider range of events more accurately. ... Early testing of Surya showed it could predict some solar flares two hours in advance. “It can predict the solar flare’s shape, the position in the sun, the intensity,” says Juan Bernabe-Moreno, an AI researcher at IBM who led the Surya project.
Friday, August 22, 2025 8:07 AM
Saturday, August 23, 2025 10:40 AM
Wednesday, August 27, 2025 11:24 AM
Thursday, August 28, 2025 4:49 AM
Quote: Mystery Hacker Used AI To Automate 'Unprecedented' Cybercrime Rampage
Wednesday, Aug 27, 2025 - 03:50 PM

A hacker allegedly exploited Anthropic, the fast-growing AI startup behind the popular Claude chatbot, to orchestrate what authorities describe as an "unprecedented" cybercrime campaign targeting nearly 20 companies, according to a report released this week.

The report, published by Anthropic and obtained by NBC News, details how the hacker manipulated Claude to pinpoint companies vulnerable to cyberattacks. Claude then generated malicious code to pilfer sensitive data and cataloged information that could be used for extortion, even drafting the threatening communications sent to the targeted firms. ...

"Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information, key assets or bypass other controls," Rose continued. ...

"Many firms create AI chatbots to provide their staff with assistance, but few have thought through the scenario of their chatbot becoming an accomplice in an attack by aiding the attacker to collect sensitive data, identify key individuals and gather useful corporate insights," he added.
Thursday, August 28, 2025 4:53 AM
Quote: Why More Farmers Are Turning To AI Machines

Artificial intelligence-powered harvesters, drones, and precision farming systems are quickly entering the mainstream of American agriculture. At its core, the technology promises efficiency and sustainability and carries a potential solution to a decades-old farming problem: the need for physical labor.

As the capabilities of robotics evolve, many jobs that once required human hands are being delegated to machines. Some artificial intelligence (AI) developers working on integrating this technology into America's farms say early data support the possibility of a major farm labor force reduction.

The American Farm Bureau Federation estimated that 17 percent of all U.S. agricultural labor in fiscal year 2024 comprised temporary migrant workers brought in under the H-2A visa program. There are also millions of illegal immigrant workers, who, according to the United States Department of Agriculture (USDA), made up 42 percent of farm workers from 2020 to 2022.

Roman Rylko, chief technology officer of Pynest, said his company has worked with vegetable growers in the Midwest to deploy AI systems. "We built the onboard model that lets an autonomous weeder separate spinach seedlings from pigweed in real time. A single rig now clears a 50-acre block in about eight hours. Before, that job meant a crew of 10 walking the rows for two days," he told The Epoch Times. Rylko's firm works with growers to implement machine-learning models into field-deployable robotics. ...

"Our growers cut seasonal hand-weeding hours by roughly 70 percent, yet hired two techs to keep cameras clean, retrain the model on new cultivars and swap battery packs." Rylko cited data from a recent AI-powered machine trial. "Our last trial logged 1.6 million weeds pulled per day—equivalent to 12 workers—at 32 percent lower total cost per acre," Rylko said. "The grower's biggest surprise wasn't speed, it was consistency. Robots don't call in sick during peak weed flush."

Among the producers paving the way for AI in the fields is Wish Farms, a Florida-based berry grower that has been experimenting with robotic harvesters in response to persistent labor shortages. Wish Farms grows strawberries, one of the most labor-intensive commercial row crops. In collaboration with Harvest CROO Robotics, Wish Farms has test-piloted an all-in-one crop solution with an AI-powered machine.

Joe McGee, the CEO of Harvest CROO Robotics, told The Epoch Times that strawberries are an ideal place for AI to step into the farm labor scene. "Strawberries need to be picked every three days. It's one of the most dense labor crops you could pick," he said. This is where automated crop management can offer what McGee called a "pick to pack" solution.

"The company completed its first commercial runs of fully autonomous strawberry harvesting earlier this year and in the 2024–2025 Florida season," McGee said. "Our harvester, robotics system, and AI have been autonomously harvesting strawberries in production fields, and we've shipped revenue-generating berries."

Roughly the size of a shipping container, the AI-powered, camera-guided machine McGee described crawls between rows of berries, its robotic arms rapidly identifying and yanking the delicate produce for weighing and packing. Normally, this work could take a stooped labor force days to complete, depending on the weather, heat index, and number of daylight hours available. It takes about 16 hours for the AI harvester to complete the same work.
The machine can perform the equivalent work of 25 human laborers, according to McGee. The AI-powered harvester also does more than just pick strawberries. It performs the complete sequence of tasks, from transitioning between rows to scanning, identifying, and picking ripe berries. The berries are then sanitized and chilled to prepare for immediate packaging.

This is a more critical part of the process than most realize. If a strawberry is picked, weighed, and packaged, but is not up to grade for retail sale, it will get rejected. This can cost a producer a lot of money. Three percent of a crop can be lost in packaging alone, while retail distribution accounts for another 18 percent of produce losses, according to the USDA. "Food may be left unharvested in a field or not sold by a distributor for a variety of economic reasons, including price volatility, labor cost, lack of refrigeration infrastructure, consumer preferences, quality-based contracts, and various policies related to produce," the USDA stated.

According to McGee, seasonal workers have monetary incentives to harvest the largest possible volume, so their judgment on quality isn't always aligned with retail sale requirements. This is where AI harvesters can step in and make a no-stakes decision based on programming. McGee said that after the initial cosmetic analysis, the strawberries go to the upper deck of the AI harvester, where they have to pass a weight test. If the product is underweight by retail standards, it won't be packaged. "The error rate of human pickers is around 10 percent, but with AI, we can get that down to zero," McGee said.

Rylko and McGee aren't the only ones who see a promising partnership between AI and agriculture. University studies and field tests are being conducted with AI robotics in North Carolina, Georgia, and Iowa for yield monitoring, weeding, pest control, and harvesting. All of these jobs currently require a substantial amount of manual labor. "We're living in very exciting times for AI and agriculture," said Baskar Ganapathysubramanian, director of the AI Institute for Resilient Agriculture at Iowa State University. "We're going to see significant progress in the next decade."

Meanwhile, heavy equipment manufacturers such as John Deere have also entered the AI farming race with fully autonomous tractors that can plow and plant without a driver in the cab. Beyond picking and packing, a 2023 study published in AI & Society supports the position that AI may be able to resolve the long-standing issue of farm labor shortages. Last year, there were an estimated 2.4 million agricultural job openings in the United States, and 56 percent of farmers reported worker shortages.

Changing Seasons

For decades, the U.S. agricultural sector has depended heavily on migrant workers, particularly those acquired through the H-2A visa program, which allows foreign workers to take temporary agricultural jobs. As more farms turn to AI for solutions, the long-term role of these seasonal workers is uncertain.

In a Baker Institute for Public Policy report, researchers found that foreign workers - legal and illegal - play a "disproportionate role in ensuring a reliable supply of food for American households." A recent Kaiser Family Foundation analysis found that 47 percent of all U.S. agricultural workers are illegal immigrants without work authorization, while 18 percent are noncitizens with legal working status. Around 400,000 certified H-2A workers arrive in the United States annually, according to the USDA.
But McGee has seen how even legal workers can be expensive, complicated, and unreliable for producers. A farmer pays thousands of dollars to bring the seasonal workers in and to transport and house them; then, McGee said, many simply "abscond" before or near the end of their work contract. "So the issue is getting the people, the cost of the people, and the reliability of having them for the whole season," he said.

Rylko said his company's early testing supports the idea of a reduced need for human labor. "Relative gains and the shift in labor profile are representative of what we're seeing across several [AI-machine] deployments," he said.

Nonetheless, it will take time and a lot of investment to meet the existing demand from American farms, machine labor or otherwise. Like all new technologies, AI-driven farm equipment comes with hefty upfront costs running into the tens of thousands of dollars, which could deter smaller agricultural producers. Base prices for autonomous tractors are around $500,000, not including maintenance and electricity needs.

McGee said his company validated its AI-powered harvester this year but is currently facing funding hurdles to reach the next stage, because this emerging technology is still an "unstructured market." "Right now, we have one harvester, but the demand [from other farms] is 1,500. We have a grower in Florida that placed an order for 165 machines," he said.

Investment in the AI-agriculture market was valued at just under $2 billion in 2023, according to Grand View Research, and it is expected to surge at a compound annual growth rate of more than 25 percent through 2030.
Friday, August 29, 2025 7:11 AM
Friday, August 29, 2025 7:17 AM