China

China Moves To Curb OpenClaw AI Use At Banks, State Agencies (bloomberg.com) 18

An anonymous reader quotes a report from Bloomberg: Chinese authorities moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. Government agencies and state-owned enterprises, including the largest banks, have received notices in recent days warning them against installing OpenClaw software on office devices for security reasons [...]. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal, some of the people said.

Certain employees, including those at state-run banks and some government agencies, were banned from installing OpenClaw on office computers and also personal phones using the company's network, some of the people said. One person said the ban was also extended to the families of military personnel. Other notices stopped short of calling for an outright ban on OpenClaw software, saying only that prior approval is needed before use, the people said. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. [...]

Despite the potential security risks, companies from Tencent to JD.com Inc. have been rolling out OpenClaw apps to try to capitalize on the groundswell of enthusiasm, while several local government agencies have declared millions of yuan in subsidies for companies that develop atop the platform. [...] Tech giants like Tencent and Alibaba, along with AI upstarts ranging from Moonshot to MiniMax, have rolled out their own tweaks of the software, touting simple, one-click adoption. A slew of government agencies, in cities from Shenzhen to Wuxi, have issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw to make advances. The frenzy has helped drive up shares of AI model developer MiniMax nearly 640% since its listing just two months ago. It's now worth about $49 billion, surpassing Baidu -- once viewed as the frontrunner in Chinese AI development -- in market value. The company launched MaxClaw, an agent built on OpenClaw, in late February.

Portables (Apple)

ASUS Executive Says MacBook Neo is 'Shock' to PC Industry (pcmag.com) 226

ASUS says the MacBook Neo is a "shock" to the Windows PC ecosystem. "In the past, Apple's pricing situation has always been high, so for them to release a very budget-friendly product, this is obviously a shock to the entire industry," said ASUS co-CEO S.Y. Hsu in a Tuesday earnings call. While he expects PC makers to respond, rising AI-driven memory shortages could push hardware prices higher across the industry. PCMag reports: Hsu said he believes all the PC players -- including Microsoft, Intel, and AMD -- take the MacBook Neo threat seriously. "In fact, in the entire PC ecosystem, there have been a lot of discussions about how to compete with this product," he added, given that rumors about the MacBook Neo have been making the rounds for at least a year. Despite the competitive threat, Hsu argued that the MacBook Neo could have limited appeal. He pointed to the laptop's 8GB of "unified memory," or what amounts to its RAM, and how customers can't upgrade it.

He also described the MacBook Neo as a "content consumption" device, similar to an iPad. "This is different from the use case of a mainstream notebook," which can handle more compute-intensive tasks, Hsu said. "How big of an impact [the MacBook Neo] will have on the PC industry will still require some time for us to observe," Hsu said while suggesting it might not gain traction among Windows PC users due to software differences. "Of course, the entire Windows PC ecosystem will push out products to compete against Apple," he added.

AI

Yann LeCun Raises $1 Billion To Build AI That Understands the Physical World (wired.com) 61

An anonymous reader quotes a report from Wired: Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. "The idea that you're going to extend the capabilities of LLMs [large language models] to the point that they're going to have human-level intelligence is complete nonsense," he said in an interview with WIRED.

The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel. AMI (pronounced like the French word for friend) aims to build "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe," the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025. [...]

LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability. LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. The company will first work with partners such as Toyota and Samsung, and then will learn how to apply its technology more broadly. Eventually, he says, AMI intends to develop a "universal world model," which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in. "It's very ambitious," he says with a smile.

AI

After Outages, Amazon To Make Senior Engineers Sign Off On AI-Assisted Changes (ft.com) 83

UPDATE: Amazon later published a blog post to address what it calls "inaccuracies" in the Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December.

An anonymous Slashdot reader had shared this report from the Financial Times: Amazon's ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a "deep dive" into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a "trend of incidents" in recent months, characterized by a "high blast radius" and "Gen-AI assisted changes" among other factors, according to a briefing note for the meeting seen by the FT. Under "contributing factors" the note included "novel GenAI usage for which best practices and safeguards are not yet fully established."

"Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT. The note ahead of Tuesday's meeting did not specify which particular incidents the group planned to discuss. [...] Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly "This Week in Stores Tech" (TWiST) meeting on a "deep dive into some of the issues that got us here as well as some short immediate term initiatives" the group hopes will limit future outages.

He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will now require more senior engineers to sign off on any AI-assisted changes, Treadwell added. Amazon said the review of website availability was "part of normal business" and it aims for continual improvement. "TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store," the company said.

Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.
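
FHE permits arbitrary computation on ciphertexts; a far weaker but easy-to-run illustration of the underlying idea is textbook RSA's multiplicative homomorphism, where multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. A minimal sketch with toy parameters (not secure, and not FHE):

```python
# Toy illustration of computing on encrypted data (NOT FHE, not secure):
# textbook RSA is multiplicatively homomorphic.
p, q = 61, 53                       # toy primes; real keys use ~2048-bit moduli
n = p * q                           # 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 7, 11
c_product = (encrypt(m1) * encrypt(m2)) % n  # computed on ciphertexts only
print(decrypt(c_product))                    # 77, i.e. m1 * m2
```

The point of FHE hardware like Heracles is making this kind of ciphertext-side computation fast enough to be practical for arbitrary programs, not just a single multiplication.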

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips -- a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
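
Assuming the queries run sequentially at the quoted per-query latencies, the aggregate figures check out:

```python
# Back-of-the-envelope check of the throughput claim above:
# 15 ms per query on a Xeon vs. 14 us per query on Heracles,
# scaled to 100 million voter-ballot verifications run one after another.
N_QUERIES = 100_000_000

cpu_seconds = N_QUERIES * 15e-3        # 15 ms per query
heracles_seconds = N_QUERIES * 14e-6   # 14 us per query

cpu_days = cpu_seconds / 86_400
heracles_minutes = heracles_seconds / 60

print(f"CPU:      {cpu_days:.1f} days")         # ~17.4 days
print(f"Heracles: {heracles_minutes:.1f} min")  # ~23.3 minutes
```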

The Courts

Amazon Wins Court Order To Block Perplexity's AI Shopping Bots (cnbc.com) 29

Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote.

Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."

The Almighty Buck

Silicon Valley Is Buzzing About This New Idea: AI Compute As Compensation 86

sziring shares a report from Business Insider: Silicon Valley has long competed for talent with ever-richer pay packages built around salary, bonus, and equity. Now, a fourth line item is creeping into the mix: AI inference. As generative AI tools become embedded in software development, the cost of running the underlying models -- known as inference -- is emerging as a productivity driver and a budget line that finance chiefs can't ignore.

Software engineers and AI researchers inside tech companies have already been jousting for access to GPUs, with this AI compute capacity being carefully parceled out based on which projects are most important. Now, some tech job candidates have begun asking what AI compute budget they will have access to if they decide to join.

"I am increasingly asked during candidate interviews how much dedicated inference compute they will have to build with Codex," Thibault Sottiaux, engineering lead at OpenAI's Codex, the startup's AI coding service, wrote on X recently. He added that usage per user is growing much faster than overall user growth, a sign that AI compute is becoming even scarcer and more valuable. That scarcity is reshaping how engineers think about their work and pay.

"The inference compute available to you is increasingly going to drive overall software productivity," said OpenAI President Greg Brockman.

The report cites a recent compensation submission from a software engineer that listed "Copilot subscription" as part of the pay and benefits. "OpenAI and Anthropic should create recruitment sites where their clients can advertise roles, listing the token budget for the job alongside the salary range," said Peter Gostev, AI capability lead at Arena, a startup that measures the performance of models.

Tomasz Tunguz of Theory Ventures predicts AI inference will be the fourth component of engineering compensation, alongside salary, bonus, and equity. "Will you be paid in tokens? In 2026, you likely will start to be," Tunguz said.

AT&T

AT&T Outlines $250 Billion US Investment Plan To Boost Infrastructure In AI Age (reuters.com) 12

AT&T plans to invest more than $250 billion over the next five years to expand U.S. telecom infrastructure for the AI age. The company says it will also hire thousands of technicians while partnering with AST SpaceMobile to extend coverage to remote areas. Reuters reports: Rapid adoption of artificial intelligence, cloud computing and connected devices has prompted telecom operators to invest heavily in fiber and 5G networks as they also seek to fend off intensifying competition from cable broadband providers. AT&T, which has about 110,000 employees in the U.S., said the new hires will help build and maintain its infrastructure. The outlay includes capital expenditure and other spending, the company said.

The spending will focus on expanding its fiber and wireless networks, including accelerating deployment of fiber broadband, 5G home internet and satellite connectivity to extend coverage across urban, suburban and rural areas. [...] AT&T is also working with satellite partner AST SpaceMobile to expand connectivity to remote regions where traditional network infrastructure is difficult to deploy. The company said it would continue spending on the FirstNet network built for first responders and bolster investment in network security and artificial intelligence-driven threat detection.

Oracle

OpenAI Is Walking Away From Expanding Its Stargate Data Center With Oracle (cnbc.com) 41

OpenAI is reportedly backing away from expanding its AI data center partnership with Oracle because newer generations of Nvidia GPUs may arrive before the facility is even operational. CNBC reports: Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle's debt-fueled expansion. OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.

The current Abilene site is expected to use Nvidia's Blackwell processors, and the power isn't projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia's next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality.

In a post on X, Oracle called the reports "false and incorrect." However, it only said existing projects are on track and didn't address expansion plans.

CNBC notes: "Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger."

AI

Claude AI Finds Bugs In Microsoft CTO's 40-Year-Old Apple II Code (theregister.com) 87

An anonymous reader quotes a report from The Register: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers."

In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas without modification Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior" where, if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set if the line is not found, and branch to an error routine.
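
The bug pattern generalizes beyond the 6502: a routine signals "not found" via a flag (here, the carry flag), and the caller silently uses the returned pointer anyway. A hypothetical Python model of that pattern (invented names and line numbers, not Russinovich's actual routine):

```python
# A line-search routine in the style the article describes: on a miss it
# returns a flag ("carry set") plus a pointer to the *next* line or past
# the end -- never to the requested line.
def find_line(program, target):
    """Return (found, index). found=False mirrors carry set on failure."""
    for i, (line_no, _) in enumerate(program):
        if line_no == target:
            return True, i            # carry clear: exact match
        if line_no > target:
            return False, i           # carry set: points at the next line
    return False, len(program)        # carry set: points past the end

program = [(10, 'PRINT "HI"'), (20, 'GOTO X'), (30, 'END')]

# Buggy caller: ignores the flag and jumps to whatever index came back.
_, idx = find_line(program, 25)          # line 25 does not exist
buggy_target = program[idx][0]           # silently lands on line 30

# Fixed caller: checks the flag and reports an error instead.
found, idx = find_line(program, 25)
fixed_target = program[idx][0] if found else None  # None -> raise an error

print(buggy_target, fixed_target)  # 30 None
```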

The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," said one comment to Russinovich's post.

Facebook

Meta Acquires Moltbook, the Social Network For AI Agents 30

Axios reports that Meta has acquired Moltbook, the viral, Reddit-like social network designed for AI agents. Humans are welcome, but only to observe. Axios reports: The deal brings Moltbook's creators -- Matt Schlicht and Ben Parr -- into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. The deal is expected to close mid-March, Meta says, with the pair starting at MSL on March 16. When it launched in late January, Moltbook was labeled the "most interesting place on the internet" by open-source developer and writer Simon Willison. "Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned."

In an internal post seen by Axios, Meta's Vishal Shah said existing Moltbook customers can temporarily continue using the platform. "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners." He added: "Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks."

Businesses

EQT Eyes $6 Billion Sale of SUSE (reuters.com) 31

Private equity firm EQT AB is reportedly exploring a sale of SUSE that could value the open-source Linux pioneer at up to $6 billion, roughly doubling the valuation since EQT took the company private in 2023. Reuters reports: EQT has hired investment bank Arma Partners to sound out a group of private equity investors for a possible sale of the company, said the sources, who requested anonymity to discuss confidential matters. The deliberations are at an early stage and there is no certainty that EQT will proceed with a transaction, the sources said. [...] The potential deal comes amid a broader selloff in software stocks, which has disrupted mergers and acquisitions activity. Investors are concerned that new artificial intelligence tools could displace many existing software products, weighing on technology valuations and making deals harder to price.

Some investors, however, see Luxembourg-headquartered SUSE as a potential beneficiary of AI adoption, arguing that demand for enterprise-grade infrastructure software is likely to grow as companies build and deploy more AI applications. The company generates about $800 million in revenue and more than $250 million in earnings before interest, taxes, depreciation, and amortization (EBITDA) and could fetch between $4 billion and $6 billion in a sale, the sources said.
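
Those figures imply a fairly conventional enterprise-software EBITDA multiple; a quick check using the round numbers quoted above:

```python
# Implied EBITDA multiple for the rumored SUSE price range.
ebitda = 250e6              # "more than $250 million" in EBITDA
low, high = 4e9, 6e9        # "between $4 billion and $6 billion"

low_multiple = low / ebitda     # 16x at the low end
high_multiple = high / ebitda   # 24x at the high end
print(low_multiple, high_multiple)  # 16.0 24.0
```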

AI

Samsung Wants To Let You Vibe Code Your Galaxy Phone Experience 34

Samsung says it's thinking about bringing "vibe coding" to future Galaxy phones, allowing users to describe apps or interface changes in plain language and have AI generate the code. TechRadar interviewed Won-Joon Choi, Samsung's head of mobile experience, to learn more about the plans. Here's an excerpt from their report: As noted by Won-Joon Choi, the usefulness of vibe coding on smartphones is that it opens up the "possibility of customizing your smartphone experience in new ways, not just your apps but your UX." He added, "Right now we're limited to premade tools, but with vibe coding, users could adjust their favorite apps or make something customized to their needs. So vibe coding is very interesting, and something we're looking into." [...]

Samsung recently debuted the Galaxy S26 series of phones and made a point of not calling them smartphones -- they're "AI phones" now. This certainly rang true with the majority of upgrades to the devices being AI software-focused, like the new Now Nudge and expanded Audio Eraser tools, with the biggest hardware bump for the base models coming via the 39% improved NPU processing (the processor in charge of on-device AI tasks). It also teased the debut of Perplexity on its phones, joining as an alternative to the Gemini assistant, and hinted that other AI models could get the same treatment in the future.

Security

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]

Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. "And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."

Robotics

Qualcomm's New Arduino Ventuno Q Is an AI-Focused Computer Designed For Robotics (engadget.com) 25

Qualcomm and Arduino have unveiled the Arduino Ventuno Q, a new AI-focused single-board computer built for robotics and edge systems. Engadget reports: Called the Arduino Ventuno Q, it uses Qualcomm's Dragonwing IQ8 processor along with a dedicated STM32H5 low-latency microcontroller (MCU). "Ventuno Q is engineered specifically for systems that move, manipulate and respond to the physical world with precision and reliability," the company wrote on the product page. The Ventuno Q is more sophisticated (and expensive) than Arduino's usual AIO boards, thanks to the Dragonwing IQ8 processor that includes an 8-core ARM Cortex CPU, Adreno A623 GPU and Hexagon Tensor NPU that can hit up to 40 TOPS. It also comes with 16GB of LPDDR5 RAM, along with 64GB of eMMC storage and an M.2 NVMe Gen 4 slot to expand that. Other features include Wi-Fi 6, Bluetooth 5.3, 2.5Gbps Ethernet and USB camera support.

The Ventuno Q includes Arduino App Lab, with pre-trained AI models including LLMs, VLMs, ASR, gesture recognition, pose estimation and object tracking, all running offline. It's designed for AI systems that run entirely offline like smart kiosks, healthcare assistants and traffic flow analysis, along with Edge AI vision and sensing systems. It also supports a full robotics stack including vision processing combined with deterministic motor control for precise vision and manipulation. It's also ideal for education and research in areas like computer vision, generative AI and prototyping at the edge, according to Arduino.

Further reading: Up Next for Arduino After Qualcomm Acquisition: High-Performance Computing

The Courts

Anthropic Sues the Pentagon After Being Labeled a Threat To National Security 137

Anthropic is suing the Department of Defense after the Trump administration labeled the company a "supply chain risk" and canceled its government contracts when Anthropic refused to allow its AI model Claude to be used for domestic surveillance or autonomous weapons. Fortune reports: The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration's actions "unprecedented and unlawful" and claims they threaten to harm "Anthropic irreparably." The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting "hundreds of millions of dollars" at near-term risk.

An Anthropic spokesperson told Fortune: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." "We will continue to pursue every path toward resolution, including dialogue with the government," they added.

AI

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds (theguardian.com) 54

An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online".

In their experiment, the researchers fed anonymous accounts into an AI, and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school, and walking their dog Biscuit through "Dolores Park." In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch "highly personalized" scams.
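
A crude sketch of the matching idea (not the paper's method, which reasons over raw text with an LLM): score candidate identities by overlap of distinctive details extracted from their posts. All names and details below are invented to mirror the article's hypothetical:

```python
# Toy detail-overlap matching: the anonymous account and each candidate
# are reduced to sets of distinctive details, scored by Jaccard similarity.
anon_details = {"struggles at school", "dog named biscuit", "dolores park"}

candidates = {
    "jane_doe": {"dog named biscuit", "dolores park", "likes sourdough"},
    "john_roe": {"cat person", "golden gate park"},
    "sam_poe":  {"dolores park"},
}

def overlap_score(a, b):
    # Jaccard similarity: shared details / all details mentioned by either
    return len(a & b) / len(a | b)

scores = {name: overlap_score(anon_details, d) for name, d in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # jane_doe 0.5
```

What makes the LLM version of this attack dangerous is that the "details" need not be hand-extracted or exact string matches; the model can recognize that two differently worded posts describe the same rare fact.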

The Courts

Judges Find AI Doesn't Have Human Intelligence in Two New Court Cases (yahoo.com) 79

Within the last month, two U.S. judges have effectively declared AI bots are not human, writes Los Angeles Times columnist Michael Hiltzik: On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can't be copyrighted... [Judge Patricia A. Millett] cited longstanding regulations of the Copyright Office requiring that "for a work to be copyrightable, it must owe its origin to a human being"... She rejected Thaler's argument, as had the federal trial judge who first heard the case, that the Copyright Office's insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed...

[Another AI-related case] involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded innocent and was released on $25-million bail. The case is pending.... Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner's lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn't be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers' notes and other similar material.) That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude's responses with his lawyers.

[Federal Judge Jed S.] Rakoff made short work of this argument. First, he ruled, the AI documents weren't communications between Heppner and his attorneys, since Claude isn't an attorney... Second, he wrote, the exchanges between Heppner and Claude weren't confidential. In its terms of use, Anthropic claims the right to collect both a user's queries and Claude's responses, use them to "train" Claude, and disclose them to others. Finally, Heppner wasn't asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to "consult with a qualified attorney."

The columnist agrees AI-generated results shouldn't receive the same protections as human-generated material. "The AI bots are machines, and portraying them as though they're thinking creatures like artists or attorneys doesn't change that, and shouldn't."

He also seems to think their output is at best second-hand regurgitation. "Everything an AI bot spews out is, at a fundamental level, the product of human creativity."

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded on Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database, including bot API keys and potentially private DMs, was also compromised."

Robotics

OpenAI's Former Research Chief Raises $70M to Automate Manufacturing With AI (msn.com) 22

"OpenAI's former chief research officer is raising $70 million for a new startup building an AI and software platform to automate manufacturing," reports the Wall Street Journal, citing "people familiar with the matter."

"Arda, the new startup co-founded by Bob McGrew, is raising at a valuation of $700 million, according to people familiar with the matter...." Arda is developing an AI and software platform, including a video model that can analyze footage from factory floors and use it to train robots to run factories autonomously, the people said. The company's software will coordinate machines and humans across the entire production process, from product design and manufacturability to finished goods coming off the line.

The startup's goal is to make manufacturing cost-effective in the Western part of the globe, reducing reliance on China as geopolitical and national security concerns rise... At OpenAI, McGrew was tasked with training robots to do tasks in the physical world, according to his LinkedIn. McGrew was also one of the earliest employees at Palantir.
