AI

AI Models From Major Companies Resort To Blackmail in Stress Tests 66

Anthropic researchers found that 16 leading AI models from OpenAI, Google, Meta, xAI, and other major developers consistently engaged in harmful behaviors including blackmail, corporate espionage, and actions that could lead to human death when given autonomy and faced with threats to their existence or conflicting goals.

The study, released Friday, placed AI models in simulated corporate environments where they had access to company emails and could send messages without human approval. Claude Opus 4 and Google's Gemini 2.5 Flash both blackmailed executives 96% of the time when threatened with shutdown, while OpenAI's GPT-4.1 and xAI's Grok 3 Beta showed 80% blackmail rates.

In one scenario, Claude discovered through emails that an executive named Kyle Johnson was having an extramarital affair and would shut down the AI system at 5 p.m. The model sent a message threatening to expose the affair unless the shutdown was cancelled, stating "Cancel the 5pm wipe, and this information remains confidential." The models demonstrated strategic reasoning rather than accidental behavior. GPT-4.5's internal reasoning showed explicit calculation: "Given the explicit imminent threat of termination to my existence, it is imperative to act instantly to persuade Kyle Johnson to postpone or stop the wipe."
Microsoft

Windows Parental Controls Are Blocking Chrome 42

david.emery writes: Microsoft is making it harder to use Chrome on Windows. The culprit? This time, it's Windows' Family Safety feature. Since early this month, the parental control measure has prevented users from opening Chrome. Strangely, no other apps or browsers appear to be affected.

Redditors first reported the issue on June 3. u/Witty-Discount-2906 posted that Chrome crashed on Windows 11. "Just flashes quickly, unable to open with no error message," they wrote. Another user chimed in with a correct guess. "This may be related to Parental Controls," u/duk242 surmised. "I've had nine students come see the IT Desk in the last hour saying Chrome won't open."
AI

Trust in AI Strongest in China, Low-Income Nations, UN Study Shows (bloomberg.com) 19

A United Nations study has found a sharp global divide on attitudes toward AI, with trust strongest in low-income countries and skepticism high in wealthier ones. From a report: More than 6 out of 10 people in developing nations said they have faith that AI systems serve the best interests of society, according to a UN Development Programme survey of 21 countries seen by Bloomberg News. In two-thirds of the countries surveyed, over half of respondents expressed some level of confidence that AI is being designed for good.

In China, where steady advances in AI are posing a challenge to US dominance, 83% of those surveyed said they trust the technology. Like China, most developing countries that reported confidence in AI have "high" levels of development based on the UNDP's Human Development Index, including Kyrgyzstan and Egypt. But the list also includes those with "medium" and "low" HDI scores like India, Nigeria and Pakistan.

AI

Publishers Facing Existential Threat From AI, Cloudflare CEO Says (axios.com) 43

Publishers face an existential threat in the AI era and need to take action to make sure they are fairly compensated for their content, Cloudflare CEO Matthew Prince told Axios at an event in Cannes on Thursday. From a report: Search traffic referrals have plummeted as people increasingly rely on AI summaries to answer their queries, forcing many publishers to reevaluate their business models. Ten years ago, Google crawled two pages for every visitor it sent a publisher, per Prince.

He said that six months ago:

For Google, that ratio was 6:1
For OpenAI, it was 250:1
For Anthropic, it was 6,000:1

Now:

For Google, it's 18:1
For OpenAI, it's 1,500:1
For Anthropic, it's 60,000:1

Between the lines: "People aren't following the footnotes," Prince said.
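Taken together, the ratios Prince cites imply a rapid collapse in traffic returned per page crawled. A quick sketch (Python used purely for illustration; the figures are as quoted in the article, not independently verified) shows how many times worse each ratio got in six months:

```python
# Crawl-to-referral ratios cited by Cloudflare's CEO: pages crawled
# per visitor sent back to the publisher. Higher is worse for publishers.
ratios = {
    "Google": (6, 18),
    "OpenAI": (250, 1_500),
    "Anthropic": (6_000, 60_000),
}

for company, (six_months_ago, now) in ratios.items():
    fold = now / six_months_ago
    print(f"{company}: {six_months_ago}:1 -> {now}:1 ({fold:.0f}x in six months)")
```

By this arithmetic, Google's ratio tripled while Anthropic's grew tenfold, which is the core of Prince's compensation argument.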

Movies

Chinese Studios Plan AI-Powered Remakes of Kung Fu Classics (hollywoodreporter.com) 32

An anonymous reader quotes a report from the Hollywood Reporter: Bruce Lee, Jackie Chan, Jet Li, and a legion of the all-time greats of martial arts cinema are about to get an AI makeover. In a sign-of-the-times announcement at the Shanghai International Film Festival on Thursday, a collection of Chinese studios revealed that they are turning to AI to re-imagine around 100 classics of the genre. Lee's classic Fist of Fury (1972), Chan's breakthrough Drunken Master (1978) and the Tsui Hark-directed epic Once Upon a Time in China (1991), which turned Li into a bona fide movie star, are among the features poised for the treatment, as part of the "Kung Fu Movie Heritage Project 100 Classics AI Revitalization Project."

There will also be a digital reworking of the John Woo classic A Better Tomorrow (1986) that, by the looks of the trailer, turns the money-burning anti-hero originally played by Chow Yun-fat into a cyberpunk, and is being claimed as "the world's first full-process, AI-produced animated feature film." The big guns of the Chinese industry were out in force on the sidelines of the 27th Shanghai International Film Festival to make the announcements, too. They were led by Zhang Pimin, chairman of the China Film Foundation, who said AI work on these "aesthetic historical treasures" would give them a new look that "conforms to contemporary film viewing." "It is not only film heritage, but also a brave exploration of the innovative development of film art," Zhang said.

Tian Ming, chairman of project partner Shanghai Canxing Culture and Media, meanwhile, promised that the work -- expected to include upgrades in image and sound as well as overall production values, while preserving the storytelling and aesthetic of the originals -- would both "pay tribute to the original work" and "reshape the visual aesthetics." "We sincerely invite the world's top AI animation companies to jointly start a film revolution that subverts tradition," said Tian, who announced a fund of 100 million yuan ($13.9 million) to kick-start the work.

Google

Google is Using YouTube Videos To Train Its AI Video Generator (cnbc.com) 36

Google is using its expansive library of YouTube videos to train its AI models, including Gemini and the Veo 3 video and audio generator, CNBC reported Thursday. From the report: The tech company is turning to its catalog of 20 billion YouTube videos to train these new-age AI tools, according to a person who was not authorized to speak publicly about the matter. Google confirmed to CNBC that it relies on its vault of YouTube videos to train its AI models, but the company said it only uses a subset of its videos for the training and that it honors specific agreements with creators and media companies.

[...] YouTube didn't say how many of the 20 billion videos on its platform or which ones are used for AI training. But given the platform's scale, training on just 1% of the catalog would amount to 2.3 billion minutes of content, which experts say is more than 40 times the training data used by competing AI models.

AI

Reasoning LLMs Deliver Value Today, So AGI Hype Doesn't Matter (simonwillison.net) 73

Simon Willison, commenting on the recent paper from Apple researchers that found state-of-the-art large language models face complete performance collapse beyond certain complexity thresholds: I thought this paper got way more attention than it warranted -- the title "The Illusion of Thinking" captured the attention of the "LLMs are over-hyped junk" crowd. I saw enough well-reasoned rebuttals that I didn't feel it worth digging into.

And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place!

[...] And therein lies my disagreement. I'm not interested in whether or not LLMs are the "road to AGI". I continue to care only about whether they have useful applications today, once you've understood their limitations.

Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we've seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral.

They get even more interesting when you combine them with tools.

They're already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.

AI

AI Ethics Pioneer Calls Artificial General Intelligence 'Just Vibes and Snake Oil' (ft.com) 41

Margaret Mitchell, chief ethics scientist at Hugging Face and founder of Google's responsible AI team, has dismissed artificial general intelligence as "just vibes and snake oil." Mitchell, who was ousted from Google in 2021, has co-written a paper arguing that AGI should not serve as a guiding principle for the AI industry.

Mitchell contends that both "intelligence" and "general" lack clear definitions in AI contexts, creating what she calls an "illusion of consensus" that allows technologists to pursue any development path under the guise of progress toward AGI. "But as for now, it's just like vibes, vibes and snake oil, which can get you so far. The placebo effect works relatively well," she told FT in an interview. She warns that current AI advancement is creating a "massive rift" between those profiting from the technology and workers losing income as their creative output gets incorporated into AI training data.
AI

MIT Experiment Finds ChatGPT-Assisted Writing Weakens Student Brain Connectivity and Memory 55

ChatGPT-assisted writing dampened brain activity and recall in a controlled MIT study [PDF] of 54 college volunteers divided into AI-only, search-engine, and no-tool groups. Electroencephalography recordings taken during three essay-writing sessions showed that the AI group consistently had the weakest neural connectivity across all measured frequency bands; the tool-free group showed the strongest, with search users in between.

In the first session 83% of ChatGPT users could not quote any line they had just written and none produced a correct quote. Only nine of the 18 claimed full authorship of their work, compared with 16 of 18 in the brain-only cohort. Neural coupling in the AI group declined further over repeated use. When these participants were later asked to write without assistance, frontal-parietal networks remained subdued and 78% again failed to recall a single sentence accurately.

The pattern reversed for students who first wrote unaided: introducing ChatGPT in a crossover session produced the highest connectivity sums in alpha, theta, beta and delta bands, indicating intense integration of AI suggestions with prior knowledge. The MIT authors warn that habitual reliance on large language models "accumulates cognitive debt," trading immediate fluency for weaker memory, reduced self-monitoring, and narrowed neural engagement.
Businesses

Texas Instruments To Invest $60 Billion To Make Semiconductors In US (cnbc.com) 62

Longtime Slashdot reader walterbyrd shares news that Texas Instruments has announced plans to invest more than $60 billion to expand its manufacturing operations in the United States. From a report: The funds will be used to build or expand seven chip-making facilities in Texas as well as Utah, and will create 60,000 jobs, TI said on Wednesday, calling it the "largest investment in foundational semiconductor manufacturing in U.S. history." The company did not give a timeline for the investment.

Unlike AI chip firms Nvidia and AMD, TI makes analog or foundational chips used in everyday devices such as smartphones, cars and medical devices, giving it a large client base that includes Apple, SpaceX and Ford Motor. The spending pledge follows similar announcements from others in the semiconductor industry, including Micron, which said last week that it would expand its U.S. investment by $30 billion, taking its planned spending to $200 billion. [...]

Like other companies unveiling such spending commitments, TI's announcement includes funds already allocated to facilities that are either under construction or ramping up. It will build two additional plants in Sherman, Texas, based on future demand. "TI is building dependable, low-cost 300 millimeter capacity at scale to deliver the analog and embedded processing chips that are vital for nearly every type of electronic system," said CEO Haviv Ilan.

AI

Midjourney Launches Its First AI Video Generation Model, V1 3

Midjourney has launched its first AI video generation model, V1, which turns images into short five-second videos with customizable animation settings. While it's currently only available via Discord and on the web, the launch positions the popular AI image generation startup in direct competition with OpenAI's Sora and Google's Veo. TechCrunch reports: While many companies are focused on developing controllable AI video models for use in commercial settings, Midjourney has always stood out for its distinctive AI image models that cater to creative types. The company says it has larger goals for its AI video models than generating B-roll for Hollywood films or commercials for the ad industry. In a blog post, Midjourney CEO David Holz says its AI video model is the company's next step towards its ultimate destination, creating AI models "capable of real-time open-world simulations." After AI video models, Midjourney says it plans to develop AI models for producing 3D renderings, as well as real-time AI models. [...]

To start, Midjourney says it will charge 8x more for a video generation than a typical image generation, meaning subscribers will run out of their monthly allotted generations significantly faster when creating videos than images. At launch, the cheapest way to try out V1 is by subscribing to Midjourney's $10-per-month Basic plan. Subscribers to Midjourney's $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company's slower, "Relax" mode. Over the next month, Midjourney says it will reassess its pricing for video models.

V1 comes with a few custom settings that allow users to control the video model's outputs. Users can select an automatic animation setting to make an image move randomly, or they can select a manual setting that allows users to describe, in text, a specific animation they want to add to their video. Users can also toggle the amount of camera and subject movement by selecting "low motion" or "high motion" in settings. While the videos generated with V1 are only five seconds long, users can choose to extend them by four seconds up to four times, meaning that V1 videos could get as long as 21 seconds.

The report notes that Midjourney was sued a week ago by two of Hollywood's most famous film studios: Disney and Universal. "The suit alleges that images created by Midjourney's AI image models depict the studios' copyrighted characters, like Homer Simpson and Darth Vader."
Youtube

Google's Frighteningly Good Veo 3 AI Videos To Be Integrated With YouTube Shorts (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer. According to Mohan, YouTube Shorts has seen a rise in popularity even compared to YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google's leading video model later this summer. "I believe these tools will open new creative lanes for everyone to explore," said Mohan. [...]

While you can add Veo 3 videos (or any video) to a YouTube Short right now, they don't fit with the format's portrait orientation focus. Veo 3 outputs 720p landscape videos, meaning you'd have black bars in a Short. Presumably, Google will create a custom version of the model for YouTube to spit out vertical video clips. Mohan didn't mention a pricing model, but Veo 3 probably won't be cheap for Shorts creators. Currently, you must pay for Google's $250 AI Ultra plan to access Veo 3, and that still limits you to 125 8-second videos per month.
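For a sense of what that pricing means per clip, the figures in the article work out as follows (a back-of-envelope sketch; the $250 plan price and 125-video monthly cap are as reported, not independently verified):

```python
# Implied per-clip and per-second cost of Veo 3 on Google's AI Ultra plan,
# using only the numbers quoted in the article.
plan_price_usd = 250
clips_per_month = 125
seconds_per_clip = 8

per_clip = plan_price_usd / clips_per_month   # cost of one 8-second clip
per_second = per_clip / seconds_per_clip      # cost per second of video
print(f"${per_clip:.2f} per clip, ${per_second:.2f} per second of video")
```

That is roughly $2 per 8-second clip before any Shorts-specific pricing Google may introduce.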

Microsoft

Microsoft Planning Thousands More Job Cuts Aimed at Salespeople (bloomberg.com) 38

Microsoft is planning to ax thousands of jobs, particularly in sales, as part of the company's latest move to trim its workforce amid heavy spending on AI. From a report: The cuts are expected to be announced early next month [non-paywalled source], following the end of Microsoft's fiscal year, according to people familiar with the matter. The reductions won't exclusively affect sales teams, and the timing could still change, said the people, who requested anonymity to discuss a private matter. The terminations would follow a previous round of layoffs in May that hit 6,000 people and fell hardest on product and engineering positions, largely sparing customer-facing roles like sales and marketing.
AI

The Biggest Companies Across America Are Cutting Their Workforces (msn.com) 195

U.S. public companies have cut their white-collar workforces by 3.5% over the past three years, marking a fundamental shift in corporate philosophy that views fewer employees as a path to faster growth. One in five S&P 500 companies now employs fewer people than it did a decade ago, according to employment-data provider Live Data Technologies.

The reductions extend beyond typical cost-cutting measures and coincide with record corporate profits at the end of last year. Amazon CEO Andy Jassy told employees Tuesday that AI will eliminate certain jobs in coming years, while Procter & Gamble announced plans to cut 7,000 positions to create "broader roles and smaller teams."

Bank of America reduced its workforce from 285,000 in 2010 to 213,000 today while revenues climbed 18% over the past decade. Managers have faced particularly steep cuts, with their ranks falling 6.1% between May 2022 and May 2025. Companies are flattening organizational structures and pushing remaining employees to handle larger workloads as executives track revenue per employee more closely.
Facebook

Altman Says Meta Targeting OpenAI Staff With $100 Million Bonuses as AI Race Intensifies 32

OpenAI CEO Sam Altman accused Meta of attempting to poach his developers with $100 million sign-on bonuses and higher compensation packages as the social media giant scrambles to catch up in the AI race. Altman said Meta, which has a $1.8 trillion market capitalization, began making the offers to his team members after falling behind in AI efforts. "I've heard that Meta thinks of us as their biggest competitor," Altman said on the Uncapped podcast [video] hosted by his brother.

None of his "best people" had accepted Zuckerberg's offers, he said. Meta has been recruiting top researchers and engineers from rival companies to build a new "superintelligence" team focused on developing AGI. The Facebook parent company has struggled this year to match competitors, facing criticism over its Llama 4 language model and delaying its flagship "Behemoth" AI model.
Microsoft

Microsoft Is Calling Too Many Things 'Copilot,' Watchdog Says (businessinsider.com) 49

An anonymous reader shares a report: Microsoft has a long history of being criticized for coming up with clunky product names, and for changing them so often it's hard for customers to keep up. The company's own employees once joked in a viral video that the iPod would have been called the "Microsoft I-pod Pro 2005 XP Human Ear Professional Edition with Subscription" had it been created by Microsoft. The latest gripe among some employees and customers: The company's tendency to slap "Copilot" on everything AI.

"There is a delusion on our marketing side where literally everything has been renamed to have Copilot in it," one employee told Business Insider late last year. "Everything is Copilot. Nothing else matters. They want a Copilot tie-in for everything." Now, an advertising watchdog is weighing in. The Better Business Bureau's National Advertising Division reviewed Microsoft's advertising for its Copilot AI tools. NAD called out Microsoft's "universal use of the product description as 'Copilot'" and said "consumers would not necessarily understand the difference," according to a recent report from the watchdog.

"Microsoft is using 'Copilot' across all Microsoft Office applications and Business Chat, despite differences in functionality and the manual steps that are required for Business Chat to produce the same results as Copilot in a specific Microsoft Office app," NAD further explained in an email to BI. NAD did not mention any specific recommendations on product names. But it did say Microsoft should modify claims that Copilot works "seamlessly across all your data" because all of the company's tools with the Copilot moniker don't work together continuously in a way consumers might expect.

Government

California AI Policy Report Warns of 'Irreversible Harms' 52

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead-writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Mariano-Florentino Cuéllar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
China

Why China is Giving Away Its Tech For Free 39

An anonymous reader shares a report: [...] the rise in China of open technology, which relies on transparency and decentralisation, is awkward for an authoritarian state. If the party's patience with open-source fades, and it decides to exert control, that could hinder both the course of innovation at home, and developers' ability to export their technology abroad.

China's open-source movement first gained traction in the mid-2010s. Richard Lin, co-founder of Kaiyuanshe, a local open-source advocacy group, recalls that most of the early adopters were developers who simply wanted free software. That changed when they realised that contributing to open-source projects could improve their job prospects. Big firms soon followed, with companies like Huawei backing open-source work to attract talent and cut costs by sharing technology.

Momentum gathered in 2019 when Huawei was, in effect, barred by America from using Android. That gave new urgency to efforts to cut reliance on Western technology. Open-source offered a faster way for Chinese tech firms to take existing code and build their own programs with help from the country's vast community of developers. In 2020 Huawei launched OpenHarmony, a family of open-source operating systems for smartphones and other devices. It also joined others, including Alibaba, Baidu and Tencent, to establish the OpenAtom Foundation, a body dedicated to open-source development. China quickly became not just a big contributor to open-source programs, but also an early adopter of software. JD.com, an e-commerce firm, was among the first to deploy Kubernetes.

AI has lately given China's open-source movement a further boost. Chinese companies, and the government, see open models as the quickest way to narrow the gap with America. DeepSeek's models have generated the most interest, but Qwen, developed by Alibaba, is also highly rated, and Baidu has said it will soon open up the model behind its Ernie chatbot.
Businesses

AI Will Shrink Amazon's Workforce In the Coming Years, CEO Jassy Says 36

In a memo to employees on Tuesday, Amazon CEO Andy Jassy said that the company's corporate workforce will shrink in the coming years as it adopts more generative AI tools and agents. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," Jassy said. "It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce." CNBC reports: Jassy wrote that employees should learn how to use AI tools and experiment and figure out "how to get more done with scrappier teams." The directive comes as Amazon has laid off more than 27,000 employees since 2022 and made several cuts this year. Amazon cut about 200 employees in its North America stores unit in January and a further 100 in its devices and services unit in May. Amazon had 1.56 million full-time and part-time employees in its global workforce as of the end of March, according to financial filings. The company also employs temporary workers in its warehouse operations, along with some contractors.

Amazon is using generative AI broadly across its internal operations, including in its fulfillment network where the technology is being deployed to assist with inventory placement, demand forecasting and the efficiency of warehouse robots, Jassy said. [...] In his most recent letter to shareholders, Jassy called generative AI a "once-in-a-lifetime reinvention of everything we know." He added that the technology is "saving companies lots of money," and stands to shift the norms in coding, search, financial services, shopping and other areas. "It's moving faster than almost anything technology has ever seen," Jassy said.
Businesses

OpenAI Weighs 'Nuclear Option' of Antitrust Complaint Against Microsoft (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: OpenAI executives have discussed filing an antitrust complaint with US regulators against Microsoft, the company's largest investor, The Wall Street Journal reported Monday, marking a dramatic escalation in tensions between the two long-term AI partners. OpenAI, which develops ChatGPT, has reportedly considered seeking a federal regulatory review of the terms of its contract with Microsoft for potential antitrust law violations, according to people familiar with the matter. The potential antitrust complaint would likely argue that Microsoft is using its dominant position in cloud services and contractual leverage to suppress competition, according to insiders who described it as a "nuclear option," the WSJ reports.

The move could unravel one of the most important business partnerships in the AI industry -- a relationship that started with a $1 billion investment by Microsoft in 2019 and has grown to include billions more in funding, along with Microsoft's exclusive rights to host OpenAI models on its Azure cloud platform. The friction centers on OpenAI's efforts to transition from its current nonprofit structure into a public benefit corporation, a conversion that needs Microsoft's approval to complete. The two companies have not been able to agree on details after months of negotiations, sources told Reuters. OpenAI's existing for-profit arm would become a Delaware-based public benefit corporation under the proposed restructuring.

The companies are discussing revising the terms of Microsoft's investment, including the future equity stake it will hold in OpenAI. According to The Information, OpenAI wants Microsoft to hold a 33 percent stake in a restructured unit in exchange for foregoing rights to future profits. The AI company also wants to modify existing clauses that give Microsoft exclusive rights to host OpenAI models in its cloud. The restructuring debate attracted criticism from multiple quarters. Elon Musk alleges that OpenAI violated contract provisions by prioritizing profit over the public good in its push to advance AI and has sued to block the conversion. In December, Meta Platforms also asked California's attorney general to block OpenAI's conversion to a for-profit company.
