AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need," he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates, who make up half the people it hires every year, "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications of AI, such as the ethics of using creators' copyrighted work without permission, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

United Kingdom

Could UK Lawyers Face Life in Prison for Citing Fake AI-Generated Cases? (apnews.com) 45

The Associated Press reports that on Friday, U.K. High Court justice Victoria Sharp and fellow judge Jeremy Johnson issued a ruling addressing the risk of false information being submitted to the court. Concerns had been raised by lower-court judges about "suspected use by lawyers of generative AI tools to produce written legal arguments or witness statements which are not then checked." In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around."

In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.

Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute EleutherAI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
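
To make the license problem concrete, here is a minimal sketch in Python of the kind of automated filtering pass such a pipeline might run over document metadata. The license identifiers and record layout are hypothetical, and, as Biderman notes, the real effort still required manual annotation and human checking on top of anything automated:

    # Hypothetical allow-list of open licenses (SPDX-style identifiers).
    OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "MIT", "public-domain"}

    def filter_open_documents(records):
        """Split records into clearly open ones and ones needing human review."""
        kept, needs_review = [], []
        for rec in records:
            license_id = (rec.get("license") or "").strip()
            if license_id in OPEN_LICENSES:
                kept.append(rec)
            else:
                # Missing or unrecognized licenses go to a human reviewer
                # rather than being silently included in the training set.
                needs_review.append(rec)
        return kept, needs_review

    docs = [
        {"url": "example.org/a", "text": "...", "license": "CC-BY-4.0"},
        {"url": "example.org/b", "text": "...", "license": ""},
    ]
    kept, needs_review = filter_open_documents(docs)
    print(len(kept), len(needs_review))  # 1 1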

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English-language books in the Library of Congress, which is nearly double the size of the popular books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Even so, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least return to the norms of 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

AI

Anthropic's AI is Writing Its Own Blog - Oh Wait. No It's Not (techcrunch.com) 2

"Everyone has a blog these days, even Claude," Anthropic wrote this week on a page titled "Claude Explains."

"Welcome to the small corner of the Anthropic universe where Claude is writing on every topic under the sun".

Not any more. After blog posts titled "Improve code maintainability with Claude" and "Rapidly develop web applications with Claude" — Anthropic suddenly removed the whole page sometime after Wednesday. But TechCrunch explains the whole thing was always less than it seemed, and "One might be easily misled into thinking that Claude is responsible for the blog's copy end-to-end." According to a spokesperson, the blog is overseen by Anthropic's "subject matter experts and editorial teams," who "enhance" Claude's drafts with "insights, practical examples, and [...] contextual knowledge."

"This isn't just vanilla Claude output — the editorial process requires human expertise and goes through iterations," the spokesperson said. "From a technical perspective, Claude Explains shows a collaborative approach where Claude [creates] educational content, and our team reviews, refines, and enhances it...." Anthropic says it sees Claude Explains as a "demonstration of how human expertise and AI capabilities can work together," starting with educational resources. "Claude Explains is an early example of how teams can use AI to augment their work and provide greater value to their users," the spokesperson said. "Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish [...] We plan to cover topics ranging from creative writing to data analysis to business strategy...."

The Anthropic spokesperson noted that the company is still hiring across marketing, content, and editorial, and "many other fields that involve writing," despite the company's dip into AI-powered blog drafting. Take that for what you will.

IOS

What To Expect From Apple's WWDC (arstechnica.com) 26

Apple's Worldwide Developers Conference (WWDC) 2025 kicks off next week, June 9th, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, since the revamped Siri is not yet ready, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like Google's Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath. The event is expected to focus primarily on software developments. Apple may even drop support for several older Intel-based Macs in macOS 26, including the 2018 MacBook Pro and the 2019 iMac, as it continues its transition toward exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)

AI

Trump AI Czar Sacks on Universal Basic Income: 'It's Not Going To Happen' (businessinsider.com) 361

David Sacks, President Trump's AI policy advisor, has dismissed the prospect of implementing a universal basic income program, declaring "it's not going to happen" during his tenure. He said: "The future of AI has become a Rorschach test where everyone sees what they want. The Left envisions a post-economic order in which people stop working and instead receive government benefits. In other words, everyone on welfare. This is their fantasy; it's not going to happen."

Businesses

Klarna CEO Says Company Will Use Humans To Offer VIP Customer Service (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: "My wife taught me something," Klarna CEO Sebastian Siemiatkowski told the crowd at London SXSW. He was addressing the headlines about the company looking to hire human workers after previously saying Klarna used artificial intelligence to do the work of 700 workers. "Two things can be true at the same time," he said. Siemiatkowski said it's true that the company looked to stop hiring human workers a few years ago and rolled out AI agents that have helped reduce the cost of customer support and increase the company's revenue per employee. The company had 5,500 workers two years ago, and that number now stands at around 3,000, he said, adding that as the company's salary costs have gone down, Klarna now seeks to reinvest a majority of that money into employee cash and equity compensation.

But, he insisted, this doesn't mean there isn't an opportunity for humans to work at his company. "We think offering human customer service is always going to be a VIP thing," he said, comparing it to how people pay more for clothing stitched by hand rather than machines. "So we think that two things can be done at the same time. We can use AI to automatically take away boring jobs, things that are manual work, but we are also going to promise our customers to have a human connection."

United Kingdom

UK Tech Job Openings Climb 21% To Pre-Pandemic Highs (theregister.com) 17

UK tech job openings have surged 21% to pre-pandemic levels, driven largely by a 200% spike in demand for AI skills. London accounted for 80% of the AI-related postings. The Register reports: Accenture collected data from LinkedIn in the first and second weeks of February 2025, and supplemented the results with a survey of more than 4,000 respondents conducted by research firm YouGov between July and August 2024. The research found a 53 percent annual increase in those describing themselves as having tech skills, amounting to 1.69 million people reporting skills in disciplines including cyber, data, and robotics. [...]

The research found that London-based companies said they would allocate a fifth of their tech budgets to AI this year, compared with 13 percent for companies based in North East England, Scotland, and Wales. Growth in revenue per employee in AI-exposed industries increased during the period when LLMs emerged, from 7 percent annually between 2018 and 2022 to 27 percent between 2018 and 2024. Meanwhile, growth in the same measure fell slightly in industries less affected by AI, such as mining and hospitality, the researchers said.

Intel

Intel: New Products Must Deliver 50% Gross Profit To Get the Green Light (tomshardware.com) 44

Intel has implemented a strict new policy requiring all new projects to demonstrate at least a 50% gross margin to move forward. CEO Lip-Bu Tan explained Intel's new risk-averse policy as "something that we probably should have had before," later clarifying that the number is a figure the company is aspiring toward internally. Tom's Hardware reports: Tan is reportedly "laser focused on the fact that we need to get our gross margins back up above 50%." To accomplish this, Tan is also said to be investigating and potentially cancelling or changing unprofitable deals with other companies. Intel's margins have slipped to new lows for the company in recent months. MacroTrends reports Intel's trailing 12 months gross margin for Q1 2025 was as low as 31.67%. Intel's gross margins had hovered around the 60% mark for the ten years leading up to the COVID-19 pandemic, falling beneath 50% in Q2 2022 and continuing to steadily fall ever since.
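
For context, gross margin is gross profit (revenue minus cost of goods sold) as a share of revenue. A quick illustrative calculation in Python; the dollar figures are made up for the example and are not Intel's actual financials:

    # Gross margin = (revenue - cost of goods sold) / revenue.
    revenue = 10.00e9             # hypothetical $10.0B revenue
    cogs = 6.83e9                 # hypothetical $6.83B cost of goods sold
    gross_margin = (revenue - cogs) / revenue
    print(f"{gross_margin:.1%}")  # 31.7%, near Intel's reported 31.67%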

Intel Products CEO Michelle Johnston Holthaus predicts a "tug-of-war" will ensue within Intel in the coming months as engineers and executives find themselves caught between a rock and a hard place. "We need to be building products that... fit the right competitive landscape and requirements of our customers, but also have the right cost structure in place. It really requires us to do both." [...] Tan is also quoted as wanting to turn Intel into an "engineering-focused company" again under his leadership. To reach this, Tan has committed to investing in recruiting and retaining top talent; "I believe Intel has lost some of this talent over the years; I want to create a culture of innovation empowerment." Maintaining a culture of empowering innovation and top talent seems, on its face, at odds with layoffs and a freeze on projects not projected to deliver 50% gross margins, but Tan seemingly has Intel investors on his side in these pursuits.

AI

Anthropic Co-founder on Cutting Access To Windsurf: 'It Would Be Odd For Us To Sell Claude To OpenAI' (techcrunch.com) 5

Anthropic cut AI coding assistant Windsurf's direct access to its Claude models after media reported that rival OpenAI plans to acquire the startup for $3 billion. Anthropic co-founder Jared Kaplan told TechCrunch that "it would be odd for us to be selling Claude to OpenAI," explaining the decision to cut access to Claude 3.5 Sonnet and Claude 3.7 Sonnet models.

AI

Anthropic CEO Warns 'All Bets Are Off' in 10 Years, Opposes AI Regulation Moratorium (nytimes.com) 50

Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state AI regulation currently under consideration by the Senate, arguing instead for federal transparency standards in a New York Times opinion piece published Thursday. Amodei said Anthropic's latest AI model demonstrated threatening behavior during experimental testing, including scenarios where the system threatened to expose personal information to prevent being shut down. He writes: "But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop."

The disclosure comes as similar concerning behaviors have emerged from other major AI developers -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown, while Google acknowledged its Gemini model approaches capabilities that could enable cyberattacks. Rather than blocking state oversight entirely, Amodei proposed requiring frontier AI developers to publicly disclose their testing policies and risk mitigation strategies on company websites, codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily.

China

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China (wsj.com) 19

OpenAI said it disrupted several attempts [non-paywalled source] from users in China to leverage its AI models for cyber threats and covert influence operations, underscoring the security challenges AI poses as the technology becomes more powerful. From a report: The Microsoft-backed company on Thursday published its latest report on disrupting malicious uses of AI, saying its investigative teams continued to uncover and prevent such activities in the three months since Feb. 21.

While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin. In one such case, the company said it banned ChatGPT accounts it claimed were using OpenAI's models to generate social media posts for a covert influence operation. The company said a user stated in a prompt that they worked for China's propaganda department, though it cautioned it didn't have independent proof to verify the claim.

Programming

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job (businessinsider.com) 79

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said at a fireside chat in May at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that."

In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Businesses

Data Center Boom May End Up Being 'Irrational,' Investor Warns (axios.com) 28

A prominent venture capitalist has warned that the technology industry's massive buildout of AI data centers risks becoming "irrational" and could end in disaster, particularly as companies pursue small nuclear reactors to power the facilities. Josh Wolfe, co-founder and partner at Lux Capital, compared the current infrastructure expansion to previous market bubbles in fiber-optic networking and cloud computing. While individual actions by hyperscale companies to build data center infrastructure remain rational, Wolfe said the collective effort "becomes irrational" and "will not necessarily persist."

The warning comes as Big Tech companies pour tens of billions into data centers and energy sources, with Meta announcing just this week a deal to purchase power from an operating nuclear station in Illinois that was scheduled to retire in 2027. Wolfe said he is worried that speculative capital is flowing into small modular reactors based on presumed energy demands from data centers. "I think that that whole thing is going to end in disaster, mostly because as cliched as it is, history doesn't repeat. It rhymes," he said.

AI

DreamWorks Co-Founder Katzenberg Likens AI To CGI Revolution 50

At the Axios AI+ Summit, DreamWorks co-founder Jeffrey Katzenberg compared the rise of AI in entertainment to the CGI revolution of the 1990s, emphasizing that those who adapt to the technology will thrive. He argued AI won't replace people -- but will replace those who don't embrace it. Axios reports: Katzenberg, a co-founder of DreamWorks and one-time Disney executive whose work includes films like "Shrek," reflected on the "huge" resistance to making "Toy Story" with the then-novel CGI technology. The people most afraid were the ones who would be disrupted, he said. "Everything that you are hearing today are the issues that we had to deal with," he said.

Katzenberg continued, "Yes, there was disruption, but animation's never, ever been bigger than it is today." The bottom line: "AI isn't going to replace people, it's going to replace people that don't use AI," he said. "The exact same analogy there ... is that the talent that went and learned how to use the computer as a new pencil and a new paint brush ... they thrived," he said. Katzenberg added, "if change is uncomfortable, irrelevance is going to be a whole lot harder."

Microsoft

Microsoft's LinkedIn Chief Is Now Running Office (theverge.com) 16

In an internal memo, Microsoft CEO Satya Nadella announced that LinkedIn CEO Ryan Roslansky will also lead the Office, Outlook, and Microsoft 365 Copilot teams as part of an internal AI reorganization. Roslansky will report to Rajesh Jha for Office while continuing to run LinkedIn independently under Nadella. The Verge reports: "LinkedIn remains a top priority and will continue to operate as an independent subsidiary," says Nadella in his memo. "This move brings us closer to the original vision we laid out nine years ago with the LinkedIn acquisition: connecting the world's economic graph with the Microsoft Graph. And I look forward to how Ryan will bring his product ethos and leadership to experiences and devices." Sumit Chauhan and Gaurav Sareen, senior executives in the Office and Microsoft 365 teams, will remain on the experiences and devices leadership team, but along with their teams they'll join Jon Friedman and the UX team to work directly for Roslansky.

Charles Lamanna and his BIC (business and industry Copilot) team are also moving to report to Rajesh Jha as part of an AI shakeup. "Charles has consistently kept us focused on what it takes to win in business applications and the agent layer, and I look forward to the impact he and his team will have in experiences and devices," says Nadella. In a separate memo, Lamanna also announced that starting July 2nd Lili Cheng will take on the newly expanded role of CTO of the BIC team. Dan Lewis is also taking on the role of corporate vice president of Copilot Studio. "We are poised to reinvent every role and every business process, and start to reimagine organizations as composed of people and agents," says Lamanna in an internal memo.

Both the Lamanna and Roslansky moves are very interesting, as the business Copilot team and the Microsoft 365 Copilot team have been in separate parts of Microsoft's sprawling AI and cloud organization up until this point. This has led to a situation where nobody really owns Copilot all up inside Microsoft, but now the separate leaders of Microsoft 365 Copilot and the business Copilot teams both report to Rajesh Jha. The consumer Copilot will still be run by Microsoft AI CEO Mustafa Suleyman.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs — including deleted chats and sensitive chats logged through its API business offering — after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is as yet no evidence beyond speculation supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"

The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com) 44

Reddit is suing AI startup Anthropic for what it's calling a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authorization. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

AI

Hollywood Already Uses Generative AI (And Is Hiding It) (vulture.com) 61

Major Hollywood studios are extensively using AI tools while avoiding public disclosure, according to industry sources interviewed by New York Magazine. Nearly 100 AI studios now operate in Hollywood, and every major studio is reportedly experimenting with generative AI despite legal uncertainties surrounding copyrighted training data, the report said.

Lionsgate has partnered with AI company Runway to create a customized model trained on the studio's film archive, with executives planning to generate entire movie trailers from scripts before shooting begins. The collaboration allows the studio to potentially reduce production costs from $100 million to $50 million for certain projects.

Widespread usage of the new technology is often happening through unofficial channels. Workers are reporting pressure to use AI tools without formal studio approval, then "launder" the AI-generated content through human artists to obscure its origins.
