Electronic Frontier Foundation

EFF Is Leaving X (eff.org) 184

After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...]
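
That "less than 3%" figure is easy to sanity-check. A quick back-of-the-envelope sketch, using midpoints of the ranges quoted above (the per-month inputs are assumptions derived from the quote, not EFF's exact numbers):

```python
# Rough sanity check of EFF's "<3%" claim, using midpoints of the
# ranges quoted above; the exact inputs are assumptions.
posts_per_month_2018 = 7.5 * 30               # "five to ten times a day"
impressions_per_month_2018 = 75_000_000       # midpoint of 50-100 million
per_post_2018 = impressions_per_month_2018 / posts_per_month_2018  # ~333,000

per_post_last_year = 13_000_000 / 1_500       # ~8,700 impressions per post

ratio = per_post_last_year / per_post_2018
print(f"{ratio:.1%}")  # prints "2.6%"
```

Roughly 2.6% per post, consistent with the "less than 3%" claim.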

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

Social Networks

Bluesky CEO Jay Graber Is Stepping Down (wired.com) 48

Bluesky CEO Jay Graber is stepping down after overseeing the platform's growth from a Twitter research project into a 40-million-user alternative to X. "As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a statement.

She will be transitioning to a new Chief Innovation Officer role, and venture capitalist Toni Schneider will serve as interim CEO while the board searches for a permanent replacement. Wired reports: Graber joined Bluesky in 2019, when it was a research project within Twitter focused on developing a decentralized framework for the social web. She became the company's first chief executive officer in 2021, when it spun out into an independent entity. She oversaw the platform's remarkable rise and the growing pains it experienced as it transformed from a quirky Twitter offshoot to a full-fledged alternative to X. Schneider tells WIRED that he intends to help Bluesky "become not just the best open social app, but the foundation for a whole new generation of user-owned networks."

Schneider, who will continue working as a partner at the venture capital firm True Ventures while at Bluesky, was previously CEO of WordPress parent company Automattic from 2006 to 2014. He also served as its CEO again in 2024 while top executive Matt Mullenweg went on a sabbatical. During that time, Schneider met Graber and became an adviser to Bluesky's leadership. In a blog post announcing his new role, Schneider said he plans to emphasize scaling, describing his job as "to help set up Bluesky's next phase of growth."

This isn't the end for Graber and Bluesky. She will transition to become the company's chief innovation officer, a role focused on Bluesky's technology stack rather than its business operations. The position was created for her. Graber, who began her career as a software engineer, has always sounded the most enthusiastic when discussing Bluesky's technology rather than its revenue streams. Bluesky's board of directors will appoint the next permanent CEO. The members include Jabber founder Jeremie Miller, crypto-focused VC Kinjal Shah, TechDirt founder Mike Masnick, and Graber. (Twitter founder Jack Dorsey was originally part of the board but quit in 2024.) This means Graber will have input on her successor. The talent search is still in early stages.

AI

Anthropic CEO Dario Amodei Calls OpenAI's Messaging Around Military Deal 'Straight Up Lies' (arstechnica.com) 28

An anonymous reader quotes a report from TechCrunch: Anthropic co-founder and CEO Dario Amodei is not happy -- perhaps predictably so -- with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI's dealings with the Department of Defense as "safety theater." "The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote.

Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military's request for unrestricted access to the AI company's technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company's AI to enable domestic mass surveillance or autonomous weaponry. Instead, the DoD -- known under the Trump administration as the Department of War -- struck a deal with OpenAI. Altman stated that his company's new defense contract would include protections against the same red lines that Anthropic had asserted.

In a letter to staff, Amodei refers to OpenAI's messaging as "straight up lies," stating that Altman is falsely "presenting himself as a peacemaker and dealmaker." Amodei might not be speaking solely from a position of bitterness, here. Anthropic specifically took issue with the DoD's insistence on the company's AI being available for "any lawful use." [...] "I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!)," Amodei wrote to his staff. "It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees."

AI

HSBC To Investors: If India Couldn't Build an Enterprise Software Challenger, Neither Can AI (x.com) 54

India's IT services giants have spent decades deploying, customizing, and maintaining the world's largest enterprise software platforms, putting hundreds of thousands of engineers in daily contact with the business logic and proprietary architectures of vendors like SAP and Oracle. None of them have built a competing product that gained meaningful traction against the U.S. incumbents, HSBC said in a note to clients, using this history to argue AI-generated code faces the same structural barriers.

The bank's analysts contend that enterprise software competition turns on factors that have little to do with the ability to write code -- sales teams, cross-licensing agreements, patented IP, first-mover lock-in, brand awareness, and go-to-market infrastructure. If a massive, low-cost, domain-expert workforce couldn't crack the market over several decades, HSBC argues, the idea that AI-generated code will do so is, in the words of Nvidia's Jensen Huang, whom the report approvingly cites, "illogical."
Social Networks

Elon Musk: X's New Algorithm Will Be Made Open Source in Seven Days (msn.com) 90

"We will make the new ð algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed."

Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date.
Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users.

Some X users had previously complained that they were seeing fewer posts from the people they follow on the platform. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot...

In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.

Social Networks

AI-Powered Social Media App Hopes To Build More Purposeful Lives (msn.com) 32

A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist.

"When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot."

But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.

The column includes this caveat. "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..."

"Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
AI

Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'? (daringfireball.net) 23

Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro."

Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.)

But others were convinced that the weird image was AI-generated.

Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.)
Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...)

Slashdot contacted artist Keith Thomson to try to ascertain what happened...
Social Networks

Operation Bluebird Wants To Relaunch 'Twitter' For a New Social Network (theverge.com) 83

A startup called Operation Bluebird is petitioning the US Patent and Trademark Office to strip X Corp of the "Twitter" and "tweet" trademarks, hoping to relaunch a new Twitter with the old brand, bird logo, and "town square" vibe. "The TWITTER and TWEET brands have been eradicated from X Corp.'s products, services, and marketing, effectively abandoning the storied brand, with no intention to resume use of the mark," the petition states. "The TWITTER bird was grounded." Ars Technica reports: If successful, two leaders of the group tell Ars, Operation Bluebird would launch a social network under the name Twitter.new, possibly as early as late next year. (Twitter.new has created a working prototype and is already inviting users to reserve handles.)

Michael Peroff, an Illinois attorney and founder of Operation Bluebird, said that in the intervening years, more Twitter-like social media networks have sprung up or gained traction -- like Threads, Mastodon, and Bluesky. But none have the scale or brand recognition that Twitter did prior to Musk's takeover. "There certainly are alternatives," Peroff said. "I don't know that any of them at this point in time are at the scale that would make a difference in the national conversation, whereas a new Twitter really could."

Similarly, Peroff's business partner, Stephen Coates, an attorney who formerly served as Twitter's general counsel, said that Operation Bluebird aims to recreate some of the magic that Twitter once had. "I remember some time ago, I've had celebrities react to my content on Twitter during the Super Bowl or events," he told Ars. "And we want that experience to come back, that whole town square, where we are all meshed in there."
"Mere 'token use' won't be enough to reserve the mark," said Mark Lemley, a Stanford Law professor and expert in trademark law. "Or [X] could defend if it can show that it plans to go back to using Twitter. Consumers obviously still know the brand name. It seems weird to think someone else could grab the name when consumers still associate it with the ex-social media site of that name. But that's what the law says."
Ruby

Is Ruby Still a 'Serious' Programming Language? (wired.com) 80

Wired published an article by California-based writer/programmer Sheon Han arguing that Ruby "is not a serious programming language."

Han believes that the world of programming has "moved on," and "everything Ruby does, another language now does better, leaving it without a distinct niche. Ruby is easy on the eyes. Its syntax is simple, free of semicolons or brackets. More so even than Python — a language known for its readability — Ruby reads almost like plain English... Ruby, you might've guessed, is dynamically typed. Python and JavaScript are too, but over the years, those communities have developed sophisticated tools to make them behave more responsibly. None of Ruby's current solutions are on par with those. It's far too conducive to what programmers call 'footguns,' features that make it all too easy to shoot yourself in the foot."

Critically, Ruby's performance profile consistently ranks near the bottom (read: slowest) among major languages. You may remember Twitter's infamous "fail whale," the error screen with a whale lifted by birds that appeared whenever the service went down. You could say that Ruby was largely to blame. Twitter's collapse during the 2010 World Cup served as a wake-up call, and the company resolved to migrate its backend to Scala, a more robust language.

The move paid off: By the 2014 World Cup, Twitter handled a record 32 million tweets during the final match without an outage. Its new Scala-based backend could process requests up to 100 times faster than the Ruby one. In the 2010s, a wave of companies replaced much of their Ruby infrastructure, and when legacy Ruby code remained, new services were written in higher-performance languages.

You may wonder why people are still using Ruby in 2025. It survives because of its parasitic relationship with Ruby on Rails, the web framework that enabled Ruby's widespread adoption and continues to anchor its relevance.... Rails was the framework of choice for a new generation of startups. The main code bases of Airbnb, GitHub, Twitter, Shopify, and Stripe were built on it.

He points out that on Stack Overflow's annual developer survey, Ruby has slipped from a top-10 technology in 2013 to #18 this year — "behind even Assembly" — calling Ruby "a kind of professional comfort object, sustained by the inertia of legacy code bases and the loyalty of those who first imprinted upon it." But the article drew some criticism on X.com. ("You should do your next piece about how Vim isn't a serious editor and continue building your career around nerd sniping developers.")

Other reactions...
  • "Maybe WIRED is just not a serious medium..."
  • "FWIW — Ruby powered Shopify through another Black Friday / Cyber Monday — breaking last year's record."
  • "Maybe you should have taken a look at TypeScript..."

Wired's subheading argues that Ruby "survives on affection, not utility. Let's move on." Are they right? Share your own thoughts and experiences in the comments.

AI

Epic's Sweeney Says Platforms Should Stop Tagging Games Made With AI (gamesindustry.biz) 69

The CEO of Epic Games, Tim Sweeney, has argued that platforms like Steam should not label games that are made using AI. From a report: Responding to a post on Twitter from a user who suggested that storefronts drop this tag, the industry exec said that it "makes no sense" to flag such content. Sweeney added that soon AI will be a part of the way all games are made. "The AI tag is relevant to art exhibits for authorship disclosure, and to digital content licensing marketplaces where buyers need to understand the rights situation," Sweeney said. "It makes no sense for game stores, where AI will be involved in nearly all future production."
Social Networks

New Research Finds America's Top Social Media Sites: YouTube (84%), Facebook (71%), Instagram (50%) (pewresearch.org) 84

Pew Research surveyed 5,022 Americans this year (between February 5 and June 18), asking them "do you ever use" YouTube, Facebook, and nine of the other top social media platforms. The results?
YouTube 84%
Facebook 71%
Instagram 50%
TikTok 37%
WhatsApp 32%
Reddit 26%
Snapchat 25%
X.com (formerly Twitter) 21%
Threads 8%
Bluesky 4%
Truth Social 3%

An announcement from Pew Research adds some trends and demographics: The Center has long tracked use of many of these platforms. Over the past few years, four of them have grown in overall use among U.S. adults — TikTok, Instagram, WhatsApp and Reddit. 37% of U.S. adults report using TikTok, which is slightly up from last year and up from 21% in 2021. Half of U.S. adults now report using Instagram, which is on par with last year but up from 40% in 2021. About a third say they use WhatsApp, up from 23% in 2021. And 26% today report using Reddit, compared with 18% four years ago.

While YouTube and Facebook continue to sit at the top, the shares of Americans who report using them have remained relatively stable in recent years... YouTube and Facebook are the only sites asked about that a majority in all age groups use, though for YouTube, the youngest adults are still the most likely to do so. This differs from Facebook, where 30- to 49-year-olds most commonly say they use it (80%).

Other interesting statistics:
  • "More than half of women report using Instagram (55%), compared with under half of men (44%). Alternatively, men are more likely to report using platforms such as X and Reddit."
  • "Democrats and Democratic-leaning independents are more likely to report using WhatsApp, Reddit, TikTok, Bluesky and Threads."

Social Networks

Jack Dorsey Funds diVine, a Vine Reboot That Includes Vine's Video Archive (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: As generative AI content starts to fill our social apps, a project to bring back Vine's six-second looping videos is launching with Twitter co-founder Jack Dorsey's backing. On Thursday, a new app called diVine will give access to more than 100,000 archived Vine videos, restored from an older backup that was created before Vine's shutdown. The app won't just exist as a walk down memory lane; it will also allow users to create profiles and upload their own new Vine videos. However, unlike on traditional social media, where AI content is often haphazardly labeled, diVine will flag suspected generative AI content and prevent it from being posted. According to TechCrunch, a volunteer preservation group called the Archive Team saved Vine's content when it shut down in 2016. The only problem was that everything was stored in massive 40-50 GB binary blob files that were basically unusable for casual viewing.

Evan Henshaw-Plath (who goes by the name Rabble), an early Twitter employee and member of Jack Dorsey's nonprofit "and Other Stuff," dug into those backup files to try and salvage as much as he could. He spent months writing big-data extraction scripts, reverse-engineering how the archived binaries were structured, and reconstructing the original video files, old user info, view counts, and more. "I wasn't able to get all of them out, but I was able to get a lot out and basically reconstruct these Vines and these Vine users, and give each person a new user [profile] on this open network," he said.

Rabble estimates that through this process he was able to successfully recover 150,000-200,000 Vine videos from around 60,000 creators. diVine then rebuilt user profiles on top of the decentralized Nostr protocol so creators can reclaim their accounts, request takedowns, or upload missing videos.
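
The extraction scripts themselves aren't public, but the core technique described is classic file carving: scan a raw blob for known container signatures and record where each embedded file begins. A minimal sketch of that idea (the blob below is synthetic; real Vine archives would need format-specific handling):

```python
# Minimal file-carving sketch (not Rabble's actual scripts): locate
# candidate MP4 files inside an opaque binary blob by searching for the
# "ftyp" marker, which sits at byte offset 4 of an MP4 container.
MP4_SIG = b"ftyp"

def find_mp4_offsets(blob: bytes) -> list:
    """Return starting offsets of candidate MP4 files in the blob."""
    offsets, pos = [], blob.find(MP4_SIG)
    while pos != -1:
        if pos >= 4:
            offsets.append(pos - 4)  # a 4-byte box size precedes "ftyp"
        pos = blob.find(MP4_SIG, pos + 1)
    return offsets

# Synthetic blob: junk bytes with two fake MP4 headers embedded.
blob = (b"\x00" * 100 + b"\x00\x00\x00\x18ftypmp42" + b"\x01" * 500
        + b"\x00\x00\x00\x18ftypisom" + b"\x02" * 300)
print(find_mp4_offsets(blob))  # prints [100, 612]
```

A real carve would also parse the MP4 box sizes to find where each file ends; this sketch only finds the starts.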

You can check out the app for yourself at diVine.video. It's available in beta form on both iOS and Android.
AI

Researchers Surprised That With AI, Toxicity is Harder To Fake Than Intelligence (arstechnica.com) 42

Researchers from four universities have released a study revealing that AI models remain easily detectable in social media conversations despite optimization attempts. The team tested nine language models across Twitter/X, Bluesky and Reddit, developing classifiers that identified AI-generated replies at 70 to 80% accuracy rates. Overly polite emotional tone served as the most persistent indicator. The models consistently produced lower toxicity scores than authentic human posts across all three platforms.

Instruction-tuned models performed worse than their base counterparts at mimicking humans, and the 70-billion-parameter Llama 3.1 showed no advantage over smaller 8-billion-parameter versions. The researchers found a fundamental tension: models optimized to avoid detection strayed further from actual human responses semantically.
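
The study's classifiers aren't detailed here, but the headline signal (AI replies scoring too low on toxicity) can be illustrated with a deliberately crude single-feature threshold detector. Everything below, including the word list, threshold, and sample replies, is invented for illustration and is not from the paper:

```python
# Toy single-feature detector (not the researchers' classifier): flag a
# reply as likely AI when its crude "abrasiveness" score is suspiciously
# low. Word list, threshold, and examples are invented for illustration.
ABRASIVE = {"idiot", "trash", "dumb", "wrong", "lol", "garbage"}

def abrasiveness(text: str) -> float:
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    return sum(w in ABRASIVE for w in words) / len(words)

def looks_ai(text: str, threshold: float = 0.05) -> bool:
    # Below-threshold abrasiveness reads as "too polite to be typical".
    return abrasiveness(text) < threshold

human = "lol this take is garbage, you are just wrong"
bot = "That is a thoughtful perspective, and I appreciate you sharing it."
print(looks_ai(human), looks_ai(bot))  # prints False True
```

The real classifiers were learned from data; the sketch only shows how a single stylistic feature of the kind the study highlights can separate the two kinds of replies.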
Music

Pitchfork Is Beta Testing User Reviews and Comments As It Approaches 30 (theverge.com) 8

As it nears its 30th anniversary, Pitchfork is testing user reviews and comments in a major shift from its long-standing critic-only model. The site will now let readers rate albums and leave comments, combining those into an aggregated "reader score" alongside the official Pitchfork score. The Verge reports: Pitchfork has historically been a one-sided affair. While it ran the occasional reader poll, there was no way for readers to directly voice their opinion on the site. If you thought that Jet's Shine On deserved better than a 0.0 (first off, you're wrong), there was no way to let the author know other than shouting into the void of this new thing at the time called Twitter. Now the site is considering letting users comment directly on reviews and give albums scores of their own. And then those scores will be averaged up into a single reader score for each album.
Books

Was the Web More Creative and Human 20 Years Ago? (bookforum.com) 77

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots."

They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger: I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects...

[U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment. Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach...

The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

The Internet

Reddit Cofounder Says 'Much of the Internet is Now Dead' (businessinsider.com) 93

Alexis Ohanian, who helped build Reddit, says much of the internet has become dominated by bots and AI. Speaking on the podcast TBPN, he described the internet as increasingly "quasi-AI" and filled with what he called "LinkedIn slop." Ohanian referenced dead internet theory, the assertion that bot activity exceeds human activity on the web. In September, Sam Altman, OpenAI's CEO, posted that while he had not taken the theory seriously, he now sees "a lot of LLM-run twitter accounts."
AI

Hollywood Demands Copyright Guardrails from Sora 2 - While Users Complain That's Less Fun (yahoo.com) 56

Enthusiasm for Sora 2 "wasn't shared in Hollywood," reports the Los Angeles Times, "where the new AI tools have created a swift backlash" that "appears to be only just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business." [OpenAI] executives went on a charm offensive last year. They reached out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology. This year, the San Francisco-based AI startup took a more assertive approach. Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they need to explicitly declare which pieces of intellectual property — including licensed characters — were being opted-out of having their likeness depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said. OpenAI disputes the claim and says that it was always the company's intent to give actors and other public figures control over how their likeness is used.

The response was immediate.... [Big talent agencies objected, along with performers' unions and major studios.] "Decades of enforceable copyright law establishes that content owners do not need to 'opt out' to prevent infringing uses of their protected IP," Warner Bros. Discovery said in a statement... The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content they need, legal experts said... One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don't believe a flat fee works.

Meanwhile, "the complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week," writes Gizmodo. But that means the service has "now pissed off its users." As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can't make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was "the only reason this app was so fun."
Futurism published more reactions, including "It's official, Sora 2 is completely boring and useless with these copyright restrictions." Others accused OpenAI of abusing copyright to hype up its new app. "This is just classic OpenAI at this point," another user wrote. "They do this s*** all the time. Let people have fun for a day or two and then just start censoring like crazy." The app now has a measly 2.9-star rating on the App Store, indicative of growing disillusionment and frustration with censorship... [It's now dropped to 2.8.]

In an apparent effort to save face, Altman claimed this week that many copyright holders are actually begging to have their characters appear on Sora, instead of complaining about the trend. "In the case of Sora, we've heard from a lot of concerned rightsholders and also a lot of rightsholders who are like 'My concern is you won't put my character in enough,'" he told the a16z podcast earlier this week. "So I can completely see a world where subject to the decisions that a rightsholder has, they get more upset with us for not generating their character often enough than too much," he added. Whether most rightsholders would agree with that sentiment remains to be seen.

Business Insider offers another reaction. After watching Sora 2's main public feed, they write that Sora 2 "seems to be overrun with teenage boys."
Books

Can Cory Doctorow's 'Enshittification' Transform the Tech Industry Debate? (nytimes.com) 76

An anonymous reader quotes a report from the New York Times: Over the course of a nearly four-decade career, Cory Doctorow has written 15 novels, four graphic novels, dozens of short stories, six nonfiction books, approximately 60,000 blog posts and thousands of essays. And yet for all the millions of words he's published, these days the award-winning science fiction author and veteran internet activist is best known for just a single one: Enshittification. The term, which Doctorow, 54, popularized in essays in 2022 and 2023, refers to the way that online platforms become worse to use over time, as the corporations that own them try to make more money. Though the coinage is cheeky, in Doctorow's telling the phenomenon it describes is a specific, nearly scientific process that progresses according to discrete stages, like a disease.

Since then, the meaning has expanded to encompass a general vibe -- a feeling far greater than frustration at Facebook, which long ago ceased being a good way to connect with friends, or Google, whose search is now baggy with SEO spam. Of late, the idea has been employed to describe everything from video games to television to American democracy itself. "It's frustrating. It's demoralizing. It's even terrifying," Doctorow said in a 2024 speech. On Tuesday, Farrar, Straus & Giroux will release "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Doctorow's book-length elaboration on his essays, complete with case studies (Uber, Twitter, Photoshop) and his prescriptions for change, which revolve around breaking up big tech companies and regulating them more robustly.
Further reading: The Enshittification Hall of Shame
Social Networks

Sam Altman Says Bots Are Making Social Media Feel 'Fake' (techcrunch.com) 83

An anonymous reader quotes a report from TechCrunch: X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched the software programming service that takes on Anthropic's Claude Code in May. Lately, that subreddit has been so filled with posts from self-proclaimed Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

[...] Altman also throws a dig at the incentives when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is because OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by the competitor, or paid by some third-degree contractor, giving the competitor plausible deniability. [...] Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has led models to become so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.

United States

FTC Warns Tech Giants Not To Bow To Foreign Pressure on Encryption (bleepingcomputer.com) 56

The Federal Trade Commission is warning major U.S. tech companies against yielding to foreign government demands that weaken data security, compromise encryption, or impose censorship on their platforms. From a report: FTC Chairman Andrew N. Ferguson signed the letter sent to large American companies like Akamai, Alphabet (Google), Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X (Twitter). Ferguson stresses that weakening data security at the request of foreign governments, especially if they don't alert users about it, would constitute a violation of the FTC Act and expose companies to legal consequences.

Ferguson's letter specifically cites foreign laws such as the EU's Digital Services Act and the UK's Online Safety and Investigatory Powers Acts. Earlier this year, Apple was forced to remove support for iCloud end-to-end encryption in the United Kingdom rather than give in to demands to add a backdoor for the government to access encrypted accounts. The UK's demand would have weakened Apple's encryption globally, but it was retracted last week following U.S. diplomatic pressure.
