AI

OpenAI CEO Says Meta Tried Poaching ChatGPT Engineers With $100M Bonuses (the-independent.com)

The Independent notes a remarkable-if-true figure that's being bandied around this week.

Meta "started making these, like, giant offers to a lot of people on our team," OpenAI CEO Sam Altman told his brother Jack on his podcast. "You know, like, $100 million signing bonuses, more than that [in] compensation per year... I'm really happy that, at least so far, none of our best people have decided to take him up on that."

Previous reports have also suggested that Meta is targeting employees at Google DeepMind, offering similar levels of compensation. Some of these efforts appear to have been successful, with DeepMind researcher Jack Rae joining Meta's 'Superintelligence' team earlier this month...

During the podcast, which was published on Tuesday, Mr Altman also gave details about future AI products that OpenAI is hoping to build, claiming that they will enable "crazy new social experiences" and "virtual employees". The most important breakthrough over the next decade, he said, would involve radical new discoveries powered by AI. "The thing that I think will be the most impactful in that five-to-10 year timeframe is AI will actually discover new science," he said.

The Washington Post notes that Zuckerberg "responded to recent reports of his compensation offers in an interview posted by the Information on YouTube on Tuesday, saying that 'a lot of the numbers specifically have been inaccurate'" but acknowledging there is "an absolute premium for the best and most talented people." Zuckerberg's recent hires and other comments this week suggest he's not taking any chances of being left behind. He announced plans for a giant data center campus, large enough to cover much of Manhattan, to power future AI projects by his superintelligence team.
The Courts

Meta Investors, Mark Zuckerberg Reach Settlement To End $8 Billion Trial Over Facebook Privacy Litigation (nbcnews.com)

An anonymous reader quotes a report from NBC News: Mark Zuckerberg and current and former directors and officers of Meta Platforms agreed on Thursday to settle claims seeking $8 billion for the damage they allegedly caused the company by allowing repeated violations of Facebook users' privacy, a lawyer for the shareholders told a Delaware judge. The parties did not disclose details of the settlement, and defense lawyers did not address the judge, Kathaleen McCormick of the Delaware Court of Chancery. McCormick adjourned the trial just as it was to enter its second day and congratulated the parties. The plaintiffs' lawyer, Sam Closic, said the agreement came together quickly.

Billionaire venture capitalist Marc Andreessen, who is a defendant in the trial and a Meta director, was scheduled to testify on Thursday. Shareholders of Meta sued Zuckerberg, Andreessen and other former company officials including former Chief Operating Officer Sheryl Sandberg in hopes of holding them liable for billions of dollars in fines and legal costs the company paid in recent years. The Federal Trade Commission fined Facebook $5 billion in 2019 after finding that it failed to comply with a 2012 agreement with the regulator to protect users' data. The shareholders wanted the 11 defendants to use their personal wealth to reimburse the company. The defendants denied the allegations, which they called "extreme claims."
"This settlement may bring relief to the parties involved, but it's a missed opportunity for public accountability," said Jason Kint, the head of Digital Content Next, a trade group for content providers.

"Facebook has successfully remade the 'Cambridge Analytica' scandal about a few bad actors rather than an unraveling of its entire business model of surveillance capitalism and the reciprocal, unbridled sharing of personal data. That reckoning is now left unresolved."
United Kingdom

Thousands of Afghans Secretly Moved To Britain After Data Leak (reuters.com)

Britain secretly relocated thousands of Afghans to the UK after their personal details were disclosed in one of the country's worst ever data breaches, putting them at risk of Taliban retaliation. The operation cost around $2.7 billion and remained under a court-imposed superinjunction until it was recently lifted. Reuters reports: The leak by the Ministry of Defence in early 2022, which led to data being published on Facebook the following year, and the secret relocation program were subject to a so-called superinjunction preventing the media from reporting what happened, which was lifted on Tuesday by a court. British defence minister John Healey apologised for the leak, which included details about members of parliament and senior military officers who supported applications to help Afghan soldiers who worked with the British military, and their families, relocate to the UK. "This serious data incident should never have happened," Healey told lawmakers in the House of Commons. "It may have occurred three years ago under the previous government, but to all whose data was compromised I offer a sincere apology."

The incident ranks among the worst security breaches in modern British history because of the cost and the risk posed to the lives of thousands of Afghans, some of whom fought alongside British forces until their chaotic withdrawal in 2021. Healey said about 4,500 Afghans and their family members have been relocated or were on their way to Britain under the previously secret scheme. But he added that no one else from Afghanistan would be offered asylum because of the data leak, citing a government review which found little evidence of intent from the Taliban to seek retribution against former officials.

AI

Meta's Superintelligence Lab Considers Shift To Closed AI Model (yahoo.com)

An anonymous reader quotes a report from Investing.com: Meta's newly formed superintelligence lab is discussing potential changes to the company's artificial intelligence strategy that could represent a major shift for the social media giant. A small group of top members of the lab, including 28-year-old Alexandr Wang, Meta's new chief AI officer, talked last week about abandoning the company's most powerful open source AI model, called Behemoth, in favor of developing a closed model, according to a report in the New York Times, citing people familiar with the matter.

Meta has traditionally open sourced its AI models, making the computer code public for other developers to build upon, and any shift toward a closed AI model would mark a significant philosophical change for Meta. Meta had completed training its Behemoth model by feeding in data to improve it, but delayed its release due to poor internal performance. After the company announced the formation of the superintelligence lab last month, teams working on the Behemoth model, which is considered a "frontier" model, stopped conducting new tests on it. The discussions within the superintelligence lab remain preliminary, and no decisions have been finalized. Any potential changes would require approval from Meta CEO Mark Zuckerberg.

Social Networks

Are a Few People Ruining the Internet For the Rest of Us?

A small fraction of hyperactive social media users generates the vast majority of toxic online content, according to research by New York University psychology professor Jay Van Bavel and colleagues Claire Robertson and Kareena del Rosario. The study found that 10% of users produce roughly 97% of political tweets, while just 0.1% of users share 80% of fake news.

Twelve accounts known as the "disinformation dozen" created most vaccine misinformation on Facebook during the pandemic, the research found. In experiments, researchers paid participants to unfollow divisive political accounts on X. After one month, participants reported 23% less animosity toward other political groups. Nearly half declined to refollow hostile accounts after the study ended, and those maintaining healthier newsfeeds reported reduced animosity 11 months later. The research describes social media as a "funhouse mirror" that amplifies extreme voices while muting moderate perspectives.
Facebook

Zuckerberg Pledges Hundreds of Billions For AI Data Centers in Superintelligence Push (reuters.com)

Mark Zuckerberg said on Monday that Meta would spend hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying a pursuit that has already included a talent war for top AI engineers. From a report: The social media giant is among the large technology companies that have chased high-profile deals and doled out multi-million-dollar pay packages in recent months to fast-track work on machines that can outthink humans on most tasks.

Unveiling the spending commitment in a Threads post on Monday, Zuckerberg touted the strength of the company's core advertising business to support the massive spending, which has raised concerns among tech investors about potential payoffs. "We have the capital from our business to do this," Zuckerberg said. He also cited a report from the chip industry publication SemiAnalysis that said Meta is on track to be the first lab to bring online a 1-gigawatt-plus supercluster, a massive data center built to train advanced AI models.

Nintendo

Nintendo Banned Switch 2 Owner For Playing a Used Switch 1 Game They Bought Online (tomshardware.com)

"A Nintendo Switch 2 user reportedly got his brand-new console banned by Nintendo after buying used Switch 1 games and patching them on his console," reports Tom's Hardware: According to Reddit user dmanthey, they purchased four used titles off Facebook Marketplace, inserted them into the Switch 2, and updated them all. When they turned on the handheld the following day, they received a message saying that they were restricted from Nintendo's online services and that they couldn't even download the games they had already bought...

[T]hey were able to prove their innocence by pulling up the Facebook Marketplace listing for their games and sending the photos of their purchased cartridges. According to the Redditor, the process was painless and fast, and it was "so much easier than getting support from Microsoft or Sony...." Other users warned, though, that this isn't always a guaranteed resolution.

Nintendo is known for being protective of its intellectual property and delivers harsh penalties to anyone caught violating it. We've already had several reports of users getting banned for using Mig Flash, even on their own ROMs. And while it's not true that getting banned turns your Switch 2 into a brick, it will still prevent you from accessing the company's online services, which severely restricts its features and usability.

"Nintendo attaches unique codes to its Switch game cartridges to prevent piracy," notes Engadget. "However, bad actors can copy games onto a third-party device, like the MIG Flash, and then resell the physical game card. Once Nintendo detects two instances of its unique code being online at the same time, it will ban any accounts using it..." This anti-piracy policy isn't new — Nintendo has long had a reputation for fiercely combating any type of piracy — but it has become relevant again thanks to the recently released Switch 2, which offers backwards compatibility with original Switch titles. The company even recently amended its user agreement to allow itself the power to brick a Nintendo Switch that's caught running pirated games or mods.
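The detection scheme Engadget describes can be sketched roughly: the service tracks which console is currently online with each cartridge's unique code, and flags the code when two different consoles report it at the same time. A minimal illustration of that idea in Python (all names and structure are hypothetical; Nintendo's actual system is not public):

```python
# Hypothetical sketch of duplicate-cartridge detection, based only on
# Engadget's description. Nintendo's real implementation is not public.

class CartridgeMonitor:
    def __init__(self):
        self.online = {}    # cartridge_id -> console_id currently online with it
        self.banned = set() # cartridge_ids flagged as duplicated

    def check_in(self, cartridge_id: str, console_id: str) -> bool:
        """A console reports a cartridge going online.
        Returns False if the cartridge is (or becomes) flagged."""
        if cartridge_id in self.banned:
            return False
        holder = self.online.get(cartridge_id)
        if holder is not None and holder != console_id:
            # Two consoles online with the same unique code at once:
            # the cartridge was likely dumped and resold, so flag it.
            self.banned.add(cartridge_id)
            return False
        self.online[cartridge_id] = console_id
        return True

    def check_out(self, cartridge_id: str, console_id: str) -> None:
        """A console reports the cartridge going offline."""
        if self.online.get(cartridge_id) == console_id:
            del self.online[cartridge_id]
```

Note how this logic explains the story above: a legitimate resale (the cartridge goes offline on one console before appearing on another) passes cleanly, but if the seller dumped the game to a flash cart first, both copies can end up online simultaneously, and the innocent buyer trips the flag.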
Youtube

YouTube Can't Put Pandora's AI Slop Back in the Box (gizmodo.com)

Longtime Slashdot reader SonicSpike shares a report from Gizmodo: YouTube is inundated with AI-generated slop, and that's not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off "spam." At the same time, it's still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create "original" and "authentic" content, but now it will "better identify mass-produced and repetitious content." The changes will take effect on July 15. The company didn't say whether this change is related to AI, but the timing can't be overlooked considering how many more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI "revolution" has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users' YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on "Last Week Tonight" specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube's Partner Program.

The Courts

German Court Rules Meta Tracking Tech Violates EU Privacy Laws (therecord.media)

An anonymous reader quotes a report from The Record: A German court has ruled that Meta must pay $5,900 to a German Facebook user who sued the platform for embedding tracking technology in third-party websites -- a ruling that could open the door to large fines down the road over data privacy violations relating to pixels and similar tools. The Regional Court of Leipzig in Germany ruled Friday that Meta tracking pixels and software development kits embedded in countless websites and apps collect users' data without their consent and violate the continent's General Data Protection Regulation (GDPR).

The ruling in favor of the plaintiff sets a precedent which the court acknowledged will allow countless other users to sue without "explicitly demonstrating individual damages," according to a Leipzig Regional Court press release. "Every user is individually identifiable to Meta at all times as soon as they visit the third-party websites or use an app, even if they have not logged in via the Instagram and Facebook account," the press release said.
"This may very well be one of the most substantial rulings coming out of Europe this year," said Ronni K. Gothard Christiansen, the CEO of AesirX, a consultancy which helps businesses comply with data privacy laws. "$5,900 in damages for one visitor adds up quickly if you have tens of thousands of visitors, or even millions."
The Internet

Browser Extensions Turn Nearly 1 Million Browsers Into Website-Scraping Bots (arstechnica.com)

Over 240 browser extensions with nearly a million total installs have been covertly turning users' browsers into web-scraping bots. "The extensions serve a wide range of purposes, including managing bookmarks and clipboards, boosting speaker volumes, and generating random numbers," reports Ars Technica. "The common thread among all of them: They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions." Ars Technica reports: Some of the data swept up in the collection free-for-all included surveillance videos hosted on Nest, tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive and Intuit.com, vehicle identification numbers of recently bought automobiles along with the names and addresses of the buyers, patient names and the doctors they saw, travel itineraries hosted on Priceline, Booking.com, and airline websites, Facebook Messenger attachments and Facebook photos, even when the photos were set to be private. The dragnet also collected proprietary information belonging to Tesla, Blue Origin, Amgen, Merck, Pfizer, Roche, and dozens of other companies.

John Tuckner, the researcher who identified the extensions, said in an email Wednesday that their most recent status is:

- Of 45 known Chrome extensions, 12 are now inactive.
- Of 129 Edge extensions incorporating the library, eight are now inactive.
- Of 71 affected Firefox extensions, two are now inactive.

Some of the inactive extensions were removed explicitly for malware; others have removed the library in more recent updates. A complete list of the extensions found by Tuckner is available.

Businesses

Meta Invests $3.5 Billion in World's Largest Eyewear Maker in AI Glasses Push

Meta has acquired a $3.5 billion stake in Ray-Ban maker EssilorLuxottica, "a deal that increases the U.S. tech giant's financial commitment to the fast-growing smart glasses industry," reports Bloomberg. From the report: Meta's investment in the eyewear giant deepens the relationship between the two companies, which have partnered over the past several years to develop AI-powered smart glasses. Meta currently sells a pair of Ray-Ban glasses, first debuted in 2021, with built-in cameras and an AI assistant. Last month, it launched separate Oakley-branded glasses with EssilorLuxottica. EssilorLuxottica Chief Executive Officer Francesco Milleri said last year that Meta was interested in taking a stake in the company, but that plan hadn't materialized until now.

The deal aligns with Meta CEO Mark Zuckerberg's commitment to AI, which has become a top priority and major expense for the company. Smart glasses are a key part of that plan. While Meta has historically had to deliver its apps and services via smartphones created by competitors, glasses offer Meta a chance to build its own hardware and control its own distribution, Zuckerberg has said. The arrangement gives Meta the advantage of having more detailed manufacturing knowledge and global distribution networks, fundamental to turning its smart glasses into mass-market products. For EssilorLuxottica, the deal provides a deeper presence in the tech world, which would be helpful if Meta's futuristic bets pay off. Meta is also betting on the idea that people will one day work and play while wearing headsets or glasses.
China

The Startup-Filled Coder 'Village' at the Heart of China's AI Frenzy (msn.com)

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," the Wall Street Journal noted this weekend.

But what does that look like? The New York Times visits Liangzhu, the coder "village" at the heart of China's AI frenzy, "a quiet suburb of the eastern Chinese city of Hangzhou... As China faces off with the United States over tech primacy, Hangzhou has become the centre of China's AI frenzy," with its proximity to tech companies like Alibaba and DeepSeek. In Liangzhu, many engineers said they were killing time until they could create their own startups, waiting out noncompete agreements they had signed at bigger companies like ByteDance... But some said the government support for Hangzhou's tech scene had scared off some investors. Several company founders, who asked not to be named so they could discuss sensitive topics, said it was difficult for them to attract funds from foreign venture capital firms, frustrating their ambitions to grow outside China. The nightmare situation, they said, would be to end up like ByteDance, the Chinese parent of TikTok, whose executives have been questioned before Congress about the company's ties to the Chinese government. Founders described choosing between two paths for their companies' growth: take government funding and tailor their product to the Chinese market, or raise enough money on their own to set up offices in a country like Singapore to pitch foreign investors. For most, the first was the only feasible option.

Another uncertainty is access to the advanced computer chips that power artificial intelligence systems. Washington has spent years trying to prevent Chinese companies from buying these chips, and Chinese companies like Huawei and Semiconductor Manufacturing International Corp. are racing to produce their own. So far, the Chinese-made chips work well enough to help companies like ByteDance provide some of their AI services in China. Many Chinese companies have created stockpiles of Nvidia chips despite Washington's controls. But it is not clear how long that supply will last, or how quickly China's chipmakers can catch up to their American counterparts...

Liangzhu villagers have been hosting film nights. They had recently gathered to watch "The Matrix." Afterward, they decided the movie should be required viewing, Lin said. Its theme — people finding their way out of a vast system controlling society — provided spot-on inspiration. Aspiring founders in Liangzhu, even those who did not go to top universities, believe they could start the next world-changing tech company, said Felix Tao [a 36-year-old former Facebook and Alibaba employee]. "Many of them are super brave to make a choice to explore their own way, because in China that is not the common way to live your life."

AI

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media (boston.com)

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

"It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

EU

EU Sticks With Timeline For AI Rules (reuters.com)

Reuters: The European Union's landmark rules on AI will be rolled out according to the legal timeline in the legislation, the European Commission said on Friday, dismissing calls from some companies and countries for a pause.

Google owner Alphabet, Facebook owner Meta and other U.S. companies as well as European businesses such as Mistral and ASML have in recent days urged the Commission to delay the AI Act by years.
Financial Times adds: In an open letter, seen by the Financial Times, the heads of 44 major firms on the continent called on European Commission President Ursula von der Leyen to introduce a two-year pause, warning that unclear and overlapping regulations are threatening the bloc's competitiveness in the global AI race.

[...] The current debate surrounds the drafting of a "code of practice," which will provide guidance to AI companies on how to implement the act that applies to powerful AI models such as Google's Gemini, Meta's Llama and OpenAI's GPT-4. Brussels has already delayed publishing the code, which was due in May, and is now expected to water down the rules.

Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com)

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

Social Networks

Brazil Supreme Court Rules Digital Platforms Are Liable For Users' Posts (ft.com)

Brazil's supreme court has ruled that social media platforms can be held legally responsible for their users' posts. From a report: Companies such as Facebook, TikTok and X will have to act immediately to remove material such as hate speech, incitement to violence or "anti-democratic acts," even without a prior judicial takedown order, as a result of the decision in Latin America's largest nation late on Thursday.
Facebook

Meta Beats Copyright Suit From Authors Over AI Training on Books (bloomberglaw.com)

An anonymous reader shares a report: Meta escaped a first-of-its-kind copyright lawsuit from a group of authors who alleged the tech giant hoovered up millions of copyrighted books without permission to train its generative AI model called Llama.

San Francisco federal Judge Vince Chhabria ruled Wednesday that Meta's decision to use the books for training is protected under copyright law's fair use defense, but he cautioned that his opinion is more a reflection on the authors' failure to litigate the case effectively. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," Chhabria said.

Movies

Aaron Sorkin's The Social Network Sequel Officially in Development (theguardian.com)

Aaron Sorkin is officially working on a sequel to The Social Network. From a report: Last year, the Oscar-winning writer revealed he was working on a film that would revisit the subject of Facebook, and Deadline has now reported that The Social Network Part II is in development at Sony Pictures, though it isn't a "straight sequel."

The original film, which traced the early days of Facebook and its creator Mark Zuckerberg, was directed by David Fincher. Sorkin is rumoured to be directing the follow-up. "I blame Facebook for January 6," he said in 2024 on a special edition of The Town podcast, live from Washington DC. When asked to explain why, he responded: "You're gonna need to buy a movie ticket."

The Social Network was an adaptation of Ben Mezrich's book The Accidental Billionaires, and the sequel will be based on the Wall Street Journal series The Facebook Files. The 2021 investigation examined the damage caused by the social networking site and how internal findings had been buried. Subjects included the influence on the January 6 riot and the mental health of teenage users.

Australia

Australia Regulator and YouTube Spar Over Under-16s Social Media Ban

Australia's eSafety Commissioner has urged the government to deny YouTube an exemption from upcoming child safety regulations, citing research showing it exposes more children to harmful content than any other platform. YouTube pushed back, calling the commissioner's stance inconsistent with government data and parental feedback. "The quarrel adds an element of uncertainty to the December rollout of a law being watched by governments and tech leaders around the world as Australia seeks to become the first country to fine social media firms if they fail to block users aged under 16," reports Reuters. From the report: The centre-left Labor government of Anthony Albanese has previously said it would give YouTube a waiver, citing the platform's use for education and health. Other social media companies such as Meta's Facebook and Instagram, Snapchat, and TikTok have argued such an exemption would be unfair. eSafety Commissioner Julie Inman Grant said she wrote to the government last week to say there should be no exemptions when the law takes effect. She added that the regulator's research found 37% of children aged 10 to 15 reported seeing harmful content on YouTube -- the most of any social media site. [...]

YouTube, in a blog post, accused Inman Grant of giving inconsistent and contradictory advice that discounted the government's own research, which found 69% of parents considered the video platform suitable for people under 15. "The eSafety commissioner chose to ignore this data, the decision of the Australian Government and other clear evidence from teachers and parents that YouTube is suitable for younger users," wrote Rachel Lord, YouTube's public policy manager for Australia and New Zealand.

Inman Grant, asked about surveys supporting a YouTube exemption, said she was more concerned "about the safety of children and that's always going to surpass any concerns I have about politics or being liked or bringing the public onside". A spokesperson for Communications Minister Anika Wells said the minister was considering the online regulator's advice and her "top priority is making sure the draft rules fulfil the objective of the Act and protect children from the harms of social media."
