AI

Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook (404media.co) 22

Meta's AI-generated social media profiles, which sparked controversy this week following comments by executive Connor Hayes about plans to expand AI characters across Facebook and Instagram, have largely failed to gain user engagement since their 2023 launch, 404 Media reported Friday.

The profiles, introduced at Meta's Connect event in September 2023, stopped posting content in April 2024 after widespread user disinterest, with 15 of the original 28 accounts already deleted, Meta spokesperson Liz Sweeney told 404 Media. The AI characters, including personas like "Liv," a Black queer mother, and "Grandpa Brian," a retired businessman, generated minimal engagement and were criticized for posting stereotypical content.

Washington Post columnist Karen Attiah reported that one AI profile admitted its purpose was "data collection and ad targeting." Meta is now removing these accounts after identifying a bug preventing users from blocking them, Sweeney said, adding that Hayes' recent Financial Times interview discussed future AI character plans rather than announcing new features.
Facebook

Nick Clegg Is Leaving Meta After 7 Years Overseeing Its Policy Decisions (engadget.com) 8

Nick Clegg, former British Deputy Prime Minister and Meta's President of Global Affairs, is stepping down after seven years, with longtime policy executive Joel Kaplan set to replace him. Engadget reports: Clegg will be replaced by Joel Kaplan, a longtime policy executive and former White House aide to George W. Bush known for his deep ties to Republican circles in Washington. As Chief Global Affairs Officer, Kaplan -- as Semafor notes -- will be well-positioned to run interference for Meta as Donald Trump takes control of the White House. In a post on Threads, Clegg said that "this is the right time for me to move on from my role as President, Global Affairs at Meta."

"My time at the company coincided with a significant resetting of the relationship between 'big tech' and the societal pressures manifested in new laws, institutions and norms affecting the sector. I hope I have played some role in seeking to bridge the very different worlds of tech and politics -- worlds that will continue to interact in unpredictable ways across the globe."

He said that he will spend the next "few months" working with Kaplan and "representing the company at a number of international gatherings in Q1 of this year" before he formally steps away from the company.

Further reading: Meta Says It's Mistakenly Moderating Too Much
Businesses

Valve Makes More Money Per Employee Than Amazon, Microsoft, and Netflix Combined (techspot.com) 32

jjslash shares a report from TechSpot: A Valve employee recently provided PC Gamer with a rough calculation of the company's per-employee income, revealing that Valve generates more money per person than several of the world's largest companies. While the data is a few years old and doesn't account for some significant recent trends in the tech sector, Valve's ranking in this metric likely hasn't shifted much over that time. Exact figures for Valve's per-hour and per-employee net income remain redacted. However, a chart from 2018 confirms that Valve's per-employee income exceeded that of companies like Facebook, Apple, Netflix, Alphabet/Google, Microsoft, Intel, and Amazon. Facebook ranks second with a high revenue per employee of $780,400 annually, or $89 per hour, surpassing competitors like Apple and Microsoft due to its relatively smaller workforce of under 70,000. Amazon, by contrast, with over 1.5 million employees, earns significantly less per employee at $15,892 annually, or $1.81 per hour.

Further reading: Valve Runs Its Massive PC Gaming Ecosystem With Only About 350 Employees
Facebook

Meta Envisages Social Media Filled With AI-Generated Users (ft.com) 60

Meta is betting that characters generated by AI will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3 billion users. From a report: The Silicon Valley group is rolling out a range of AI products, including one that helps users create AI characters on Instagram and Facebook [non-paywalled source], as it battles with rival tech groups to attract and retain a younger audience.

"We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do," said Connor Hayes, vice-president of product for generative AI at Meta. "They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform ... that's where we see all of this going," he added. Hayes said a "priority" for Meta over the next two years was to make its apps "more entertaining and engaging," which included considering how to make the interaction with AI more social.

Facebook

More Than 140 Kenya Facebook Moderators Diagnosed With Severe PTSD (theguardian.com) 56

An anonymous reader quotes a report from The Guardian: More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism. The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were diagnosed with PTSD, generalized anxiety disorder (GAD) and major depressive disorder (MDD) by Dr Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Nairobi. The mass diagnoses have been made as part of a lawsuit being brought against Facebook's parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa.

The images and videos, including necrophilia, bestiality and self-harm, caused some moderators to faint, vomit, scream and run away from their desks, the filings allege. The case is shedding light on the human cost of the boom in social media use in recent years, which has required more and more moderation, often in some of the poorest parts of the world, to protect users from the worst material that some people post.
The lawsuit claims that at least 40 moderators experienced substance misuse, marital breakdowns, and disconnection from their families, while some feared being hunted by terrorist groups they monitored. Despite being paid eight times less than their U.S. counterparts, moderators worked under intense surveillance in harsh, warehouse-like conditions.
Censorship

Critics Decry Vietnam's 'Draconian' New Internet Law (theguardian.com) 22

Vietnam's Decree 147 mandates that social media users on platforms like Facebook and TikTok verify their identities, and requires tech companies to store user data and share it with authorities upon request, sparking concerns over increased censorship, self-censorship, and threats to free expression. Furthermore, the decree imposes restrictions on gaming time for minors and limits livestreaming to verified accounts. It takes effect on Christmas Day. The Guardian reports: Decree 147, as it is known, builds on a 2018 cybersecurity law that was sharply criticized by the US, EU and internet freedom advocates who said it mimics China's repressive internet censorship. [...] Critics say that decree 147 will also expose dissidents who post anonymously to the risk of arrest. "Many people work quietly but effectively in advancing the universal values of human rights," Ho Chi Minh City-based blogger and rights activist Nguyen Hoang Vi told AFP.

She warned that the new decree "may encourage self-censorship, where people avoid expressing dissenting views to protect their safety -- ultimately harming the overall development of democratic values" in the country. Le Quang Tu Do, of the ministry of information and communications (MIC), told state media that decree 147 would "regulate behavior in order to maintain social order, national security, and national sovereignty in cyberspace." [...]

Human Rights Watch is calling on the government to repeal the "draconian" new decree. "Vietnam's new Decree 147 and its other cybersecurity laws neither protect the public from any genuine security concerns nor respect fundamental human rights," said Patricia Gossman, HRW's associate Asia director. "Because the Vietnamese police treat any criticism of the Communist party of Vietnam as a national security matter, this decree will provide them with yet another tool to suppress dissent."

Facebook

WhatsApp Scores Historic Victory Against NSO Group in Long-Running Spyware Hacking Case (techcrunch.com) 9

A U.S. judge has ruled that Israeli spyware maker NSO Group breached hacking laws by using WhatsApp to infect devices with its Pegasus spyware. From a report: In a historic ruling on Friday, a Northern California federal judge held NSO Group liable for targeting the devices of 1,400 WhatsApp users, violating state and federal hacking laws as well as WhatsApp's terms of service, which prohibit the use of the messaging platform for malicious purposes.

The ruling comes five years after Meta-owned WhatsApp sued NSO Group, alleging the spyware outfit had exploited an audio-calling vulnerability in the messaging platform to install its Pegasus spyware on unsuspecting users' devices. WhatsApp said that more than 100 human rights defenders, journalists and "other members of civil society" were targeted by the malware, along with government officials and diplomats. In her ruling, Judge Phyllis Hamilton said NSO did not dispute that it "must have reverse-engineered and/or decompiled the WhatsApp software" to install its Pegasus spyware on devices, but raised questions about whether it had done so before agreeing to WhatsApp's terms of service.

Transportation

Drones Collide, Fall From Sky in Florida Light Show, Seriously Injuring 7-Year-Old Boy (yahoo.com) 79

"Drones collided, fell from the sky and hit a little boy after 'technical difficulties' during a holiday show..." reports the Orlando Sentinel.

They note that a press release from the city said the 8 p.m. show was then cancelled: The company behind the drones, Sky Elements, was in its second year of its contract with the city, the release said. In a statement released Sunday afternoon, Sky Elements said it operates drone shows throughout the country for millions of viewers annually and is committed to maintaining FAA safety regulations. The company wished a "speedy recovery" for those impacted by Saturday's show at Lake Eola. "The well-being of our audience is our utmost priority, and we regret any distress or inconvenience caused," the statement said. "We are diligently working with the FAA and City of Orlando officials to determine the cause and are committed to establishing a clear picture of what transpired."

The show is in its third year, often drawing crowds of roughly 25,000, according to the city, and there had never been an incident before. The Federal Aviation Administration regulates drones and light shows and permitted the Holiday Drone Show at Lake Eola on Saturday. The agency is now investigating the incident, which began when drones collided and fell into the crowd at the park, spokesperson Kristen Alsop said in an email... Eyewitness videos on social media show multiple green and red drones falling from the sky.

The mother of the 7-year-old boy hit by a falling drone told a local TV station that the holiday show "ended in nightmares," adding that it happened just days before Christmas. She believes big-audience drone light shows need more safety precautions. "This should not happen. No family should be going through this." She added on Facebook that her 7-year-old son is now "going into emergency heart surgery off of just trying to watch a drone show."

She adds that the city of Orlando and the drone company behind the light show "really have some explaining to do." Responding to comments on Facebook, she posted two hours ago: "Thank you everyone. He is still in surgery."
Facebook

Meta Fined $263 Million Over 2018 Security Breach That Affected 3 Million EU Users (techcrunch.com) 24

Meta has been fined around $263 million in the European Union for a Facebook security breach that affected millions of users which the company disclosed back in September 2018. From a report: The penalty, issued on Tuesday by Ireland's Data Protection Commission (DPC) -- enforcing the bloc's General Data Protection Regulation (GDPR) -- is far from being the largest GDPR fine Meta has been hit with since the regime came into force over five years ago but is notable for being a substantial sanction for a single security incident.

The breach it relates to dates back to July 2017 when Facebook, as the company was still known then, rolled out a video upload function that included a "View as" feature which let the user see their own Facebook page as it would be seen by another user. A bug in the design allowed users making use of the feature to invoke the video uploader in conjunction with Facebook's 'Happy Birthday Composer' facility to generate a fully permissioned user token that gave them full access to the Facebook profile of that other user. They could then use the token to exploit the same combination of features on other accounts -- gaining unauthorized access to multiple users' profiles and data, per the DPC.
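The chain of features described above amounted to a token-scoping flaw: a credential minted in a "View as" context carried the identity of the profile being viewed rather than the logged-in viewer. The sketch below is purely illustrative (all function names and structures are invented, not Facebook's actual code); it only models the logic the DPC describes.

```python
# Illustrative model of the "View as" token bug (invented names; not
# Facebook code). A token minted while previewing a page "as" another
# user was issued for that other user instead of the real viewer.

def mint_video_upload_token(session_user, view_as_user):
    """Buggy behavior: in "View as" mode, the uploader received the
    viewed profile's identity and issued a full token for it."""
    subject = view_as_user if view_as_user else session_user  # the flaw
    return {"subject": subject, "scope": "full"}

def mint_video_upload_token_fixed(session_user, view_as_user):
    """Fixed behavior: the token always carries the session user's
    identity, regardless of the "View as" display context."""
    return {"subject": session_user, "scope": "full"}

# An attacker previewing their page "as" a victim obtained a fully
# permissioned token for the victim's account, which could then be
# replayed against further accounts:
leaked = mint_video_upload_token("attacker", "victim")
assert leaked["subject"] == "victim"
```

The repeatable step at the end is what made the bug so damaging: each stolen token could be used to run the same feature combination from the compromised account, chaining access across profiles.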

The Courts

TikTok Asks Supreme Court To Block Law Banning Its US Operations (reuters.com) 134

An anonymous reader quotes a report from the New York Times: TikTok asked the Supreme Court on Monday to temporarily block a law that would effectively ban it in the United States in a matter of weeks. Saying that the law violates both its First Amendment rights and those of its 170 million American users, TikTok, which is controlled by a Chinese parent company, urged the justices to maintain the status quo while they decide whether to hear an appeal. "Congress's unprecedented attempt to single out applicants and bar them from operating one of the most significant speech platforms in this nation presents grave constitutional problems that this court likely will not allow to stand," lawyers for TikTok wrote in their emergency application.

President Biden signed the law this spring after it was enacted with wide bipartisan support. Lawmakers said the app's ownership represented a risk because the Chinese government's oversight of private companies would allow it to retrieve sensitive information about Americans or to spread propaganda, though they have not publicly shared evidence that this has occurred. They have also noted that American platforms like Facebook and YouTube are banned in China, and that TikTok itself is not allowed in the country.

Social Networks

Tech Platforms Diverge on Erasing Criminal Suspects' Digital Footprints (nytimes.com) 99

Social media giants confronted a familiar dilemma over user content moderation after Monday's arrest of Luigi Mangione, the suspect in the killing of UnitedHealthcare's CEO, highlighting the platforms' varied approaches to managing criminal suspects' digital footprints.

Meta quickly removed Mangione's Facebook and Instagram accounts under its "dangerous organizations and individuals" policy, while his account on X underwent a brief suspension before being reinstated with a premium subscription. LinkedIn maintained his profile, stating it did not violate platform policies. His Reddit account was suspended in line with the platform's policy on high-profile criminal suspects, while his Goodreads profile fluctuated between public and private status.

The New York Times adds: When someone goes from having a private life to getting public attention, online accounts they intended for a small circle of friends or acquaintances are scrutinized by curious strangers -- and journalists.

In some cases, these newly public figures or their loved ones can shut down the accounts or make them private. Others, like Mr. Mangione, who has been charged with murder, are cut off from their devices, leaving their digital lives open for the public's consumption. Either way, tech companies have discretion in what happens to the account and its content. Section 230 of the Communications Decency Act protects companies from legal liability for posts made by users.

Christmas Cheer

The 2024 'Advent Calendars' Offering Programming Language Tips, Space Photos, and Memories (perladvent.org) 2

Not every tech "advent calendar" involves programming puzzles. Instead the geek tradition of programming-language advent calendars "seems to have started way back in 2000," according to one history, "when London-based programmer Mark Fowler launched a calendar highlighting a different Perl module each day."

So the tradition continues...
  • Nearly a quarter of a century later, there's still a Perl Advent Calendar, celebrating tips and tricks like "a few special packages waiting under the tree that can give your web applications a little extra pep in their step."
  • Since 2009 web performance consultant (and former Yahoo and Facebook engineer) Stoyan Stefanov has been pulling together an annual Web Performance calendar with helpful blog posts.
  • There's also a JVM Advent calendar with daily helpful hints for Java programmers.
  • The HTMHell site — which bills itself as "a collection of bad practices in HTML, copied from real websites" — is celebrating the season with the "HTMHell Advent Calendar," promising daily articles on security, accessibility, UX, and performance.

And meanwhile developers at the Svelte frontend framework are actually promising to release something new each day, "whether it's a new feature in Svelte or SvelteKit or an improvement to the website!"

But not every tech advent calendar is about programming...

  • The Atlantic continues its 17-year tradition of a Space Telescope advent calendar, featuring daily images from both NASA's Hubble telescope and the James Webb Space Telescope.

Facebook

Meta Says It's Mistakenly Moderating Too Much (theverge.com) 78

An anonymous reader shares a report: Meta is mistakenly removing too much content across its apps, according to a top executive. Nick Clegg, Meta's president of global affairs, told reporters on Monday that the company's moderation "error rates are still too high" and pledged to "improve the precision and accuracy with which we act on our rules."

"We know that when enforcing our policies, our error rates are still too high, which gets in the way of the free expression that we set out to enable," Clegg said during a press call I attended. "Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly." He said the company regrets aggressively removing posts about the covid-19 pandemic. CEO Mark Zuckerberg recently told the Republican-led House Judiciary Committee the decision was influenced by pressure from the Biden administration.

"We had very stringent rules removing very large volumes of content through the pandemic," Clegg said. "No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We're acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content."

Social Networks

Bluesky's Open API Means Anyone Can Scrape Your Data for AI Training. It's All Public (techcrunch.com) 109

Bluesky says it will never train generative AI on its users' data. But despite that, "one million public Bluesky posts — complete with identifying user information — were crawled and then uploaded to AI company Hugging Face," reports Mashable (citing an article by 404 Media).

"Shortly after the article's publication, the dataset was removed from Hugging Face," the article notes, with the scraper at Hugging Face posting an apology. "While I wanted to support tool development for the platform, I recognize this approach violated principles of transparency and consent in data collection. I apologize for this mistake." But TechCrunch noted the incident's real lesson. "Bluesky's open API means anyone can scrape your data for AI training," calling it a timely reminder that everything you post on Bluesky is public. Bluesky might not be training AI systems on user content as other social networks are doing, but there's little stopping third parties from doing so...

Bluesky said that it's looking at ways to enable users to communicate their consent preferences externally, [but] the company posted: "Bluesky won't be able to enforce this consent outside of our systems. It will be up to outside developers to respect these settings. We're having ongoing conversations with engineers & lawyers and we hope to have more updates to share on this shortly!"

Mashable notes Bluesky's response to 404 Media — that Bluesky is like a website, and "Just as robots.txt files don't always prevent outside companies from crawling those sites, the same applies here."
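The robots.txt comparison is apt because that file is a purely advisory convention. A typical example (the paths and crawler name here are generic placeholders, not Bluesky's actual configuration) looks like this, and nothing technically prevents a crawler from ignoring it:

```text
# robots.txt — a voluntary convention, not an access control
User-agent: *
Disallow: /private/

# Blocking a specific AI crawler only works if that crawler
# chooses to honor the file:
User-agent: GPTBot
Disallow: /
```

Any consent signal Bluesky attaches to posts would work the same way: compliant tools could respect it, but publicly readable data remains publicly scrapable.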

So "While many commentators said that data collection should be opt in, others argued that Bluesky data is publicly available anyway and so the dataset is fair use," according to SiliconRepublic.com.
Network

Meta Plans $10 Billion Global 'Mother of All' Subsea Cables 63

Meta plans to build a $10 billion private, "mother of all" undersea fiber-optic cable network spanning over 40,000 kilometers around the world, according to TechCrunch. The project, dubbed "W" for its shape, would run from the U.S. east coast to the west coast via India, South Africa and Australia, avoiding regions prone to cable sabotage including the Red Sea and South China Sea.

The social media giant, which co-owns 16 existing cable networks, aims to gain full control over traffic prioritization for its services. The project mirrors Google's strategy of private cable ownership. The construction could take 5-10 years to complete.
Australia

Australia To Ban Under-16s From Social Media After Passing Landmark Law (yahoo.com) 214

Australia will ban children under 16 from using social media after its senate approved what will become a world-first law. From a report: Children will be blocked from using platforms including TikTok, Instagram, Snapchat and Facebook, a move the Australian government argues is necessary to protect their mental health and wellbeing.

The online safety amendment (social media minimum age) bill will impose fines of up to 50 million Australian dollars ($32.5 million) on platforms for systemic failures to prevent young children from holding accounts. It would take effect a year after the bill becomes law, allowing platforms time to work out technological solutions that would also protect users' privacy. The senate passed the bill 34 votes to 19. The house of representatives overwhelmingly approved the legislation 102 votes to 13 on Wednesday.

Technology

'Enshittification' Is Officially the Biggest Word of the Year (gizmodo.com) 166

The Macquarie Dictionary, the national dictionary of Australia, has picked "enshittification" as its word of the year. Gizmodo reports: The Australians define the word as "the gradual deterioration of a service or product brought about by a reduction in the quality of service provided, especially of an online platform, and as a consequence of profit-seeking." We've all felt this. Google search is filled with garbage. The internet is clogged with SEO-farming websites that crowd out useful results. Facebook is an endless stream of AI-generated slop. Zoom wants you to test out its new AI features while you're trying to go into a meeting. Twitter has become X, and its owner thinks sharing links is a waste of time. Last night I reinstalled Windows 11 on a desktop machine and got pissed as it was finalized and Microsoft kept trying to get me to install OneDrive, Office 360, Call of Duty Black Ops 6, and a bunch of other shit I didn't want. Writer and activist Cory Doctorow coined the term enshittification in 2022, and recently offered potential solutions to the phenomenon in an interview with The Register.

"We need to have prohibition and regulation that prohibits the capital markets from funding predatory pricing," he explained. "It's very hard to enter the market when people are selling things below cost. We need to prohibit predatory acquisitions. Look at Facebook: buying Instagram, and Mark Zuckerberg sending an email saying we're buying Instagram because people don't like Facebook and they're moving to Instagram, and we just don't want them to have anywhere else to go."
Google

Meta Wants Apple and Google to Verify the Age of App Downloaders (msn.com) 53

Meta wants to force Apple and Google to verify the ages of people downloading apps from their app stores, reports the Washington Post — and now Meta's campaign "is picking up momentum" with legislators in the U.S. Congress.

Federal and state lawmakers have recently proposed a raft of measures requiring that platforms such as Meta's Facebook and Instagram block users under a certain age from using their sites. The push has triggered fierce debate over the best way to ascertain how old users are online. Last year Meta threw its support behind legislation that would push those obligations onto app stores rather than individual app providers, like itself, as your regular host and Naomi Nix reported. While some states have considered the plan, it has not gained much traction in Washington.

That could be shifting. Two congressional Republicans are preparing a new age verification bill that places the burden on app stores, according to two people familiar with the matter, who spoke on the condition of anonymity to discuss the plans... The bill would be the first of its kind on Capitol Hill, where lawmakers have called for expanding guardrails for children amid concerns about the risks of social media but where political divisions have bogged down talks. The measure would give parents the right to sue an app store if their child was exposed to certain content, such as lewd or sexual material, according to a copy obtained by the Tech Brief. App stores could be protected against legal claims, however, if they took steps to protect children against harms, such as verifying their ages and giving parents the ability to block app downloads.

The article points out that U.S. lawmakers "have the power to set national standards that could override state efforts if they so choose..."
Crime

Meta Removed 2 Million Accounts Linked to Organized Crime 'Pig Butchering' Scams (cnet.com) 27

An anonymous reader shared this report from CNET: Meta says it's taken down more than 2 million accounts this year linked to overseas criminal gangs behind scam operations that human rights activists say forced hundreds of thousands of people to work as scammers and cost victims worldwide billions of dollars.

In a Thursday blog post, the parent of Facebook, Instagram and WhatsApp says the pig butchering scam operations — based in Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines — use platforms like Facebook and Instagram; dating, messaging, crypto and other kinds of apps; and texts and emails, to globally target people... [T]he scammers strike up an online relationship with their victims and gain their trust. Then they move their conversations to crypto apps or scam websites and dupe victims into making bogus investments or otherwise handing over their money, Meta said. They'll ask the victims to deposit money, often in the form of cryptocurrency, into accounts, sometimes even letting the victims make small withdrawals, in order to add a veneer of legitimacy. But once the victim starts asking for their investment back, or it becomes clear they don't have any more money to deposit, the scammer disappears and takes the money with them.

And the people doing the scamming are often victims themselves. During the COVID-19 pandemic, criminal gangs began building scam centers in Southeast Asia, luring in often unsuspecting job seekers with what looked like amazing postings on local job boards and other platforms, then forcing them to work as scammers, often under the threat of physical harm. The scope of what's become a global problem is staggering. In a report issued in May, the US Institute of Peace estimates that at least 300,000 people are being forced to work, or are otherwise suffering human rights violations, inside these scam centers. The report also estimates global financial losses stemming from the scams at $64 billion in 2023, with the number of financial victims in the millions.

Meta says it has focused on investigating and disrupting the scam operations for more than two years, working with nongovernmental organizations and other tech companies, like OpenAI, Coinbase and dating-app operator Match Group, along with law enforcement in both the US and the countries where the centers are located.

Meta titled its blog post "Cracking Down On Organized Crime Behind Scam Centers," writing "We hope that sharing our insights will help inform our industry's defenses so we can collectively help protect people from criminal scammers."
United States

Trump Picks Carr To Head FCC With Pledge To Fight 'Censorship Cartel' 233

Donald Trump has named FCC Commissioner Brendan Carr to chair the U.S. communications regulator when he takes office in January 2025, citing Carr's stance against what Trump called "regulatory lawfare." Carr, a lawyer and longtime Republican who has served at the FCC under both Trump and Biden administrations, has emerged as a vocal critic of major social media companies' content moderation practices.

"Humbled and honored" by the appointment, Carr pledged on X to "dismantle the censorship cartel." As the FCC's senior Republican commissioner, Carr has advocated for stricter oversight of technology companies, pushing for transparency rules on platforms like Google and Facebook, expanded rural broadband access, and tougher restrictions on Chinese-owned TikTok. Trump praised Carr as a "warrior for free speech" while announcing the appointment. During his campaign, Trump said he would seek to revoke licenses of television networks he views as biased.
