Businesses

Peloton To Start Charging Subscribers With Used Equipment $95 Activation Fee (cnbc.com)

Peloton on Thursday said it will start charging new subscribers a one-time $95 activation fee if they bought their hardware on the secondary market as more consumers snag lightly used equipment for a fraction of the typical retail price. From a report: The used equipment activation fee for subscribers in the U.S. and Canada comes as Peloton starts to see a meaningful increase in new members who bought used Bikes or Treads from peer-to-peer markets such as Facebook Marketplace. During its fiscal fourth quarter, which ended June 30, Peloton said it saw a "steady stream of paid connected fitness subscribers" who bought hardware on the secondary market. The company said the segment grew 16% year over year.

"We believe a meaningful share of these subscribers are incremental, and they exhibit lower net churn rates than rental subscribers," the company said in a letter to shareholders. "It's also worth highlighting that this activation fee will be a source of incremental revenue and gross profit for us, helping to support our investments in improving the fitness experience for our members," interim co-CEO Christopher Bruzzo later added on a call with analysts.

Censorship

Russia Blocks Signal Messaging App (apnews.com)

Russia has blocked access to the encrypted Signal messaging app to "prevent the messenger's use for terrorist and extremist purposes." YouTube is also facing mass outages following repeated slowdowns in recent weeks. The Associated Press reports: Russian authorities expanded their crackdown on dissent and free media after Russian President Vladimir Putin sent troops into Ukraine in February 2022. They have blocked multiple independent Russian-language media outlets critical of the Kremlin, and cut access to Twitter, which later became X, as well as Meta's Facebook and Instagram.

In the latest blow to the freedom of information, YouTube faced mass outages on Thursday following repeated slowdowns in recent weeks. Russian authorities have blamed the slowdowns on Google's failure to upgrade its equipment in Russia, but many experts have challenged the claim, arguing that the likely reason for the slowdowns and the latest outage was the Kremlin's desire to shut public access to a major platform that carries opposition views.

Piracy

Mayor Shows Pirated Movie On Town Square Big Screen In Brazil (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: In Brazil, there was a [...] unbelievable display of public piracy last week that went on to make national headlines. The mayor of the municipality of Acopiara, in the north-east of the country, invited citizens of the small town of Trussu to join a screening of the blockbuster "Inside Out 2" at the local town square. With little more than a thousand inhabitants, many of whom have limited means, this appeared to be a kind gesture. The mayor, Anthony Almeida Neto, could use some positive publicity too; he was removed from office three times on suspicion of being involved in corruption schemes, and was most recently reinstated in March. The mayor officially announced the public screening of 'Inside Out 2' via Instagram and Facebook, inviting people to join him. That worked well, as a sizable crowd showed up, allowing the controversial mayor to proudly boast about the event's popularity through his social media channels.

Taking place in an open-air theater created just for this occasion, the screening was a unique opportunity for the small town's residents. There are no official movie theaters nearby, so locals would normally have to travel for several hours to see a film that's still in cinemas. Thanks to the mayor, people could see 'Inside Out 2' in their hometown instead. The mayor was pleased with the turnout too and proudly broadcast it through a livestream on Instagram. Amidst all this joy, however, people started to notice a watermark on the film that was clearly associated with piracy. In addition, it was apparent that the copy had been sourced from the pirate streaming site Obaflix. All signs indicate that the public event wasn't authorized or licensed. Instead, it appeared to be an improvised screening of a low-quality TS (telesync) release of the film, which is widely available through pirate sites. When this 'revelation' was picked up in the Brazilian press, Mayor Anthony Almeida was quick to respond with assurances that he only had honest intentions.

AI

Where Facebook's AI Slop Comes From (404media.co)

Facebook's AI-generated content problem is being fueled by its own creator bonus program, according to an investigation by 404 Media. The program incentivizes users, particularly from developing countries, to flood the platform with AI-generated images for financial gain. The outlet found that influencers in India and Southeast Asia are teaching followers how to exploit Facebook's algorithms and content moderation systems to go viral with AI-generated images. Many use Microsoft's Bing Image Creator to produce bizarre, often emotive content that garners high engagement.

"The post you are seeing now is of a poor man that is being used to generate revenue," said Indian YouTuber Gyan Abhishek in a video, pointing to an AI image of an emaciated elderly man. He claimed users could earn "$100 for 1,000 likes" through Facebook's bonus program. While exact payment rates vary, 404 Media verified that consistent viral posting can lead to significant earnings for users in countries like India. Meta has defended the program to 404 Media, stating it works as intended if content meets community standards and engagement is authentic.

Social Networks

Founder of Collapsed Social Media Site 'IRL' Charged With Fraud Over Faked Users (bbc.com)

This week America's Securities and Exchange Commission filed fraud charges against the former CEO of the startup social media site "IRL."

The BBC reports: IRL — which was once considered a potential rival to Facebook — took its name from its intention to get its online users to meet up in real life. However, the initial optimism evaporated after it emerged most of IRL's users were bots, with the platform shutting in 2023...

The SEC says it believes [CEO Abraham] Shafi raised about $170m by portraying IRL as the new success story in the social media world. It alleges he told investors that IRL had attracted the vast majority of its supposed 12 million users through organic growth. In reality, it argues, IRL was spending millions of dollars on advertisements which offered incentives to prospective users to download the IRL app. That expenditure, it is alleged, was subsequently hidden in the company's books.

IRL received multiple rounds of venture capital financing, eventually reaching "unicorn status" with a $1.17 billion valuation, according to TechCrunch. But it shut down in 2023 "after an internal investigation by the company's board found that 95% of the app's users were 'automated or from bots'."

TechCrunch notes it's the second time in the same week — and at least the fourth time in the past several months — that the SEC has charged a venture-backed founder on allegations of fraud... Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity "DiamondHands" to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya's Social Capital, Coinbase Ventures and Winklevoss Capital.

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko's customers, the number of candidates on its platform and the startup's revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images.

Privacy

Meta To Pay Record $1.4 Billion To Settle Texas Facial Recognition Suit (texastribune.org)

Meta will pay Texas $1.4 billion to settle a lawsuit alleging the company used personal biometric data without user consent, marking the largest privacy-related settlement ever obtained by a state. The Texas Tribune reports: The 2022 lawsuit, filed by Texas Attorney General Ken Paxton in state court, alleged that Meta had been using facial recognition software on photos uploaded to Facebook without Texans' consent. The settlement will be paid over five years. The attorney general's office did not say whether the money from the settlement would go into the state's general fund or if it would be distributed in some other way. The settlement, announced Tuesday, is not an admission of guilt, and Meta denies any wrongdoing. This was the first lawsuit Paxton's office argued under a 2009 state law that protects Texans' biometric data, like fingerprints and facial scans. The law requires businesses to inform and get consent from individuals before collecting such data. It also limits sharing this data, except in certain cases like helping law enforcement or completing financial transactions. Businesses must protect this data and destroy it within a year after it's no longer needed.

In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton's office, the feature was turned on by default and ran facial recognition on users' photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people's individual facial recognition data. As part of the settlement, Meta must notify the attorney general's office of anticipated or ongoing activities that may fall under the state's biometric data laws. If Texas objects, the parties have 60 days to attempt to resolve the issue. Meta officials said the settlement will make it easier for the company to discuss the implications and requirements of the state's biometric data laws with the attorney general's office, adding that data protection and privacy are core priorities for the firm.

Security

Passkey Adoption Has Increased By 400 Percent In 2024 (theverge.com)

According to a new report, password manager Dashlane has seen a 400 percent increase in passkey authentications since the beginning of the year, "with 1 in 5 active Dashlane users now having at least one passkey in their Dashlane vault," reports The Verge. From the report: Over 100 sites now offer passkey support, though Dashlane says the top 20 most popular apps account for 52 percent of passkey authentications. When split into industry sectors, e-commerce (which includes eBay, Amazon, and Target) made up the largest share of passkey authentications at 42 percent. So-called "sticky apps" -- meaning those used on a frequent basis, such as social media, e-commerce, and finance or payment sites -- saw the fastest passkey adoption between April and June of this year.

Other domains show surprising growth, though -- while Roblox is the only gaming category entry within the top 20 apps, its passkey adoption is outperforming giant platforms like Facebook, X, and Adobe, for example. Dashlane's report also found that passkey usage increased successful sign-ins by 70 percent compared to traditional passwords.

AI

What Is the Future of Open Source AI? (fb.com)

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.

"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that they want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Data Storage

LZ4 Compression Algorithm Gets Multi-Threaded Update (linuxiac.com)

Slashdot reader Seven Spirals brings news about the lossless compression algorithm LZ4: The already wonderful performance of the LZ4 compressor just got better with multi-threaded additions to its codebase. In many cases, LZ4 can compress data faster than it can be written to disk, giving this particular compressor some very special applications. The Linux kernel as well as filesystems like ZFS use LZ4 compression extensively. This makes LZ4 more comparable to the Zstd compression algorithm, which has had multi-threaded performance for a while but cannot match the LZ4 compressor for speed, though Zstd does offer some direct LZ4 format support.
From Linuxiac.com:
- On Windows 11, using an AMD Ryzen 7 7840HS CPU, compression time has improved from 13.4 seconds to just 1.8 seconds — a 7.4 times speed increase.
- macOS users with the M1 Pro chip will see a reduction from 16.6 seconds to 2.55 seconds, making it 6.5 times faster.
- For Linux users on an i7-9700k, the compression time has been reduced from 16.2 seconds to 3.05 seconds, achieving a 5.3 times speed boost...

The release supports lesser-known architectures such as LoongArch, RISC-V, and others, ensuring LZ4's portability across various platforms.
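The speedups above come from splitting input into independent blocks and compressing them on worker threads. Python has no LZ4 bindings in its standard library, so here is a minimal sketch of that chunk-and-parallelize idea using stdlib zlib instead (the function names are illustrative, not part of LZ4's actual API):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, chunk_size: int = 1 << 20, workers: int = 4) -> list[bytes]:
    """Split input into independent chunks and compress them in parallel.
    zlib releases the GIL while compressing, so threads give a real speedup."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunks(blocks: list[bytes]) -> bytes:
    """Decompress and reassemble the independently compressed chunks."""
    return b"".join(zlib.decompress(block) for block in blocks)

payload = b"abcdefgh" * 500_000  # ~4 MB of highly compressible data
blocks = compress_chunks(payload)
assert decompress_chunks(blocks) == payload
```

The trade-off, as with any block-independent multithreaded scheme, is a slightly worse compression ratio, since matches cannot span chunk boundaries.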

AI

Open Source AI Better for US as China Will Steal Tech Anyway, Zuckerberg Argues (fb.com)

Meta CEO Mark Zuckerberg has advocated for open-source AI development, asserting it as a strategic advantage for the United States against China. In a blog post, Zuckerberg argued that closing off AI models would not effectively prevent Chinese access, given their espionage capabilities, and would instead disadvantage U.S. allies and smaller entities. He writes: Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.

Facebook

Meta Warns EU Regulatory Efforts Risk Bloc Missing Out on AI Advances

Meta has warned that the EU's approach to regulating AI is creating the "risk" that the continent is cut off from accessing cutting-edge services, while the bloc continues its effort to rein in the power of Big Tech. From a report: Rob Sherman, the social media group's deputy privacy officer and vice-president of policy, confirmed a report that it had received a request from the EU's privacy watchdog to voluntarily pause the training of its future AI models on data in the region. He told the Financial Times this was in order to give local regulators time to "get their arms around the issue of generative AI." While the Facebook owner is adhering to the request, Sherman said such moves were leading to a "gap in the technologies that are available in Europe versus" the rest of the world. He added that, with future and more advanced AI releases, "it's likely that availability in Europe could be impacted." Sherman said: "If jurisdictions can't regulate in a way that enables us to have clarity on what's expected, then it's going to be harder for us to offer the most advanced technologies in those places ... it is a realistic outcome that we're worried about."

AI

Meta Launches Powerful Open-Source AI Model Llama 3.1

Meta has released Llama 3.1, its largest open-source AI model to date, in a move that challenges the closed approaches of competitors like OpenAI and Google. The new model, boasting 405 billion parameters, is claimed by Meta to outperform GPT-4o and Claude 3.5 Sonnet on several benchmarks, with CEO Mark Zuckerberg predicting that Meta AI will become the most widely used assistant by year-end.

Llama 3.1, which Meta says was trained using over 16,000 Nvidia H100 GPUs, is being made available to developers through partnerships with major tech companies including Microsoft, Amazon, and Google, potentially reducing deployment costs compared to proprietary alternatives. The release includes smaller versions with 70 billion and 8 billion parameters, and Meta is introducing new safety tools to help developers moderate the model's output. While Meta isn't disclosing exactly what data it used to train its models, the company confirmed it used synthetic data to enhance the model's capabilities. The company is also expanding its Meta AI assistant, powered by Llama 3.1, to support additional languages and integrate with its various platforms, including WhatsApp, Instagram, and Facebook, as well as its Quest virtual reality headset.

Facebook

Meta Risks Sanctions Over 'Sneaky' Ad-Free Plans Confusing Users, EU Says (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The European Commission (EC) has finally taken action to block Meta's heavily criticized plan to charge a subscription fee to users who value privacy on its platforms. Surprisingly, this step wasn't taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR). Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about Meta's so-called "pay or consent" model. Meta's model is seemingly problematic, the commission said, because Meta "requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta's use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it." Because users were given such short notice, they may have been "exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts," the EC said. To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission's biggest concerns by September 1.

That Meta's "pay or consent" model may be "misleading" is a top concern because it uses the term "free" for ad-based plans, even though Meta "can make revenue from using their personal data to show them personalized ads." It seems that while Meta does not consider giving away personal information to be a cost to users, the EC's commissioner for justice, Didier Reynders, apparently does. "Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads," Reynders said. "EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect." Additionally, the EC is concerned that Meta users might be confused about how "to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads." They may also find Meta's "imprecise terms and language" confusing, such as Meta referring to "your info" instead of clearly referring to consumers' "personal data."

A Meta spokesperson said in a statement: "Subscriptions as an alternative to advertising are a well-established business model across many industries. Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation."

Facebook

Nigeria Fines Meta $220 Million For Violating Consumer, Data Laws (reuters.com)

Nigeria fined Meta $220 million on Friday, alleging the tech giant violated the country's consumer, data protection and privacy laws. Reuters reports: Nigeria's Federal Competition and Consumer Protection Commission (FCCPC) said Meta appropriated the data of Nigerian users on its platforms without their consent, abused its market dominance by forcing exploitative privacy policies on users, and meted out discriminatory and disparate treatment to Nigerians, compared with other jurisdictions with similar regulations. FCCPC chief Adamu Abdullahi said the investigations were jointly held with Nigeria's Data Protection Commission and spanned over 38 months. The investigations found Meta policies don't allow users the option or opportunity to self-determine or withhold consent to the gathering, use, and sharing of personal data, Abdullahi said.

"The totality of the investigation has concluded that Meta over the protracted period of time has engaged in conduct that constituted multiple and repeated, as well as continuing infringements... particularly, but not limited to abusive, and invasive practices against data subjects in Nigeria," Abdullahi said. "Being satisfied with the significant evidence on the record, and that Meta has been provided every opportunity to articulate any position, representations, refutations, explanations or defences of their conduct, the Commission have now entered a final order and issued a penalty against Meta," Abdullahi said. The final order mandates steps and actions Meta must take to comply with local laws, Abdullahi said.

Facebook

Meta Won't Release Its Multimodal Llama AI Model in the EU (theverge.com)

Meta says it won't be launching its upcoming multimodal AI model -- capable of handling video, audio, images, and text -- in the European Union, citing regulatory concerns. From a report: The decision will prevent European companies from using the multimodal model, despite it being released under an open license. Just last week, the EU finalized compliance deadlines for AI companies under its strict new AI Act. Tech companies operating in the EU will generally have until August 2026 to comply with rules around copyright, transparency, and AI uses like predictive policing. Meta's decision follows a similar move by Apple, which recently said it would likely exclude the EU from its Apple Intelligence rollout due to concerns surrounding the Digital Markets Act.

Facebook

Meta Opens Pilot Program For Researchers To Study Instagram's Impact On Teen Mental Health (theatlantic.com)

An anonymous reader quotes a report from The Atlantic: Now, after years of contentious relationships with academic researchers, Meta is opening a small pilot program that would allow a handful of them to access Instagram data for up to about six months in order to study the app's effect on the well-being of teens and young adults. The company will announce today that it is seeking proposals that focus on certain research areas -- investigating whether social-media use is associated with different effects in different regions of the world, for example -- and that it plans to accept up to seven submissions. Once approved, researchers will be able to access relevant data from study participants -- how many accounts they follow, for example, or how much they use Instagram and when. Meta has said that certain types of data will be off-limits, such as user-demographic information and the content of media published by users; a full list of eligible data is forthcoming, and it is as yet unclear whether internal information related to ads that are served to users or Instagram's content-sorting algorithm, for example, might be provided. The program is being run in partnership with the Center for Open Science, or COS, a nonprofit. Researchers, not Meta, will be responsible for recruiting the teens, and will be required to get parental consent and take privacy precautions.

EU

Meta Won't Offer Future Multimodal AI Models In EU (axios.com)

According to Axios, Meta will withhold future multimodal AI models from customers in the European Union "due to the unpredictable nature of the European regulatory environment." From the report: Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said.

Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR -- the EU's existing data protection law. Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June. Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed. In June -- after announcing its plans publicly -- Meta was ordered to pause the training on EU data. A couple weeks later it received dozens of questions from data privacy regulators from across the region.

The United Kingdom has a nearly identical law to GDPR, but Meta says it isn't seeing the same level of regulatory uncertainty and plans to launch its new model for U.K. users. A Meta representative told Axios that European regulators are taking much longer to interpret existing law than their counterparts in other regions. A Meta representative told Axios that training on European data is key to ensuring its products properly reflect the terminology and culture of the region.

Facebook

Facebook Ads For Windows Desktop Themes Push Info-Stealing Malware (bleepingcomputer.com)

Cybercriminals are using Facebook business pages and advertisements to promote fake Windows themes that infect unsuspecting users with the SYS01 password-stealing malware. From a report: Trustwave researchers who observed the campaigns said the threat actors also promote fake downloads for pirated games and software, Sora AI, 3D image creator, and One Click Active. While using Facebook advertisements to push information-stealing malware is not new, the social media platform's massive reach makes these campaigns a significant threat.

The threat actors take out advertisements that promote Windows themes, free game downloads, and software activation cracks for popular applications, like Photoshop, Microsoft Office, and Windows. These advertisements are promoted through newly created Facebook business pages or by hijacking existing ones. When using hijacked Facebook pages, the threat actors rename them to suit the theme of their advertisement and to promote the downloads to the existing page members.

IT

Shipt's Pay Algorithm Squeezed Gig Workers. They Fought Back (ieee.org) 35

Workers at delivery company Shipt "found that their paychecks had become...unpredictable," according to an article in IEEE Spectrum. "They were doing the same work they'd always done, yet their paychecks were often less than they expected. And they didn't know why...."

The article notes that "Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque." But "The workers showed that it's possible to fight back against the opaque authority of algorithms, creating transparency despite a corporation's wishes." On Facebook and Reddit, workers compared notes. Previously, they'd known what to expect from their pay because Shipt had a formula: It gave workers a base pay of $5 per delivery plus 7.5 percent of the total amount of the customer's order through the app. That formula allowed workers to look at order amounts and choose jobs that were worth their time. But Shipt had changed the payment rules without alerting workers. When the company finally issued a press release about the change, it revealed only that the new pay algorithm paid workers based on "effort," which included factors like the order amount, the estimated amount of time required for shopping, and the mileage driven. The company claimed this new approach was fairer to workers and that it better matched the pay to the labor required for an order. Many workers, however, just saw their paychecks dwindling. And since Shipt didn't release detailed information about the algorithm, it was essentially a black box that the workers couldn't see inside.
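The original pay formula described above is simple arithmetic, which is exactly why workers could plan around it. A minimal sketch (illustrative only, not Shipt's actual code) of that pre-change formula, $5 base pay plus 7.5 percent of the customer's order total:

```python
def old_formula_pay(order_total: float) -> float:
    """Estimated payout per delivery under Shipt's original published formula."""
    BASE_PAY = 5.00     # flat fee per delivery
    COMMISSION = 0.075  # 7.5% of the customer's order amount
    return round(BASE_PAY + COMMISSION * order_total, 2)

# Because the formula was public, a worker could look at open orders
# and keep only the ones worth their time:
orders = [42.50, 120.00, 15.25]
payouts = [old_formula_pay(total) for total in orders]
print(payouts)
```

The replacement "effort-based" algorithm removed exactly this predictability: with the inputs and weights undisclosed, no such calculation was possible from the worker's side.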

The workers could have quietly accepted their fate, or sought employment elsewhere. Instead, they banded together, gathering data and forming partnerships with researchers and organizations to help them make sense of their pay data. I'm a data scientist; I was drawn into the campaign in the summer of 2020, and I proceeded to build an SMS-based tool — the Shopper Transparency Calculator [written in Python, using optical character recognition and Twilio, and running on a home server] — to collect and analyze the data. With the help of that tool, the organized workers and their supporters essentially audited the algorithm and found that it had given 40 percent of workers substantial pay cuts...

This "information asymmetry" helps companies better control their workforces — they set the terms without divulging details, and workers' only choice is whether or not to accept those terms... There's no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure... In a fairer world where workers have basic data rights and regulations require companies to disclose information about the AI systems they use in the workplace, this transparency would be available to workers by default.

The tool's creator was attracted to the idea of helping a community "control and leverage their own data," and ultimately received more than 5,600 screenshots from over 200 workers. 40% were earning at least 10% less — and about 33% were earning less than their state's minimum wage. Interestingly, "Sharing data about their work was technically against the company's terms of service; astoundingly, workers — including gig workers who are classified as 'independent contractors' — often don't have rights to their own data...

"[O]ur experiment served as an example for other gig workers who want to use data to organize, and it raised awareness about the downsides of algorithmic management. What's needed is wholesale changes to platforms' business models... The battles that gig workers are fighting are the leading front in the larger war for workplace rights, which will affect all of us. The time to define the terms of our relationship with algorithms is right now."

Thanks to long-time Slashdot reader mspohr for sharing the article.

Social Networks

Threads Hits 175 Million Users After a Year (theverge.com) 35

Ahead of its one-year anniversary, Meta CEO Mark Zuckerberg announced that Threads has reached more than 175 million monthly active users. The Verge reports: Back when it arrived in the App Store on July 5th, 2023, Musk was taking a wrecking ball to the service formerly called Twitter and goading Zuckerberg into a literal cage match that never happened. A year later, Threads is still growing at a steady clip -- albeit not as quickly as its huge launch -- while Musk hasn't shared comparable metrics for X since he took over.

As with any social network, and especially for Threads, monthly users only tell part of the growth story. It's telling that, unlike with Facebook, WhatsApp, and Instagram, Meta hasn't yet shared daily user numbers for Threads. That omission suggests Threads is still getting a lot of flyby traffic from people who have yet to become regular users. I've heard from Meta employees in recent months that much of the app's growth is still coming from it being promoted inside Instagram. Both apps share the same account system, which isn't expected to change.
