Privacy

Meta To Pay Record $1.4 Billion To Settle Texas Facial Recognition Suit (texastribune.org) 43

Meta will pay Texas $1.4 billion to settle a lawsuit alleging the company used personal biometric data without user consent, marking the largest privacy-related settlement ever obtained by a state. The Texas Tribune reports: The 2022 lawsuit, filed by Texas Attorney General Ken Paxton in state court, alleged that Meta had been using facial recognition software on photos uploaded to Facebook without Texans' consent. The settlement will be paid over five years. The attorney general's office did not say whether the money from the settlement would go into the state's general fund or if it would be distributed in some other way. The settlement, announced Tuesday, is not an admission of guilt, and Meta maintains it committed no wrongdoing. This was the first lawsuit Paxton's office argued under a 2009 state law that protects Texans' biometric data, like fingerprints and facial scans. The law requires businesses to inform and get consent from individuals before collecting such data. It also limits sharing this data, except in certain cases like helping law enforcement or completing financial transactions. Businesses must protect this data and destroy it within a year after it's no longer needed.

In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton's office, the feature was turned on by default and ran facial recognition on users' photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people's individual facial recognition data. As part of the settlement, Meta must notify the attorney general's office of anticipated or ongoing activities that may fall under the state's biometric data laws. If Texas objects, the parties have 60 days to attempt to resolve the issue. Meta officials said the settlement will make it easier for the company to discuss the implications and requirements of the state's biometric data laws with the attorney general's office, adding that data protection and privacy are core priorities for the firm.

Security

Passkey Adoption Has Increased By 400 Percent In 2024 (theverge.com) 21

According to a new report, password manager Dashlane has seen a 400 percent increase in passkey authentications since the beginning of the year, "with 1 in 5 active Dashlane users now having at least one passkey in their Dashlane vault," reports The Verge. From the report: Over 100 sites now offer passkey support, though Dashlane says the top 20 most popular apps account for 52 percent of passkey authentications. When split into industry sectors, e-commerce (which includes eBay, Amazon, and Target) made up the largest share of passkey authentications at 42 percent. So-called "sticky apps" -- meaning those used on a frequent basis, such as social media, e-commerce, and finance or payment sites -- saw the fastest passkey adoption between April and June of this year.

Other domains show surprising growth, though -- while Roblox is the only gaming category entry within the top 20 apps, its passkey adoption is outperforming giant platforms like Facebook, X, and Adobe, for example. Dashlane's report also found that passkey usage increased successful sign-ins by 70 percent compared to traditional passwords.
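Under the hood, a passkey is a public-key credential: the authenticator keeps a private key, the site stores only the matching public key, and each sign-in verifies a signature over a fresh random challenge, so there is no shared secret to phish or leak. The sketch below illustrates that WebAuthn-style challenge-response in miniature using the third-party pyca/cryptography package; it shows the principle only, not how Dashlane or any particular site actually implements passkeys.

```python
# Conceptual sketch of the signature ceremony behind passkeys (WebAuthn).
# Requires the third-party "cryptography" package (pip install cryptography).
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair and the site
# stores only the public key -- there is no password to steal.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Sign-in: the site sends a fresh random challenge, the authenticator
# signs it, and the site verifies the signature with the stored key.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# verify() returns None on success and raises InvalidSignature otherwise.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified")
```

Because the challenge is random per login and the signature is bound to it, a replayed or phished credential is useless elsewhere, which is the property driving the sign-in success numbers above.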

AI

What Is the Future of Open Source AI? (fb.com) 22

On Tuesday, Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning...

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him they want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Data Storage

LZ4 Compression Algorithm Gets Multi-Threaded Update (linuxiac.com) 44

Slashdot reader Seven Spirals brings news about the lossless compression algorithm LZ4: The already wonderful performance of the LZ4 compressor just got better with multi-threaded additions to its codebase. In many cases, LZ4 can compress data faster than it can be written to disk, giving this particular compressor some very special applications. The Linux kernel as well as filesystems like ZFS use LZ4 compression extensively. This makes LZ4 more comparable to the Zstd compression algorithm, which has had multi-threaded support for a while but cannot match LZ4 for speed, though Zstd generally achieves better compression ratios.
From Linuxiac.com:
- On Windows 11, using an Intel 7840HS CPU, compression time has improved from 13.4 seconds to just 1.8 seconds — a 7.4 times speed increase.
- macOS users with the M1 Pro chip will see a reduction from 16.6 seconds to 2.55 seconds, a 6.5 times speed increase.
- For Linux users on an i7-9700k, the compression time has been reduced from 16.2 seconds to 3.05 seconds, achieving a 5.4 times speed boost...

The release supports lesser-known architectures such as LoongArch, RISC-V, and others, ensuring LZ4's portability across various platforms.

AI

Open Source AI Better for US as China Will Steal Tech Anyway, Zuckerberg Argues (fb.com) 37

Meta CEO Mark Zuckerberg has advocated for open-source AI development, asserting it as a strategic advantage for the United States against China. In a blog post, Zuckerberg argued that closing off AI models would not effectively prevent Chinese access, given their espionage capabilities, and would instead disadvantage U.S. allies and smaller entities. He writes: Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.
Facebook

Meta Warns EU Regulatory Efforts Risk Bloc Missing Out on AI Advances 35

Meta has warned that the EU's approach to regulating AI is creating the "risk" that the continent is cut off from accessing cutting-edge services, while the bloc continues its effort to rein in the power of Big Tech. From a report: Rob Sherman, the social media group's deputy privacy officer and vice-president of policy, confirmed a report that it had received a request from the EU's privacy watchdog to voluntarily pause the training of its future AI models on data in the region. He told the Financial Times this was in order to give local regulators time to "get their arms around the issue of generative AI." While the Facebook owner is adhering to the request, Sherman said such moves were leading to a "gap in the technologies that are available in Europe versus" the rest of the world. He added that, with future and more advanced AI releases, "it's likely that availability in Europe could be impacted." Sherman said: "If jurisdictions can't regulate in a way that enables us to have clarity on what's expected, then it's going to be harder for us to offer the most advanced technologies in those places ... it is a realistic outcome that we're worried about."
AI

Meta Launches Powerful Open-Source AI Model Llama 3.1 20

Meta has released Llama 3.1, its largest open-source AI model to date, in a move that challenges the closed approaches of competitors like OpenAI and Google. The new model, boasting 405 billion parameters, is claimed by Meta to outperform GPT-4o and Claude 3.5 Sonnet on several benchmarks, with CEO Mark Zuckerberg predicting that Meta AI will become the most widely used assistant by year-end.

Llama 3.1, which Meta says was trained using over 16,000 Nvidia H100 GPUs, is being made available to developers through partnerships with major tech companies including Microsoft, Amazon, and Google, potentially reducing deployment costs compared to proprietary alternatives. The release includes smaller versions with 70 billion and 8 billion parameters, and Meta is introducing new safety tools to help developers moderate the model's output. While Meta isn't disclosing all of the data it used to train its models, the company confirmed it used synthetic data to enhance the model's capabilities. The company is also expanding its Meta AI assistant, powered by Llama 3.1, to support additional languages and integrate with its various platforms, including WhatsApp, Instagram, and Facebook, as well as its Quest virtual reality headset.
Facebook

Meta Risks Sanctions Over 'Sneaky' Ad-Free Plans Confusing Users, EU Says (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: The European Commission (EC) has finally taken action to block Meta's heavily criticized plan to charge a subscription fee to users who value privacy on its platforms. Surprisingly, this step wasn't taken under laws like the Digital Services Act (DSA), the Digital Markets Act (DMA), or the General Data Protection Regulation (GDPR). Instead, the EC announced Monday that Meta risked sanctions under EU consumer laws if it could not resolve key concerns about Meta's so-called "pay or consent" model. Meta's model is seemingly problematic, the commission said, because Meta "requested consumers overnight to either subscribe to use Facebook and Instagram against a fee or to consent to Meta's use of their personal data to be shown personalized ads, allowing Meta to make revenue out of it." Because users were given such short notice, they may have been "exposed to undue pressure to choose rapidly between the two models, fearing that they would instantly lose access to their accounts and their network of contacts," the EC said. To protect consumers, the EC joined national consumer protection authorities, sending a letter to Meta requiring the tech giant to propose solutions to resolve the commission's biggest concerns by September 1.

That Meta's "pay or consent" model may be "misleading" is a top concern because it uses the term "free" for ad-based plans, even though Meta "can make revenue from using their personal data to show them personalized ads." It seems that while Meta does not consider giving away personal information to be a cost to users, the EC's commissioner for justice, Didier Reynders, apparently does. "Consumers must not be lured into believing that they would either pay and not be shown any ads anymore, or receive a service for free, when, instead, they would agree that the company used their personal data to make revenue with ads," Reynders said. "EU consumer protection law is clear in this respect. Traders must inform consumers upfront and in a fully transparent manner on how they use their personal data. This is a fundamental right that we will protect." Additionally, the EC is concerned that Meta users might be confused about how "to navigate through different screens in the Facebook/Instagram app or web-version and to click on hyperlinks directing them to different parts of the Terms of Service or Privacy Policy to find out how their preferences, personal data, and user-generated data will be used by Meta to show them personalized ads." They may also find Meta's "imprecise terms and language" confusing, such as Meta referring to "your info" instead of clearly referring to consumers' "personal data."
A Meta spokesperson said in a statement: "Subscriptions as an alternative to advertising are a well-established business model across many industries. Subscription for no ads follows the direction of the highest court in Europe and we are confident it complies with European regulation."
Facebook

Nigeria Fines Meta $220 Million For Violating Consumer, Data Laws (reuters.com) 15

Nigeria fined Meta for $220 million on Friday, alleging the tech giant violated the country's local consumer, data protection and privacy laws. Reuters reports: Nigeria's Federal Competition and Consumer Protection Commission (FCCPC) said Meta appropriated the data of Nigerian users on its platforms without their consent, abused its market dominance by forcing exploitative privacy policies on users, and meted out discriminatory and disparate treatment on Nigerians, compared with other jurisdictions with similar regulations. FCCPC chief Adamu Abdullahi said the investigations were jointly held with Nigeria's Data Protection Commission and spanned over 38 months. The investigations found Meta policies don't allow users the option or opportunity to self-determine or withhold consent to the gathering, use, and sharing of personal data, Abdullahi said.

"The totality of the investigation has concluded that Meta over the protracted period of time has engaged in conduct that constituted multiple and repeated, as well as continuing infringements... particularly, but not limited to abusive, and invasive practices against data subjects in Nigeria," Abdullahi said. "Being satisfied with the significant evidence on the record, and that Meta has been provided every opportunity to articulate any position, representations, refutations, explanations or defences of their conduct, the Commission have now entered a final order and issued a penalty against Meta," Abdullahi said. The final order mandates steps and actions Meta must take to comply with local laws, Abdullahi said.

Facebook

Meta Won't Release Its Multimodal Llama AI Model in the EU (theverge.com) 26

Meta says it won't be launching its upcoming multimodal AI model -- capable of handling video, audio, images, and text -- in the European Union, citing regulatory concerns. From a report: The decision will prevent European companies from using the multimodal model, despite it being released under an open license. Just last week, the EU finalized compliance deadlines for AI companies under its strict new AI Act. Tech companies operating in the EU will generally have until August 2026 to comply with rules around copyright, transparency, and AI uses like predictive policing. Meta's decision follows a similar move by Apple, which recently said it would likely exclude the EU from its Apple Intelligence rollout due to concerns surrounding the Digital Markets Act.
Facebook

Meta Opens Pilot Program For Researchers To Study Instagram's Impact On Teen Mental Health (theatlantic.com) 13

An anonymous reader quotes a report from The Atlantic: Now, after years of contentious relationships with academic researchers, Meta is opening a small pilot program that would allow a handful of them to access Instagram data for up to about six months in order to study the app's effect on the well-being of teens and young adults. The company will announce today that it is seeking proposals that focus on certain research areas -- investigating whether social-media use is associated with different effects in different regions of the world, for example -- and that it plans to accept up to seven submissions. Once approved, researchers will be able to access relevant data from study participants -- how many accounts they follow, for example, or how much they use Instagram and when. Meta has said that certain types of data will be off-limits, such as user-demographic information and the content of media published by users; a full list of eligible data is forthcoming, and it is as yet unclear whether internal information related to ads that are served to users or Instagram's content-sorting algorithm, for example, might be provided. The program is being run in partnership with the Center for Open Science, or COS, a nonprofit. Researchers, not Meta, will be responsible for recruiting the teens, and will be required to get parental consent and take privacy precautions.
EU

Meta Won't Offer Future Multimodal AI Models In EU (axios.com) 33

According to Axios, Meta will withhold future multimodal AI models from customers in the European Union "due to the unpredictable nature of the European regulatory environment." From the report: Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said.

Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR -- the EU's existing data protection law. Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June. Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed. In June -- after announcing its plans publicly -- Meta was ordered to pause the training on EU data. A couple weeks later it received dozens of questions from data privacy regulators from across the region.

The United Kingdom has a nearly identical law to GDPR, but Meta says it isn't seeing the same level of regulatory uncertainty and plans to launch its new model for U.K. users. A Meta representative told Axios that European regulators are taking much longer to interpret existing law than their counterparts in other regions. A Meta representative told Axios that training on European data is key to ensuring its products properly reflect the terminology and culture of the region.

Facebook

Facebook Ads For Windows Desktop Themes Push Info-Stealing Malware (bleepingcomputer.com) 28

Cybercriminals are using Facebook business pages and advertisements to promote fake Windows themes that infect unsuspecting users with the SYS01 password-stealing malware. From a report: Trustwave researchers who observed the campaigns said the threat actors also promote fake downloads for pirated games and software, Sora AI, 3D image creator, and One Click Active. While using Facebook advertisements to push information-stealing malware is not new, the social media platform's massive reach makes these campaigns a significant threat.

The threat actors take out advertisements that promote Windows themes, free game downloads, and software activation cracks for popular applications, like Photoshop, Microsoft Office, and Windows. These advertisements are promoted through newly created Facebook business pages or by hijacking existing ones. When using hijacked Facebook pages, the threat actors rename them to suit the theme of their advertisement and to promote the downloads to the existing page members.

IT

Shipt's Pay Algorithm Squeezed Gig Workers. They Fought Back (ieee.org) 35

Workers at delivery company Shipt "found that their paychecks had become...unpredictable," according to an article in IEEE Spectrum. "They were doing the same work they'd always done, yet their paychecks were often less than they expected. And they didn't know why...."

The article notes that "Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque." But "The workers showed that it's possible to fight back against the opaque authority of algorithms, creating transparency despite a corporation's wishes." On Facebook and Reddit, workers compared notes. Previously, they'd known what to expect from their pay because Shipt had a formula: It gave workers a base pay of $5 per delivery plus 7.5 percent of the total amount of the customer's order through the app. That formula allowed workers to look at order amounts and choose jobs that were worth their time. But Shipt had changed the payment rules without alerting workers. When the company finally issued a press release about the change, it revealed only that the new pay algorithm paid workers based on "effort," which included factors like the order amount, the estimated amount of time required for shopping, and the mileage driven. The company claimed this new approach was fairer to workers and that it better matched the pay to the labor required for an order. Many workers, however, just saw their paychecks dwindling. And since Shipt didn't release detailed information about the algorithm, it was essentially a black box that the workers couldn't see inside.

The workers could have quietly accepted their fate, or sought employment elsewhere. Instead, they banded together, gathering data and forming partnerships with researchers and organizations to help them make sense of their pay data. I'm a data scientist; I was drawn into the campaign in the summer of 2020, and I proceeded to build an SMS-based tool — the Shopper Transparency Calculator [written in Python, using optical character recognition and Twilio, and running on a home server] — to collect and analyze the data. With the help of that tool, the organized workers and their supporters essentially audited the algorithm and found that it had given 40 percent of workers substantial pay cuts...

This "information asymmetry" helps companies better control their workforces — they set the terms without divulging details, and workers' only choice is whether or not to accept those terms... There's no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure... In a fairer world where workers have basic data rights and regulations require companies to disclose information about the AI systems they use in the workplace, this transparency would be available to workers by default.

The tool's creator was attracted to the idea of helping a community "control and leverage their own data," and ultimately received more than 5,600 screenshots from over 200 workers. 40% were earning at least 10% less — and about 33% were earning less than their state's minimum wage. Interestingly, "Sharing data about their work was technically against the company's terms of service; astoundingly, workers — including gig workers who are classified as 'independent contractors' — often don't have rights to their own data...

"[O]ur experiment served as an example for other gig workers who want to use data to organize, and it raised awareness about the downsides of algorithmic management. What's needed is wholesale changes to platforms' business models... The battles that gig workers are fighting are the leading front in the larger war for workplace rights, which will affect all of us. The time to define the terms of our relationship with algorithms is right now."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Social Networks

Threads Hits 175 Million Users After a Year (theverge.com) 35

Ahead of its one-year anniversary, Meta CEO Mark Zuckerberg announced that Threads has reached more than 175 million monthly active users. The Verge reports: Back when it arrived in the App Store on July 5th, 2023, Musk was taking a wrecking ball to the service formerly called Twitter and goading Zuckerberg into a literal cage match that never happened. A year later, Threads is still growing at a steady clip -- albeit not as quickly as its huge launch -- while Musk hasn't shared comparable metrics for X since he took over.

As with any social network, and especially for Threads, monthly users only tell part of the growth story. It's telling that, unlike Facebook, WhatsApp, and Instagram, Meta hasn't shared daily user numbers yet. That omission suggests Threads is still getting a lot of flyby traffic from people who have yet to become regular users. I've heard from Meta employees in recent months that much of the app's growth is still coming from it being promoted inside Instagram. Both apps share the same account system, which isn't expected to change.

AI

Brazil Data Regulator Bans Meta From Mining Data To Train AI Models 13

Brazil's national data protection authority ruled on Tuesday that Meta must stop using data originating in the country to train its artificial intelligence models. The Associated Press reports: Meta's updated privacy policy enables the company to feed people's public posts into its AI systems. That practice will not be permitted in Brazil, however. The decision stems from "the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said in the nation's official gazette. [...] Hye Jung Han, a Brazil-based researcher for the rights group Human Rights Watch, said in an email Tuesday that the regulator's action "helps to protect children from worrying that their personal data, shared with friends and family on Meta's platforms, might be used to inflict harm back on them in ways that are impossible to anticipate or guard against."

But the decision regarding Meta will "very likely" encourage other companies to refrain from being transparent in the use of data in the future, said Ronaldo Lemos, of the Institute of Technology and Society of Rio de Janeiro, a think-tank. "Meta was severely punished for being the only one among the Big Tech companies to clearly and in advance notify in its privacy policy that it would use data from its platforms to train artificial intelligence," he said. Compliance must be demonstrated by the company within five working days from the notification of the decision, and the agency established a daily fine of 50,000 reais ($8,820) for failure to do so.
In a statement, Meta said the company is "disappointed" by the decision and insists its method "complies with privacy laws and regulations in Brazil."

"This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil," a spokesperson for the company added.
Security

10-Year-Old Open Source Flaw Could Affect 'Almost Every Apple Device' (thecyberexpress.com) 23

storagedude shares a report from the Cyber Express: Some of the most widely used web and social media applications could be vulnerable to three newly discovered CocoaPods vulnerabilities -- including potentially millions of Apple devices, according to a report by The Cyber Express, the news service of threat intelligence vendor Cyble Inc. E.V.A Information Security researchers reported three vulnerabilities in the open source CocoaPods dependency manager that could allow malicious actors to take over thousands of unclaimed pods and insert malicious code into many of the most popular iOS and MacOS applications, potentially affecting "almost every Apple device." The researchers found vulnerable code in applications provided by Meta (Facebook, Whatsapp), Apple (Safari, AppleTV, Xcode), and Microsoft (Teams); as well as in TikTok, Snapchat, Amazon, LinkedIn, Netflix, Okta, Yahoo, Zynga, and many more.

The vulnerabilities have been patched, yet the researchers still found 685 Pods "that had an explicit dependency using an orphaned Pod; doubtless there are hundreds or thousands more in proprietary codebases." The newly discovered vulnerabilities -- one of which (CVE-2024-38366) received a 10 out of 10 criticality score -- actually date from a May 2014 CocoaPods migration to a new 'Trunk' server, which left 1,866 orphaned pods that owners never reclaimed. While the vulnerabilities have been patched, the work for developers and DevOps teams that used CocoaPods before October 2023 is just getting started. "Developers and DevOps teams that have used CocoaPods in recent years should verify the integrity of open source dependencies used in their application code," the E.V.A researchers said. "The vulnerabilities we discovered could be used to control the dependency manager itself, and any published package." [...] "Dependency managers are an often-overlooked aspect of software supply chain security," the researchers wrote. "Security leaders should explore ways to increase governance and oversight over the use of these tools."
"While there is no direct evidence of any of these vulnerabilities being exploited in the wild, evidence of absence is not absence of evidence." the EVA researchers wrote. "Potential code changes could affect millions of Apple devices around the world across iPhone, Mac, AppleTV, and AppleWatch devices."

While no action is required by app developers or users, the EVA researchers recommend several ways to protect against these vulnerabilities. To ensure secure and consistent use of CocoaPods, synchronize the podfile.lock file with all developers, perform CRC validation for internally developed Pods, and conduct thorough security reviews of third-party code and dependencies. Furthermore, regularly review and verify the maintenance status and ownership of CocoaPods dependencies, perform periodic security scans, and be cautious of widely used dependencies as potential attack targets.
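The lock-file synchronization step above can be sketched as a small integrity check: a Podfile.lock records a checksum for every resolved pod under its `SPEC CHECKSUMS` section, so comparing that section against a known-good baseline flags any dependency whose published contents changed. This is a minimal illustration under assumptions, not the E.V.A researchers' tooling; the function names and the baseline-comparison approach here are purely illustrative.

```python
import re

def parse_spec_checksums(lock_text):
    """Extract the SPEC CHECKSUMS section of a Podfile.lock as {pod: checksum}."""
    checksums = {}
    in_section = False
    for line in lock_text.splitlines():
        if line.startswith("SPEC CHECKSUMS:"):
            in_section = True
            continue
        if in_section:
            m = re.match(r"\s+([^:]+):\s*([0-9a-f]+)", line)
            if m:
                checksums[m.group(1)] = m.group(2)
            elif line.strip():
                # A non-indented, non-blank line means the section has ended.
                break
    return checksums

def diff_checksums(baseline, current):
    """Report pods whose checksums changed, and pods added or removed."""
    changed = {p for p in baseline.keys() & current.keys()
               if baseline[p] != current[p]}
    added = current.keys() - baseline.keys()
    removed = baseline.keys() - current.keys()
    return changed, added, removed
```

In practice a team might keep the baseline Podfile.lock in version control and run a check like this in CI, alongside the manual review of orphaned or unmaintained pods the researchers recommend.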
EU

Meta Defends Charging Fee For Privacy Amid Showdown With EU (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union's strict online competition law, the Digital Markets Act (DMA), by offering Facebook and Instagram subscriptions as an alternative for privacy-inclined users who want to opt out of ad targeting. Today, the European Commission (EC) announced preliminary findings that Meta's so-called "pay or consent" or "pay or OK" model -- which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads -- is not compliant with the DMA. According to the EC, Meta's advertising model violates the DMA in two ways. First, it "does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the 'personalized ads-based service.'" And second, it "does not allow users to exercise their right to freely consent to the combination of their personal data," the press release said.

Now, Meta will have a chance to review the EC's evidence and defend its policy, with today's findings kicking off a process that will take months. The EC's investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent "another important step" to ensure Meta's full compliance with the DMA. "The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access," Breton said. A Meta spokesperson told Ars that Meta plans to fight the findings -- which could trigger fines up to 10 percent of the company's worldwide turnover, as well as fines up to 20 percent for repeat infringement if Meta loses. The EC agreed that more talks were needed, writing in the press release, "the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance."
Meta continues to claim that its "subscription for no ads" model was "endorsed" by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year.

"Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA," Meta's spokesperson said. "We look forward to further constructive dialogue with the European Commission to bring this investigation to a close."

Meta rolled out its ad-free subscription service option last November. "Depending on where you purchase it will cost $10.5/month on the web or $13.75/month on iOS and Android," said the company in a blog post. "Regardless of where you purchase, the subscription will apply to all linked Facebook and Instagram accounts in a user's Accounts Center. As is the case for many online subscriptions, the iOS and Android pricing take into account the fees that Apple and Google charge through respective purchasing policies."
Social Networks

Threads Expands Fediverse Beta, Letting Users See Replies (and Likes) on Other Fediverse Sites like Mastodon (theverge.com) 16

An anonymous Slashdot reader shared this report from the Verge: Threads will now let people like and see replies to their Threads posts that appear on other federated social media platforms, the company announced on Tuesday.

Previously, if you made a post on Threads that was syndicated to another platform like Mastodon, you wouldn't be able to see responses to that post while still inside Threads. That meant you'd have to bounce back and forth between the platforms to stay up-to-date on replies... [I]n a screenshot, Meta notes that you can't reply to replies "yet," so it sounds like that feature will arrive in the future.

"Threads is Meta's first app built to be compatible with the fediverse..." according to a Meta blog post. "Our vision is that people using other fediverse-compatible servers will be able to follow and interact with people on Threads without having a Threads profile, and vice versa, connecting communities..." [If you turn on "sharing"...] "Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation."

And this week Instagram/Threads top executive Adam Mosseri posted that Threads is "also expanding the availability of the fediverse beta experience to more than 100 countries, and hope to roll it out everywhere soon."
Social Networks

'The Greatest Social Media Site Is Craigslist' (slate.com) 29

An anonymous reader quotes an op-ed for Slate, written by Amanda Chen: In August 2009, Wired magazine ran a cover story on Craigslist founder Craig Newmark titled "Why Craigslist Is Such a Mess." The opening paragraphs excoriate almost every aspect of the online classifieds platform as "underdeveloped," a "wasteland of hyperlinks," and demand that we, the public, ought to have higher standards. The same sentiment can be found across tech forums and trade publications, a missed opportunity that the average self-professed LinkedIn expert on #UX #UI #design would have you believe they are the first to point out. But as sites like Craigslist increasingly turn into digital artifacts, more people, myself included, are starting to see the beauty that belies those same features. Without them, where else on the internet could you find such ardent professions of desire or loneliness, or the random detritus of a life so steeply discounted?

The site has changed relatively little in both functionality and appearance since Newmark launched it in 1995 as a friends and family listserv for jobs and other opportunities. Yet in spite of that, it remains a household name whose niche in the contemporary digital landscape has yet to be usurped, with an estimated 180 million visits in May 2024. Though it's certainly not for lack of newcomers attempting to stake their claims on the booming C2C market; in the U.S., Facebook Marketplace, launched in 2016, is its closest direct competitor, followed by platforms like Nextdoor and OfferUp. Craigslist's business model is quite simple: Users in a few categories -- apartments in select cities, jobs, vehicles for sale -- pay a small but reasonable fee to make posts. Everything else is free. Its Perl-backed tech is straightforward. The team is relatively lean, as the company considers functions like sales and marketing superfluous. This strategy has allowed Craigslist to stay extremely profitable throughout the years without implementing sophisticated recommendation algorithms or inundating the webpage with third-party advertisements. Its runaway success threatens decades-old industry gospels of growth, disruption, and innovation, and might force tech evangelists to admit they don't fully understand what people want. [...]

These days I find myself casually browsing Craigslist in lieu of Instagram. Like readers of a local paper, I use it to keep a pulse on what's happening around me, even if I'll never know who these people are. That's beside the point. Perhaps Craigslist's single greatest cultural contribution, and my favorite place to lurk, is the "missed connections." The feature has inspired countless copycats, artistic reinterpretations, human interest stories, and analyses (one in particular extrapolated that Monday evenings are the most lovelorn time across the country). There is something deeply comforting about seeing those intangible threads of yearning which permeate a city so plainly laid out, as confirmation that you're not alone in wanting to be seen by others alive in the same place and time as you. Sometimes I'll peruse random job listings or the "free" section. This leads to the ever-amusing exercise, which I'll often invite friends to participate in, of speculating about the motivations and circumstances behind an object's acquisition and imminent relinquishment. I'll even visit the clunky, dial-up era-style discussion forums, subdivided into topics labeled things like "death and dying" or "haiku hotel," where a unique penchant for whimsy and romance can be felt deeply throughout. On Craigslist, a post can be a shout into the void that may or may not be returned, an affirmation of life, but regardless, in 45 days it's gone. Positioned somewhere in between digital ephemera and archive, the site's images and language are often utilitarian, occasionally unintelligible, and just when you least expect it, absurd, poetic, and profound.
"Frequently, technologists remain convinced that the market will eventually reveal a solution for all of our deep-seated societal problems, something that we can hack if only granted access to better tech," writes Chen, in closing. "From the start, the industry has advanced the idea that change is inherently good, even if only for its own sake, which can be viewed as symptomatic of the accelerating conditions of late-stage capitalism. Of course, there are many ways in which change is desperately needed in this moment, but when it comes to the particular case of Craigslist, it hardly seems necessary."
