Youtube

YouTube Tells Open-Source Privacy Software 'Invidious' to Shut Down (vice.com) 42

YouTube has sent a cease-and-desist letter to Invidious, an open-source "alternative front-end" to the website which allows users to watch videos without having their data tracked, claiming it violates YouTube's API policy and demanding that it be shut down within seven days. From a report: "We recently became aware of your product or service, Invidious," reads the letter, which was posted on the Invidious GitHub last week. "Your Client appears to be in violation of the YouTube API Services Terms of Service and Developer Policies." The letter then delineates the policies which Invidious is accused of having violated, such as not displaying a link to YouTube's Terms of Service or "clearly" explaining what it does with user information. Invidious is open-source software licensed under AGPL-3.0, and it markets itself as a way for users to interact with YouTube without allowing the site to collect their data, or having to make an account. "Invidious protects you from the prying eyes of Google," its homepage reads. "It won't track you either!" Invidious also allows users to watch videos without being interrupted by "annoying ads," which is how YouTube makes most of its money.
Youtube

Why YouTube Could Give Google an Edge in AI (theinformation.com) 30

Google last month upgraded its Bard chatbot with a new machine-learning model that can better understand conversational language and compete with OpenAI's ChatGPT. As Google develops a sequel to that model, it may hold a trump card: YouTube. From a report: The video site, which Google owns, is the single biggest and richest source of imagery, audio and text transcripts on the internet. And Google's researchers have been using YouTube to develop its next large language model, Gemini, according to a person with knowledge of the situation. The value of YouTube hasn't been lost on OpenAI, either: The startup has secretly used data from the site to train some of its artificial intelligence models, said one person with direct knowledge of the effort. AI practitioners who compete with Google say the company may gain an edge from owning YouTube, which gives it more complete access to the video data than rivals that scrape the videos. That's especially important as AI developers face new obstacles to finding high-quality data on which to train and improve their models. Major website publishers from Reddit to Stack Exchange to DeviantArt are increasingly blocking developers from downloading data for that purpose. Before those walls came up, AI startups used data from such sites to develop AI models, according to the publishers and disclosures from the startups.

The advantage that Google gains in AI from owning YouTube may reinforce concerns among antitrust regulators about Google's power. On Wednesday, the European Commission kicked off a complaint about Google's power in the ad tech world, contending that Google favors its "own online display advertising technology services to the detriment of competing providers." The U.S. Department of Justice in January sued Google over similar issues. Google could use audio transcriptions or descriptions of YouTube videos as another source of text for training Gemini, leading to more-sophisticated language understanding and the ability to generate more-realistic conversational responses. It could also integrate video and audio into the model itself, giving it the multimodal capabilities many researchers believe are the next frontier in AI, according to interviews with nearly a dozen people who work on these types of machine-learning models. Google CEO Sundar Pichai told investors earlier this month that Gemini, which is still in development, is exhibiting multimodal capabilities not seen in any other model, though he didn't elaborate.

IT

30 Years of Change, 30 Years of PDF (pdfa.org) 53

PDF Association, in a blog post: We live in a world where the only constant is accelerating change. The twists and turns in the technology landscape over the last 30 years have drained some of the hype from the early days of the consumer digital era. Today we are confronted with all-new, even more disruptive, possibilities. Along with the drama of the internet, the web, broadband, smartphones, mobile broadband, social media, and AI, the last thirty years have revealed some persistent truths about how people use and think about information and communication. From the vantage point of 2023 we are positioned to recognize 1993 as a year of two key developments: the first specification of HTML, the language of the web, and the first specification of PDF, the language of documents. Today, both technologies predominate in their respective use cases. They coexist because they meet deeply related but distinct needs.
Google

Google Warns Staff About Chatbots (reuters.com) 10

Alphabet is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, Reuters reported Thursday, citing people familiar with the matter. From the report: The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information. The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk.

Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said. Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology. The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT.

Government

Texas Bans Kids From Social Media Without Parental Consent (theverge.com) 254

Texas Governor Greg Abbott has signed a bill prohibiting children under 18 from joining various social media platforms without parental consent. Similar legislation has been passed in Utah and Louisiana. The Verge reports: The bill, HB 18, requires social media companies to receive explicit consent from a minor's parent or guardian before they'd be allowed to create their own accounts starting in September of next year. It also forces these companies to prevent children from seeing "harmful" content -- like content related to eating disorders, substance abuse, or "grooming" -- by creating new filtering systems.

Texas' definition of a "digital service" is extremely broad. Under the law, parental consent would be necessary for kids trying to access nearly any site that collects identifying information, like an email address. There are some exceptions, including sites that primarily deliver educational or news content and email services. The Texas attorney general could sue companies found to have violated this law. The law's requirements to filter loosely defined "harmful material" and provide parents with control over their child's accounts mirror language in some federal legislation that has spooked civil and digital rights groups.

Like HB 18, the US Senate-led Kids Online Safety Act orders platforms to prevent minors from being exposed to content related to disordered eating and other destructive behaviors. But critics fear this language could encourage companies like Instagram or TikTok to overmoderate non-harmful content to avoid legal challenges. Overly strict parental controls could also harm kids in abusive households, allowing parents to spy on marginalized children searching for helpful resources online.

Security

JPL Creates World's Largest PDF Archive to Aid Malware Research 21

NASA's Jet Propulsion Laboratory (JPL) has created the largest open-source archive of PDFs as part of DARPA's Safe Documents program, with the aim of improving internet security. The corpus consists of approximately 8 million PDFs collected from the internet. From a press release: "PDFs are used everywhere and are important for contracts, legal documents, 3D engineering designs, and many other purposes. Unfortunately, they are complex and can be compromised to hide malicious code or render different information for different users in a malicious way," said Tim Allison, a data scientist at JPL in Southern California. "To confront these and other challenges from PDFs, a large sample of real-world PDFs needs to be collected from the internet to create a shared, freely available resource for software experts." Building the corpus was no easy task. As a starting point, Allison's team used Common Crawl, an open-source public repository of web-crawl data, to identify a wide variety of PDFs to be included in the corpus -- files that are publicly available and not behind firewalls or in private networks. Conducted between July and August 2021, the crawl identified roughly 8 million PDFs.

Common Crawl limits downloaded data to 1 megabyte per file, meaning larger files were incomplete. But researchers need the entire PDF, not a truncated version, in order to conduct meaningful research. The file-size limit reduced the number of complete, untruncated files extracted directly from Common Crawl to 6 million. To get the other 2 million PDFs and ensure the corpus was complete, the JPL team re-fetched the truncated files using specialized software that downloaded the complete files from their original web addresses. Various metadata, such as the software used to create each PDF, was extracted and is included with the corpus. The JPL team also relied on free, publicly available geolocation software to identify the server location of the source website for each PDF. The complete data set totals about 8 terabytes, making it the largest publicly available corpus of its kind.
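The press release doesn't include JPL's re-fetching software, but the core step it describes -- spotting a PDF cut off at Common Crawl's 1 MB limit and re-downloading the full file from its original address -- can be sketched in a few lines. The function names and the `%%EOF` heuristic below are illustrative assumptions, not JPL's actual implementation:

```python
import urllib.request

def looks_truncated(pdf_bytes: bytes, tail_window: int = 1024) -> bool:
    """Heuristic completeness check: a well-formed PDF ends with an
    %%EOF marker in its trailer, so its absence near the end of the
    buffer is a cheap (if imperfect) sign the download was cut off."""
    return b"%%EOF" not in pdf_bytes[-tail_window:]

def refetch_if_truncated(pdf_bytes: bytes, url: str) -> bytes:
    """Return the original bytes if they look complete; otherwise
    re-download the whole file from the PDF's source web address."""
    if not looks_truncated(pdf_bytes):
        return pdf_bytes
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

In practice a crawler at this scale would also need retries, timeouts, and polite rate limiting, but the truncation test itself is this simple.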

The corpus will do more than help researchers identify threats. Privacy researchers, for example, could study these files to determine how file-creation and editing software can be improved to better protect personal information. Software developers could use the files to find bugs in their code and to check if old versions of software are still compatible with newer versions of PDFs. The Digital Corpora project hosts the huge data archive as part of Amazon Web Services' Open Data Sponsorship Program, and the files have been packaged in easily downloadable zip files.
Medicine

Google Lens Can Now Search For Skin Conditions 11

Google Lens, the company's computer vision-powered app that scans objects and brings up relevant information, is now able to search for skin conditions, like moles and rashes. "Uploading a picture or photo through Lens will kick off a search for visual matches, which will also work for other physical maladies that you might not be sure how to describe with words (like a bump on the lip, a line on nails or hair loss)," reports TechCrunch. From the report: It's a step short of the AI-driven app Google launched in 2021 to diagnose skin, hair and nail conditions. That app, which debuted first in the E.U., faced barriers to entry in the U.S., where it would have had to be approved by the Food and Drug Administration. (Google declined to seek approval.) Still, the Lens feature might be useful for folks deciding whether to seek medical attention or over-the-counter treatments. Lens integration with Google Bard is also coming soon. "Users will be able to include images in their Bard prompts and Lens will work behind the scenes to help Bard make sense of what's being shown," reports TechCrunch.
The Internet

Bay Area Woman Is On a Crusade To Prove Yelp Reviews Can't Be Trusted (sfgate.com) 59

An anonymous reader quotes a report from SFGATE: A strange letter showed up on Kay Dean's doorstep. It was 2017, and the San Jose resident had left a one-star review on the Yelp page of a psychiatry office in Los Altos. Then the letter arrived: It seemed the clinic had hired a local lawyer to demand that Dean remove her negative review or face a lawsuit. The envelope included a $50 check. Dean, who once worked as a criminal investigator in the U.S. Department of Education's Office of Inspector General, smelled something fishy. She decided to look into the clinic, part of a small California chain called SavantCare. By the time her work was done, she'd found a higher calling -- and SavantCare's ex-CEO was fighting felony charges.

Since then, Dean, 60, has mounted a yearslong crusade against Yelp and the broader online review ecosystem from a home office in San Jose. Yelp, founded in San Francisco in 2004, is deeply entrenched in American consumer habits, and has burrowed itself into the larger consciousness through partnerships with the likes of Apple Maps. The company's crowdsourced reviews undergird the internet's web of recommendations and can send businesses droves of customers -- or act as an insurmountable black mark. Dean follows fake reviews from their origins in social media groups to when they hit the review sites, methodically documenting hours of research in spreadsheets and little-watched YouTube videos. Targets accuse her of an unreasonable fixation. Yelp claims it aggressively and effectively weeds out fakes. But Dean disagrees, and she's out to convince America that Yelp, Google and other purveyors of reviews cannot be trusted.

"This is an issue that affects millions of consumers, and thousands of honest businesses," she said in her YouTube page's introductory post on April 30, 2020, facing the camera dead-on. "I'm creating these videos to expose this massive fraud against the American public and shine a light on Big Tech's culpability." "I don't do it lightly. If I put a video up, it's serious," she told SFGATE in May. "I'm putting myself out there." Dean is particularly motivated by the types of small businesses that she's found gaming Yelp's recommendation algorithm. She has spotted seemingly paid-for reviews on the pages of lawyers, home contractors, and doctors' offices -- high-ticket companies for which she says she'd "rather have no information than fake information."

AI

McKinsey Report Finds Generative AI Could Add Up To $4.4 Trillion a Year To the Global Economy (venturebeat.com) 39

According to global consulting leader McKinsey and Company, Generative AI could add "$2.6 trillion to $4.4 trillion annually" to the global economy. That's almost the "economic equivalent of adding an entire new country the size and productivity of the United Kingdom to the Earth ($3.1 trillion GDP in 2021)," notes VentureBeat. From the report: The $2.6 trillion to $4.4 trillion economic impact figure marks a huge increase over McKinsey's previous estimates of the AI field's impact on the economy from 2017, up 15 to 40% from before. This upward revision is due to the incredibly fast embrace and potential use cases of GenAI tools by large and small enterprises. Furthermore, McKinsey finds "current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70% of employees' time today." Does this mean massive job loss is inevitable? No, according to Alex Sukharevsky, senior partner and global leader of QuantumBlack, McKinsey's in-house AI division and report co-author. "You basically could make it significantly faster to perform these jobs and do so much more precisely than they are performed today," Sukharevsky told VentureBeat. What that translates to is an addition of "0.2 to 3.3 percentage points annually to productivity growth" to the entire global economy, he said.

However, as the report notes, "workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world." Also, the advent of accessible GenAI has pushed up McKinsey's previous estimates for workplace automation: "Half of today's work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates."

Specifically, McKinsey's report found that four types of tasks -- customer operations, marketing and sales, software engineering and R&D -- were likely to account for 75% of the value add of GenAI in particular. "Examples include generative AI's ability to support interactions with customers, generate creative content for marketing and sales and draft computer code based on natural-language prompts, among many other tasks." [...] Overall, McKinsey views GenAI as a "technology catalyst," pushing industries further along toward automation journeys, but also freeing up the creative potential of employees. "I do believe that if anything, we are getting into the age of creativity and the age of creator," Sukharevsky said.

The Internet

A San Francisco Library Is Turning Off Wi-Fi At Night To Keep People Without Housing From Using It (theverge.com) 251

In San Francisco's District 8, a public library has turned off its Wi-Fi outside of business hours in response to complaints from neighbors and the city supervisor's office about open drug use and disturbances caused by unhoused individuals. The Verge reports: In San Francisco's District 8, a public library has been shutting down Wi-Fi outside business hours for nearly a year. The measure, quietly implemented in mid-2022, was made at the request of neighbors and the office of city supervisor Rafael Mandelman. It's an attempt to keep city dwellers who are currently unhoused away from the area by locking down access to one of the library's most valuable public services. A local activist known as HDizz revealed details behind the move last month, tweeting public records of a July 2022 email exchange between local residents and the city supervisor's office. In the emails, residents complained about open drug use and sidewalks blocked by residents who are unhoused. One relayed a secondhand story about a library worker who had been followed to her car. And by way of response, they demanded the library limit the hours Wi-Fi was available. "Why are the vagrants and drug addicts so attracted to the library?" one person asked rhetorically. "It's the free 24/7 wi-fi."

San Francisco's libraries have been historically progressive when it comes to providing resources to people who are unhoused, even hiring specialists to offer assistance. But on August 1st, reports San Francisco publication Mission Local, city librarian Michael Lambert met with Mandelman's office to discuss the issue. The next day, District 8's Eureka Valley/Harvey Milk Memorial branch began turning its Wi-Fi off after hours -- a policy that San Francisco Public Library (SFPL) spokesperson Jaime Wong told The Verge via email remains in place today.

In the initial months after the decision, the library apparently received no complaints. But in March, a little over seven months following the change, it got a request to reverse the policy. "I'm worried about my friend," the email reads, "whom I am trying to get into long term residential treatment." San Francisco has shelters, but the requester said their friend had trouble communicating with the staff and has a hard time being around people who used drugs, among other issues. Because this friend has no regular cell service, "free wifi is his only lifeline to me [or] for that matter any services for crisis or whatever else." The resident said some of the neighborhood's residents "do not understand what they do to us poor folks nor the homeless by some of the things they do here."
Jennifer Friedenbach of San Francisco's Coalition on Homelessness told The Verge in a phone interview that "folks are not out there on the streets by choice. They're destitute and don't have other options. These kinds of efforts, like turning off the Wi-Fi, just exacerbate homelessness and have the opposite effect. Putting that energy into fighting for housing for unhoused neighbors would be a lot more effective."
Transportation

Feds Tell Automakers Not To Comply With Massachusetts 'Right To Repair' Law (arstechnica.com) 89

An anonymous reader quotes a report from Ars Technica: In 2020, voters in Massachusetts chose to extend that state's automotive "right to repair" law to include telematics and connected car services. But this week, the National Highway Traffic Safety Administration told automakers that some of the law's requirements create a real safety problem and that they should be ignored since federal law preempts state law when the two conflict. Almost all new cars in 2023 contain embedded modems and offer some form of telematics or connected car services. And the ballot language that passed in Massachusetts requires "manufacturers that sell vehicles with telematics systems in Massachusetts to equip them with a standardized open data platform beginning with model year 2022 that vehicle owners and independent repair facilities may access to retrieve mechanical data and run diagnostics through a mobile-based application."

There have been attempts by state lawmakers, the auto industry, and NHTSA to tweak the law to create a more reasonable timeline for implementation, but to no avail. Now, according to Reuters, NHTSA has written to automakers to advise them not to comply with the Massachusetts law. Among its problems are the fact that someone "could utilize such open access to remotely command vehicles to operate dangerously, including attacking multiple vehicles concurrently," and that "open access to vehicle manufacturers' telematics offerings with the ability to remotely send commands allows for manipulation of systems on a vehicle, including safety-critical functions such as steering, acceleration, or braking." Faced with this dilemma, it's quite possible the automakers will respond by simply disabling telematics and connected services for customers in the state. Subaru already took that step when it introduced its model year 2022 vehicles, and NHTSA says other OEMs may do the same.

AI

Bipartisan Bill Denies Section 230 Protection for AI (axios.com) 34

Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday. From a report: Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 of the Communications Decency Act, the law that largely shields platforms from lawsuits over third-party content. It's a newly urgent issue thanks to the explosive growth of generative AI. The new bipartisan bill bolsters the argument that Section 230 doesn't cover AI-generated work. It also gives lawmakers an opening to go after Section 230 after vowing to amend it, without much success, for years.

Section 230 is often credited as the law that allowed the internet to flourish and social media to take off, along with websites hosting travel listings and restaurant reviews. To its detractors, it goes too far and is not fit for today's web, allowing social media companies to leave too much harmful content online. Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.

Google

Google Is Weaving Generative AI Into Online Shopping Features (bloomberg.com) 10

Google is bringing generative AI technology to shopping, aiming to get a jump on e-commerce sites like Amazon. From a report: The Alphabet-owned company announced features Wednesday aimed at helping people understand how apparel will fit on them, no matter their body size, and added capabilities for finding products using its search and image-recognition technology. Additionally, Google introduced new ways to research travel destinations and map routes using generative AI -- technology that can craft text, images or even video from simple prompts.

"We want to make Google the place for consumers to come shop, as well as the place for merchants to connect with consumers," Maria Renz, Google's vice president of commerce, said in an interview ahead of the announcement. "We've always been committed to an open ecosystem and a healthy web, and this is one way where we're bringing this technology to bear across merchants." Google is the world's dominant search engine, but 46% of respondents in a survey of US shoppers conducted last year said they still started their product searches and research on Amazon, according to the research firm CivicScience. TikTok, too, is making inroads, CivicScience's research found -- 18% of Gen Z online shoppers turn to the platform first. Google is taking note, with some of its new, AI-powered shopping exploration features aimed at capturing younger audiences.

A new virtual "try-on" feature, launching on Wednesday, will let people see how clothes fit across a range of body types, from XXS to 4XL sizes. Apparel will be overlaid on top of images of diverse models that the company photographed while developing the capability. Google said it was able to launch such a service because of a new image-based AI model that it developed internally, and the company is releasing a new research paper detailing its work alongside the announcement.

Businesses

Comcast Complains To FCC That Listing All of Its Monthly Fees is Too Hard (arstechnica.com) 109

mschaffer shares a report: Comcast and other ISPs have annoyed customers for many years by advertising low prices and then charging much bigger monthly bills by tacking on a variety of fees. While some of these fees are related to government-issued requirements and others are not, poorly trained customer service reps have been known to falsely tell customers that fees created by Comcast are mandated by the government. The FCC rules will force ISPs to accurately describe fees in labels given to customers, but Comcast said it wants the FCC to rescind a requirement related to "fees that ISPs may, but are not obligated to, pass through to customers." These include state Universal Service fees and other local fees. As Comcast makes clear, it isn't required to pass these costs on to customers in the form of separate fees. Comcast could stop charging the fees and raise its advertised prices by the corresponding amount to more accurately convey its actual prices to customers. Instead, Comcast wants the FCC to change the rule so that it can continue charging the fees without itemizing them.

I suppose it's just easier to grab people's money than it is to make up names for the fees, mschaffer adds.

Google

Google Faces EU Break-Up Order Over Anti-Competitive Adtech Practices (reuters.com) 51

Alphabet's Google may have to sell part of its lucrative adtech business to address concerns about anti-competitive practices, EU regulators said on Wednesday, threatening the company with its harshest regulatory penalty to date. From a report: The European Commission set out its charges in a statement of objections to Google two years after opening an investigation into behaviours such as favouring its own advertising services, which could also lead to a fine of as much as 10% of Google's annual global turnover. The stakes are higher for Google in this latest clash with regulators as it concerns the company's biggest money maker, with the adtech business accounting for 79% of total revenue last year.

Its 2022 advertising revenue, including from search services, Gmail, Google Play, Google Maps, YouTube adverts, Google Ad Manager, AdMob and AdSense, amounted to $224.5 billion. EU antitrust chief Margrethe Vestager said Google may have to sell part of its adtech business because a behavioural remedy is unlikely to be effective at stopping the anti-competitive practices.

AI

Europeans Take a Major Step Toward Regulating AI (nytimes.com) 25

The European Union took an important step on Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology. From a report: The European Parliament, a main legislative branch of the E.U., passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology's riskiest uses. It would severely curtail uses of facial recognition software, while requiring makers of A.I. systems like the ChatGPT chatbot to disclose more about the data used to create their programs. The vote is one step in a longer process. A final version of the law is not expected to be passed until later this year.

The European Union is further along than the United States and other large Western governments in regulating A.I. The 27-nation bloc has debated the topic for more than two years, and the issue took on new urgency after last year's release of ChatGPT, which intensified concerns about the technology's potential effects on employment and society. Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators. In the United States, the White House has released policy ideas that include rules for testing A.I. systems before they are publicly available and protecting privacy rights. In China, draft rules unveiled in April would require makers of chatbots to adhere to the country's strict censorship rules. Beijing is also taking more control over the ways makers of A.I. systems use data.

AI

Meta Open Sources An AI-Powered Music Generator (techcrunch.com) 39

TechCrunch's Kyle Wiggers writes: Not to be outdone by Google, Meta has released its own AI-powered music generator -- and, unlike Google, open-sourced it. Called MusicGen, Meta's music-generating tool, a demo of which can be found here, can turn a text description (e.g. "An '80s driving pop song with heavy drums and synth pads in the background") into about 12 seconds of audio, give or take. MusicGen can optionally be "steered" with reference audio, like an existing song, in which case it'll try to follow both the description and melody.

Meta says that MusicGen was trained on 20,000 hours of music, including 10,000 "high-quality" licensed music tracks and 390,000 instrument-only tracks from ShutterStock and Pond5, a large stock media library. The company hasn't provided the code it used to train the model, but it has made available pre-trained models that anyone with the right hardware -- chiefly a GPU with around 16GB of memory -- can run.

So how does MusicGen perform? Decently, I'd say -- though certainly not well enough to put human musicians out of a job. Its songs are reasonably melodic, at least for basic prompts like "ambient chiptunes music," and -- to my ears -- on par with (if not slightly better than) the results from Google's AI music generator, MusicLM. But they won't win any awards.

Programming

Google Home's Script Editor Is Now Live (theverge.com) 23

Google has launched its script editor tool, offering advanced automations for Google Home-powered smart homes. The Verge reports: Available starting Tuesday, June 13th, to those in the Google Home public preview, the script editor is part of Google's new home.google.com web interface, which also has live feeds for any Nest cams on your account. The script editor will be coming to the new Google Home app preview starting June 14th. There's no date for general availability.

Along with allowing for multiple starters and actions, the script editor adds more advanced conditions. For example, you can set an automation to run only if the TV is on and it's after 6PM but before midnight. The script editor automations are created in the new Google Home web interface; you can apply for the public preview here.
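Google describes these automations as YAML scripts built from starters, conditions, and actions. As a rough illustration, the "TV on, after 6PM but before midnight" example could look something like the sketch below; the field names and device labels are assumptions for illustration, not a verbatim copy of Google's schema:

```yaml
metadata:
  name: Evening TV scene
  description: Dim the living room lights when the TV turns on late in the evening

automations:
  starters:
    # Starter: the television reports it has been switched on.
    - type: device.state.OnOff
      state: on
      is: true
      device: TV - Living Room
  condition:
    # Condition: only run between 6PM and midnight.
    type: time.between
    after: 6:00 PM
    before: 12:00 AM
  actions:
    # Action: switch the overhead lights off.
    - type: device.command.OnOff
      on: false
      device: Overhead Lights - Living Room
```

The key difference from the Home app is that the condition block can combine device state and time windows, which the app's simpler automation builder doesn't expose.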

The script editor allows you to do everything you can in the Home app when setting up automations, plus "more than 100 new features and capabilities to fit your unique understanding of your home and what you want it to do," according to a blog post by Anish Kattukaran, director of product management at Google Home. This includes access to nearly 100 starters and actions, including Matter sensors -- something not currently possible in the Home app. For example, an Eve Motion sensor connected via Matter to Google Home can't currently be used as a starter for automations in the Home app but can be used as one in the script editor.
Google has some example automations that you can view here.
Social Networks

Reddit Communities With Millions of Followers Plan To Extend the Blackout Indefinitely (theverge.com) 236

An anonymous reader quotes a report from The Verge: Moderators of many Reddit communities are pledging to keep their subreddits private or restricted indefinitely. For the vast majority of subreddits, the blackout to protest Reddit's expensive API pricing changes was expected to last from Monday until Wednesday. But in response to a Tuesday post on the r/ModCoord subreddit, users are chiming in to say that their subreddits will remain dark past that 48-hour window. "Reddit has budged microscopically," u/SpicyThunder335, a moderator for r/ModCoord, wrote in the post. They say that despite an announcement that access to a popular data-archiving tool for moderators would be restored, "our core concerns still aren't satisfied, and these concessions came prior to the blackout start date; Reddit has been silent since it began." SpicyThunder335 also bolded a line from a Monday memo from CEO Steve Huffman obtained by The Verge -- "like all blowups on Reddit, this one will pass as well" -- and said that "more is needed for Reddit to act."

Ahead of the Tuesday post, more than 300 subreddits had committed to staying dark indefinitely, SpicyThunder335 said. The list included some hugely popular subreddits, like r/aww (more than 34 million subscribers), r/music (more than 32 million subscribers), and r/videos (more than 26 million subscribers). Even r/nba committed to an indefinite timeframe at arguably the most important time of the NBA season. But SpicyThunder335 invited moderators to share pledges to keep the protests going, and the commitments are rolling in. SpicyThunder335 notes that not everyone will be able to go dark indefinitely for valid reasons. "For example, r/stopDrinking represents a valuable resource for a community in need, and the urgency of getting the news of the ongoing war out to r/Ukraine obviously outweighs any of these concerns," SpicyThunder335 wrote. As an alternative, SpicyThunder335 recommended implementing a "weekly gesture of support on 'Touch-Grass-Tuesdays,'" which would be left up to the discretion of individual communities. SpicyThunder335 also acknowledged that some subreddits would need to poll their users to make sure they're on board. As of this writing, more than 8,400 subreddits have gone private or into a restricted mode. The blackouts caused Reddit to briefly crash on Monday.

Facebook

Meta Releases 'Human-Like' AI Image Creation Model (reuters.com) 25

Meta said on Tuesday that it would provide researchers with access to components of a new "human-like" artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models. From a report: The model, I-JEPA, uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models, the company said. That approach incorporates the kind of human-like reasoning advocated by Meta's top AI scientist Yann LeCun and helps the technology to avoid errors that are common to AI-generated images, like hands with extra fingers, it said.

Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta's researchers can help the company by spurring innovation, spotting safety gaps and lowering costs. "For us, it's way better if the industry standardizes on the basic tools that we're using and therefore we can benefit from the improvements that others make," he told investors in April.
