Windows

What Happens If You Connect Windows XP To the Internet In 2024? (youtube.com) 73

Long-time Slashdot reader sandbagger writes: Have you ever wondered if it's true you can instantly get malware? In this video, a person connects an XP instance directly to the internet with no firewall to see just how fast it gets compromised by malware, rootkits, malicious services and new user accounts. The answer — fast!
Malwarebytes eventually finds eight different viruses/Trojan horses -- and a DNS changer. (One IP address leads back to the Russian Federation.) It's fun to watch -- within just a few hours a new Windows user account has even been added. And for good measure, he also opens up Internet Explorer...

"Windows XP -- very insecure," they conclude at the end of the video. "Very easy for random software from the internet to get more privileges than you, and it is very hard to solve that."

"Also, just out of curiosity I tried this on Windows 7. And even with all of the same settings, nothing happened. I let it run for 10 hours. So it seems like this may be a problem in historical Windows."
Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.
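Pennarun's 0.2-requests-per-second figure is simple arithmetic; a quick back-of-the-envelope check (assuming a 30-day month) in Python:

```python
# Sanity check of the "500,000 page views per month" figure quoted above.
page_views_per_month = 500_000
seconds_per_month = 30 * 24 * 3600  # assuming a 30-day month

requests_per_second = page_views_per_month / seconds_per_month
print(f"{requests_per_second:.2f} requests/second")  # ~0.19 req/s
```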

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
Power

Ford's Stock Drops 20% After $1.1 Billion Loss on EV Business (msn.com) 238

Ford's stock dropped 20% this week — mostly falling off the cliff Wednesday after failing to meet Wall Street's expectations for its quarterly profits, according to MarketWatch — and notching "another billion-dollar loss on EVs." "The remaking of Ford is not without its growing pains," Ford Chief Executive Jim Farley said on a call with investors after the results. "We look forward to proving our EV strategy out. That has become more realistic and sharpened by the tough environment." Ford is "confident" it can reduce losses and sustain a profitable business in the future, he added. The car maker plans to focus on "very differentiated" EVs priced under $40,000 and $30,000, and on two segments, work and adventure, Farley said.

Larger EVs will be part of the picture, but success there will require more breakthroughs on costs, the CEO said, adding that Ford's EV journey overall has been "humbling...."

The results included an EBIT loss of $1.1 billion for Ford's EV segment, "amid ongoing industrywide pricing pressure on first-generation electric vehicles and lower wholesales," the car maker said... Ford kept its expectations that the EV business will lose between $5.0 billion and $5.5 billion for the year, "with continued pricing pressure and investments in next-generation electric vehicles," it said.

Ford's CEO went on to say that the company is totally open to partnerships for electric vehicles, according to the article. "This is absolutely a flip-the-script moment for our company."

Thanks to long-time Slashdot reader sinij for sharing the news.
AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning...

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)
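To put the infrastructure point in concrete terms, here is a rough weight-memory estimate (a sketch that ignores activations, KV cache, and serving overhead, so real requirements are higher):

```python
# Rough memory needed just to hold model weights at various precisions.
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

models = [("Llama 3.1 405B", 405e9), ("Mistral Large 2", 123e9)]
precisions = [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]

for name, n_params in models:
    for label, nbytes in precisions:
        print(f"{name} @ {label}: ~{weight_memory_gb(n_params, nbytes):,.0f} GB")
# Llama 3.1 405B at FP16 is roughly 810 GB of weights alone, which is why
# few organizations can realistically host it themselves.
```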

Mistral's Large 2, meanwhile, has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

ISS

NASA Fires Lasers At the ISS (theverge.com) 28

joshuark shares a report from The Verge: NASA researchers have successfully tested laser communications in space by streaming 4K video footage originating from an airplane in the sky to the International Space Station and back. The feat demonstrates that the space agency could provide live coverage of a Moon landing during the Artemis missions and bodes well for the development of optical communications that could connect humans to Mars and beyond. NASA normally uses radio waves to send data and talk between the surface and space but says that laser communications using infrared light can transmit data 10 to 100 times faster than radios. "ISS astronauts, cosmonauts, and unwelcomed commercial space-flight visitors can now watch their favorite porn in real-time, adding some life to a boring zero-G existence," adds joshuark. "Ralph Kramden, when contacted by Ouija board, simply spelled out 'Bang, zoom, straight to the moon!'"
AI

'Copyright Traps' Could Tell Writers If an AI Has Scraped Their Work 79

An anonymous reader quotes a report from MIT Technology Review: Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: "copyright traps" developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history -- strategies like including fake locations on a map or fake words in a dictionary. [...] The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves. "There is a complete lack of transparency in terms of which content is used to train models, and we think this is preventing finding the right balance [between AI companies and content creators]," says Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, who led the research.

The traps aren't foolproof and can be removed, but De Montjoye says that increasing the number of traps makes it significantly more challenging and resource-intensive to remove. "Whether they can remove all of them or not is an open question, and that's likely to be a bit of a cat-and-mouse game," he says.
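The team's actual generator and detector live in the GitHub repository mentioned above; the core idea of planting a unique marker and later checking for it can be sketched in a few lines (a simplified illustration, not the Imperial College tool, which hides repeated synthetic sequences and uses statistical membership tests rather than exact matching):

```python
import secrets

def make_trap() -> str:
    # A unique, effectively unguessable marker to plant in published text.
    return f"copyright-trap-{secrets.token_hex(16)}"

def embed_trap(document: str, trap: str) -> str:
    # Simplified: append the marker; real traps are hidden in-line and repeated
    # many times so they survive cleaning and influence the trained model.
    return document + "\n" + trap

def trap_detected(model_output: str, trap: str) -> bool:
    # Simplified: exact-match check; the published method instead compares the
    # model's perplexity on trap sequences against unseen control sequences.
    return trap in model_output
```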
Google

Crooks Bypassed Google's Email Verification To Create Workspace Accounts, Access 3rd-Party Services (krebsonsecurity.com) 7

Brian Krebs writes via KrebsOnSecurity: Google says it recently fixed an authentication weakness that allowed crooks to circumvent the email verification required to create a Google Workspace account, and leverage that to impersonate a domain holder at third-party services that allow logins through Google's "Sign in with Google" feature. [...] Google Workspace offers a free trial that people can use to access services like Google Docs, but other services such as Gmail are only available to Workspace users who can validate control over the domain name associated with their email address. The weakness Google fixed allowed attackers to bypass this validation process. Google emphasized that none of the affected domains had previously been associated with Workspace accounts or services.

"The tactic here was to create a specifically-constructed request by a bad actor to circumvent email verification during the signup process," [said Anu Yamunan, director of abuse and safety protections at Google Workspace]. "The vector here is they would use one email address to try to sign in, and a completely different email address to verify a token. Once they were email verified, in some cases we have seen them access third party services using Google single sign-on." Yamunan said none of the potentially malicious workspace accounts were used to abuse Google services, but rather the attackers sought to impersonate the domain holder to other services online.

Open Source

Nvidia's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver (phoronix.com) 21

Nvidia's new R555 Linux driver series has significantly improved their open-source GPU kernel driver modules, achieving near parity with their proprietary drivers. Phoronix's Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series the support and performance is basically at parity of their open-source kernel modules compared to their proprietary kernel drivers. [...] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. No loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards were also achieving the same performance whether on the open-source MIT/GPLv2 driver or using NVIDIA's classic proprietary driver.

Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules.

It's great seeing how far the NVIDIA open-source kernel modules have evolved and that with the upcoming NVIDIA 560 Linux driver series they will be defaulting to them on supported GPUs. And moving forward with Blackwell and beyond, NVIDIA is just enabling the GPU support along their open-source kernel drivers with leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads between the MIT/GPL and proprietary kernel drivers have yielded similar (boring but good) results: the same performance being achieved with no loss going the open-source route.
You can view Phoronix's performance results in charts here, here, and here.
Windows

How a Cheap Barcode Scanner Helped Fix CrowdStrike'd Windows PCs In a Flash (theregister.com) 60

An anonymous reader quotes a report from The Register: Not long after Windows PCs and servers at the Australian limb of audit and tax advisory Grant Thornton started BSODing last Friday, senior systems engineer Rob Woltz remembered a small but important fact: When PCs boot, they consider barcode scanners no differently to keyboards. That knowledge nugget became important as the firm tried to figure out how to respond to the mess CrowdStrike created, which at Grant Thornton Australia threw hundreds of PCs and no fewer than 100 servers into the doomloop that CrowdStrike's shoddy testing software made possible. [...] The firm had the BitLocker keys for all its PCs, so Woltz and colleagues wrote a script that turned them into barcodes that were displayed on a locked-down management server's desktop. The script would be given a hostname and generate the necessary barcode and LAPS password to restore the machine.
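Grant Thornton's script itself wasn't published; a minimal sketch of the idea (hypothetical key-lookup helper, using the third-party python-barcode package) would take a hostname, fetch its recovery key, and render it as a Code 128 image for the management server's desktop:

```python
# Sketch of the workflow described above: hostname in, scannable BitLocker
# recovery key out. The lookup is a placeholder; uses the python-barcode package.
from barcode import Code128
from barcode.writer import ImageWriter

def lookup_recovery_key(hostname: str) -> str:
    # Placeholder: in practice this would query wherever keys are escrowed
    # (Active Directory, MBAM, Intune, or an internal database).
    demo_keys = {"PC-0001": "111111-222222-333333-444444-555555-666666-777777-888888"}
    return demo_keys[hostname]

def render_key_barcode(hostname: str, out_basename: str) -> str:
    key = lookup_recovery_key(hostname)
    # A BitLocker recovery key is 48 digits; Code 128 encodes it in one symbol
    # that a cheap USB scanner "types" into the recovery prompt like a keyboard.
    return Code128(key, writer=ImageWriter()).save(out_basename)
```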

Woltz went to an office supplies store and acquired an off-the-shelf barcode scanner for AU$55 ($36). At the point when rebooting PCs asked for a BitLocker key, pointing the scanner at the barcode on the server's screen made the machines treat the input exactly as if the key was being typed. That's a lot easier than typing it out every time, and the server's desktop could be accessed via a laptop for convenience. Woltz, Watson, and the team scaled the solution -- which meant buying more scanners at more office supplies stores around Australia. On Monday, remote staff were told to come to the office with their PCs and visit IT to connect to a barcode scanner. All PCs in the firm's Australian fleet were fixed by lunchtime -- taking only three to five minutes for each machine. Watson told us manually fixing servers needed about 20 minutes per machine.

Transportation

Automakers Sold Driver Data For Pennies, Senators Say (jalopnik.com) 58

An anonymous reader quotes a report from the New York Times: If you drive a car made by General Motors and it has an internet connection, your car's movements and exact location are being collected and shared anonymously with a data broker. This practice, disclosed in a letter (PDF) sent by Senators Ron Wyden of Oregon and Edward J. Markey of Massachusetts to the Federal Trade Commission on Friday, is yet another way in which automakers are tracking drivers (source may be paywalled; alternative source), often without their knowledge. Previous reporting in The New York Times, which the letter cited, revealed how automakers including G.M., Honda and Hyundai collected information about drivers' behavior, such as how often they slammed on the brakes, accelerated rapidly and exceeded the speed limit. It was then sold to the insurance industry, which used it to help gauge individual drivers' riskiness.

The two Democratic senators, both known for privacy advocacy, zeroed in on G.M., Honda and Hyundai because all three had made deals, The Times reported, with Verisk, an analytics company that sold the data to insurers. In the letter, the senators urged the F.T.C.'s chairwoman, Lina Khan, to investigate how the auto industry collects and shares customers' data. One of the surprising findings of an investigation by Mr. Wyden's office was just how little the automakers made from selling driving data. According to the letter, Verisk paid Honda $25,920 over four years for information about 97,000 cars, or 26 cents per car. Hyundai was paid just over $1 million, or 61 cents per car, over six years. G.M. would not reveal how much it had been paid, Mr. Wyden's office said. People familiar with G.M.'s program previously told The Times that driving behavior data had been shared from more than eight million cars, with the company making an amount in the low millions of dollars from the sale. G.M. also previously shared data with LexisNexis Risk Solutions.
"Companies should not be selling Americans' data without their consent, period," the letter from Senators Wyden and Markey stated. "But it is particularly insulting for automakers that are selling cars for tens of thousands of dollars to then squeeze out a few additional pennies of profit with consumers' private data."
The Internet

ISPs Seeking Government Handouts Try To Avoid Offering Low-Cost Broadband (arstechnica.com) 20

Internet service providers are pushing back against the Biden administration's requirement for low-cost options even as they are attempting to secure funds from a $42.45 billion government broadband initiative. The Broadband Equity, Access, and Deployment program, established by law to expand internet access, mandates that recipients offer affordable plans to eligible low-income subscribers, a stipulation the providers argue infringes on legal prohibitions against rate regulation. ISPs claim that the proposed $30 monthly rate for low-cost plans is economically unfeasible, especially in hard-to-reach rural areas, potentially undermining the program's goals by discouraging provider participation.
Google

Pixel 9 AI Will Add You To Group Photos Even When You're Not There (androidheadlines.com) 54

Google's upcoming Pixel 9 smartphones are set to introduce new AI-powered features, including "Add Me," a tool that will allow users to insert themselves into group photos after those pictures have been taken, according to a leaked promotional video obtained by Android Headlines. This feature builds on the Pixel 8's "Best Take" function, which allowed face swapping in group shots.
Chrome

New Chrome Feature Scans Password-Protected Files For Malicious Content (thehackernews.com) 24

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice.

Google is also adding what's called automatic deep scans for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.
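The flow Google describes (user supplies the archive password, the contents are opened and inspected, then file and password are discarded) can be roughed out with Python's standard zipfile module; this is only an illustration of the flow, not Chrome's or Safe Browsing's actual pipeline:

```python
# Rough illustration of deep-scanning a password-protected archive. Not
# Chrome's code; the verdict function is a trivial placeholder.
import zipfile

def looks_suspicious(data: bytes) -> bool:
    # Placeholder verdict; a real scanner applies signatures and classifiers.
    return data.startswith(b"MZ")  # e.g. flag embedded Windows executables

def deep_scan(archive_path: str, password: str) -> dict[str, bool]:
    verdicts: dict[str, bool] = {}
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            data = zf.read(name, pwd=password.encode())  # classic ZipCrypto only
            verdicts[name] = looks_suspicious(data)
    return verdicts  # the file and password would then be deleted server-side
```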

AI

AI Models Face Collapse If They Overdose On Their Own Output 106

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text, for example, in training datasets, which means subsequent models trained on the output cannot carry forward those nuances. Training new models on the output of earlier models in this way ends up in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said.

"If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
The Courts

California Supreme Court Upholds Gig Worker Law In a Win For Ride-Hail Companies (politico.com) 73

In a major victory for ride-hail companies, the California Supreme Court upheld a law classifying gig workers as independent contractors, maintaining their ineligibility for benefits such as sick leave and workers' compensation. This decision concludes a prolonged legal battle and supports the 2020 ballot measure Proposition 22, despite opposition from labor groups who argued it was unconstitutional. Politico reports: Thursday's ruling capped a yearslong battle between labor and the companies over the status of workers who are dispatched by apps to deliver food, buy groceries and transport customers. A 2018 Supreme Court ruling and a follow-up bill would have compelled the gig companies to treat those workers as employees. A collection of five firms then spent more than $200 million to escape that mandate by passing the 2020 ballot measure Proposition 22 in one of the most expensive political campaigns in American history. The unanimous ruling on Thursday now upholds the status quo of the gig economy in California.

As independent contractors, gig workers are not entitled to benefits like sick leave, overtime and workers' compensation. The SEIU union and four gig workers, ultimately, challenged Prop 22 based on its conflict with the Legislature's power to administer workers' compensation, specifically. The law, which passed with 58 percent of the vote in 2020, makes gig workers ineligible for workers' comp, which opponents of Prop 22 argued rendered the entire law unconstitutional. [...] Beyond the implications for gig workers, the heavily-funded Prop 22 ballot campaign pushed the limits of what could be spent on an initiative, ultimately becoming the most expensive measure in California history. Uber and Lyft have both threatened to leave any states that pass laws not classifying their drivers as independent contractors. The decision Thursday closes the door to that possibility for California.

AI

iFixit CEO Takes Shots At Anthropic For 'Hitting Our Servers a Million Times In 24 Hours' (pcgamer.com) 48

Yesterday, iFixit CEO Kyle Wiens asked AI company Anthropic why it was clogging up their server bandwidth without permission. "Do you really need to hit our servers a million times in 24 hours?" Wiens wrote on X. "You're not only taking our content without paying, you're tying up our DevOps resources. Not cool." PC Gamer's Jacob Fox reports: Assuming Wiens isn't massively exaggerating, it's no surprise that this is "tying up our DevOps resources." A million "hits" per day would do it, and would certainly be enough to justify more than a little annoyance. The thing is, putting this bandwidth chugging in context only makes it more ridiculous, which is what Wiens is getting at. It's not just that an AI company is seemingly clogging up server resources, but that it's been expressly forbidden from using the content on its servers anyway.

There should be no reason for an AI company to hit the iFixit site because its terms of service state that "copying or distributing any Content, materials or design elements on the Site for any other purpose, including training a machine learning or AI model, is strictly prohibited without the express prior written permission of iFixit." Unless it wants us to believe it's not going to use any data it scrapes for these purposes, and it's just doing it for... fun?

Well, whatever the case, iFixit's Wiens decided to have some fun with it and ask Anthropic's own AI, Claude, about the matter, saying to Anthropic, "Don't ask me, ask Claude!" It seems that Claude agrees with iFixit, because when it's asked what it should do if it was training a machine learning model and found the above writing in its terms of service, it responded, in no uncertain terms, "Do not use the content." This is, as Wiens points out, something that could be seen if one simply accessed the terms of service.

Transportation

Minnesota Becomes Second State To Pass Law For Flying Cars (fortune.com) 54

Minnesota has become the second state to pass what it's calling a "Jetsons law," establishing rules for cars that can take to the sky. New Hampshire was the first to enact a "Jetsons" law. From a report: The new road rules in Minnesota address "roadable aircraft," which is basically any aircraft that can take off and land at an airfield but is also designed to be operated on a public highway. The law will let owners of these vehicles register them as cars and trucks, but they won't have to obtain a license plate. The tail number will suffice instead.

As for operation, flying cars won't be allowed to take off or land on public roadways, Minnesota officials declared (an exception is made in the case of emergency). Those shenanigans are restricted to airports. While the idea of a Jetsons-like sky full of flying cars is still firmly rooted in the world of science fiction, the concept of flying cars isn't quite as distant as it might seem (though it has some high-profile skeptics). United Airlines, two years ago, made a $10 million bet on the technology, putting down a deposit for 200 four-passenger flying taxis from Archer Aviation, a San Francisco-based startup working on the aircraft/auto hybrid.

Communications

5th Circuit Court Upends FCC Universal Service Fund, Ruling It an Illegal Tax (arstechnica.com) 137

A U.S. appeals court has ruled that the Federal Communications Commission's Universal Service Fund, which collects fees on phone bills to support telecom network expansion and affordability programs, is unconstitutional, potentially upending the $8 billion-a-year system.

The 5th Circuit Court's 9-7 decision, which creates a circuit split with previous rulings in the 6th and 11th circuits, found that the combination of Congress's delegation to the FCC and the FCC's subsequent delegation to a private entity violates the Constitution's Legislative Vesting Clause. FCC Chairwoman Jessica Rosenworcel criticized the ruling as "misguided and wrong," vowing to pursue all available avenues for review.
AI

OpenAI To Launch 'SearchGPT' in Challenge To Google 31

OpenAI is launching an online search tool in a direct challenge to Google, opening up a new front in the tech industry's race to commercialise advances in generative artificial intelligence. From a report: The experimental product, known as SearchGPT [non-paywalled], will initially only be available to a small group of users, with the San Francisco-based company opening a 10,000-person waiting list to test the service on Thursday. The product is visually distinct from ChatGPT as it goes beyond generating a single answer by offering a rail of links -- similar to a search engine -- that allows users to click through to external websites.

[...] SearchGPT will "provide up-to-date information from the web while giving you clear links to relevant sources," according to OpenAI. The new search tool will be able to access sites even if they have opted out of training OpenAI's generative AI tools, such as ChatGPT.
Google

Google DeepMind's AI Systems Can Now Solve Complex Math Problems (technologyreview.com) 40

Google DeepMind has announced that its AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal performance at the 2024 International Mathematical Olympiad (IMO), solving four out of six problems and scoring 28 out of 42 possible points in a significant breakthrough for AI in mathematical reasoning. This marks the first time an AI system has reached such a high level of performance in this prestigious competition, which has long been considered a benchmark for advanced mathematical reasoning capabilities in machine learning.

AlphaProof, a system that combines a pre-trained language model with reinforcement learning techniques, demonstrated its new capability by solving two algebra problems and one number theory problem, including the competition's most challenging question. Meanwhile, AlphaGeometry 2 successfully tackled a complex geometry problem, Google wrote in a blog post. The systems' solutions were formally verified and scored by prominent mathematicians, including Fields Medal winner Prof Sir Timothy Gowers and IMO Problem Selection Committee Chair Dr Joseph Myers, lending credibility to the achievement.

The development of these AI systems represents a significant step forward in bridging the gap between natural language processing and formal mathematical reasoning, the company argued. By fine-tuning a version of Google's Gemini model to translate natural language problem statements into formal mathematical language, the researchers created a vast library of formalized problems, enabling AlphaProof to train on millions of mathematical challenges across various difficulty levels and topic areas. While the systems' performance is impressive, challenges remain, particularly in the field of combinatorics where both AI models were unable to solve the given problems. Researchers at Google DeepMind continue to investigate these limitations, the company said, aiming to further improve the systems' capabilities across all areas of mathematics.
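For a sense of what "formal mathematical language" means in this pipeline, here is a deliberately tiny Lean 4 statement and proof (a toy illustration, not one of the actual IMO formalizations):

```lean
-- A trivially small formally stated theorem, checked by Lean's kernel.
-- Real IMO formalizations are far larger but take the same shape: a precise
-- statement plus a proof term the proof assistant can verify.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```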
