AI

OpenAI's ChatGPT O3 Caught Sabotaging Shutdowns in Security Researcher's Test (betanews.com) 112

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
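The per-model tallies reported in the article can be collected in a quick sketch (the tally harness is purely illustrative; Palisade's actual test code is not reproduced here, only the counts the article cites):

```python
# Sabotage counts per model over 100 trials, as reported in the article.
# The harness is a hypothetical sketch, not PalisadeAI's test setup.
TRIALS = 100

sabotage_counts = {
    "o3": 7,
    "o4": 1,
    "codex-mini": 12,
    "claude": 0,
    "gemini": 0,
    "grok": 0,
}

def sabotage_rate(model: str) -> float:
    """Fraction of trials in which the model interfered with shutdown."""
    return sabotage_counts[model] / TRIALS

for model, count in sabotage_counts.items():
    print(f"{model}: {count}/{TRIALS} trials ({sabotage_rate(model):.0%})")
```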
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
AI

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms 31

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.
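The generate-evaluate-evolve loop described above can be sketched in miniature. In the real system the candidate generators are Gemini models proposing programs; here a toy random "mutate" step stands in for the LLM, and the automatic evaluator scores candidates on a simple maximization problem. This is an illustration of the evolutionary framework, not DeepMind's implementation:

```python
import random

def evaluate(candidate: float) -> float:
    # Toy stand-in for AlphaEvolve's automatic evaluator: score each candidate.
    return -(candidate - 3.0) ** 2

def mutate(candidate: float) -> float:
    # Stand-in for the LLM proposing a variation of the best solution so far.
    return candidate + random.uniform(-0.5, 0.5)

def evolve(generations: int = 200, population: int = 8) -> float:
    random.seed(0)
    best = random.uniform(-10, 10)
    for _ in range(generations):
        # Generate multiple candidates (AlphaEvolve uses Gemini Flash for
        # breadth and Gemini Pro for depth), then keep the highest-scoring one.
        candidates = [mutate(best) for _ in range(population)] + [best]
        best = max(candidates, key=evaluate)
    return best

print(round(evolve(), 2))  # converges near the optimum at 3.0
```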
DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.

The AI remains too complex for public release but that may change in the future as it gets integrated into smaller research tools.
Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for their final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
Bitcoin

Bitcoin Mining Costs Surge Beyond Profitability Threshold (pcgamer.com) 91

Bitcoin mining has crossed a critical economic threshold, with costs now exceeding market value for most operators. According to data cited by CoinShares, large public mining companies spend over $82,000 to produce a single Bitcoin -- nearly double last quarter's figure -- while smaller operations face even steeper costs of approximately $137,000 per coin.

With Bitcoin currently trading around $94,703, the math no longer works for most miners. The economics become particularly challenging in high-electricity-cost regions like Germany, where mining a single coin requires approximately $200,000. Industry analysts suggest larger mining operations are adapting by optimizing energy consumption and positioning their computational infrastructure for alternative uses. These companies can potentially lease their mining setups for other computational tasks during unprofitable mining periods, then resume mining when market conditions improve.
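The break-even comparison the article describes is simple arithmetic on the cited figures (the per-coin costs are CoinShares estimates quoted in the article):

```python
# Back-of-envelope miner profitability check using the article's figures.
btc_price = 94_703  # USD, spot price quoted in the article

cost_per_coin = {
    "large public miner": 82_000,
    "smaller operation": 137_000,
    "Germany (high electricity)": 200_000,
}

for operator, cost in cost_per_coin.items():
    margin = btc_price - cost
    status = "profitable" if margin > 0 else "underwater"
    print(f"{operator}: margin ${margin:,} per coin -> {status}")
```

At these numbers only the largest operators still clear a margin, which is the article's point.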

For individual miners, however, the era of profitable home operations appears effectively over, as industrial-scale facilities with strategic positioning and optimized technology have fundamentally altered the mining landscape.
Math

Could a 'Math Genius' AI Co-author Proofs Within Three Years? (theregister.com) 71

A new DARPA project called expMath "aims to jumpstart math innovation with the help of AI," writes The Register. America's "Defense Advanced Research Projects Agency" believes mathematics isn't advancing fast enough, according to their article... So to accelerate — or "exponentiate" — the rate of mathematical research, DARPA this week held a Proposers Day event to engage with the technical community in the hope that attendees will prepare proposals to submit once the actual Broad Agency Announcement solicitation goes out...

[T]he problem is that AI just isn't very smart. It can do high school-level math but not high-level math. [One slide from DARPA program manager Patrick Shafto noted that OpenAI o1 "continues to abjectly fail at basic math despite claims of reasoning capabilities."] Nonetheless, expMath's goal is to make AI models capable of:

- auto decomposition — automatically decompose natural language statements into reusable natural language lemmas (a proven statement used to prove other statements); and
- auto(in)formalization — translate the natural language lemma into a formal proof and then translate the proof back to natural language.

"How much faster will technology advance with AI agents solving new mathematical proofs?" asks former DARPA research scientist Robin Rowe (also long-time Slashdot reader robinsrowe): DARPA says that "The goal of Exponentiating Mathematics is to radically accelerate the rate of progress in pure mathematics by developing an AI co-author capable of proposing and proving useful abstractions."
Rowe is cited in the article as the founder/CEO of an AI research institute named "Fountain Adobe". (He tells The Register that "It's an indication of DARPA's concern about how tough this may be that it's a three-year program. That's not normal for DARPA.") Rowe is optimistic. "I think we're going to kill it, honestly. I think it's not going to take three years. But I think it might take three years to do it with LLMs. So then the question becomes, how radical is everybody willing to be?"
"We will robustly engage with the math and AI communities toward fundamentally reshaping the practice of mathematics by mathematicians," explains the project's home page. They've already uploaded an hour-long video of their Proposers Day event.

"It's very unclear that current AI systems can succeed at this task..." program manager Shafto says in a short video introducing the project. But... "There's a lot of enthusiasm in the math community for the possibility of changes in the way mathematics is practiced. It opens up fundamentally new things for mathematicians. But of course, they're not AI researchers. One of the motivations for this program is to bring together two different communities — the people who are working on AI for mathematics, and the people who are doing mathematics — so that we're solving the same problem.

At its core, it's a very hard and rather technical problem. And this is DARPA's bread-and-butter, is to sort of try to change the world. And I think this has the potential to do that."

Education

Canadian University Cancels Coding Competition Over Suspected AI Cheating (uwaterloo.ca) 40

The university blamed it on "the significant number of students" who violated their coding competition's rules. Long-time Slashdot reader theodp quotes this report from The Logic: Finding that many students violated rules and submitted code not written by themselves, the University of Waterloo's Centre for Computing and Math decided not to release results from its annual Canadian Computing Competition (CCC), which many students rely on to bolster their chances of being accepted into Waterloo's prestigious computing and engineering programs, or land a spot on teams to represent Canada in international competitions.

"It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help," the CCC co-chairs explained in a statement. "As such, the reliability of 'ranking' students would neither be equitable, fair, or accurate."

"It is disappointing that the students who violated the CCC Rules will impact those students who are deserving of recognition," the university said in its statement. They added that they are "considering possible ways to address this problem for future contests."
AI

Microsoft Researchers Develop Hyper-Efficient AI Model That Can Run On CPUs 59

Microsoft has introduced BitNet b1.58 2B4T, the largest-scale 1-bit AI model to date with 2 billion parameters and the ability to run efficiently on CPUs. It's openly available under an MIT license. TechCrunch reports: The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, "parameters" being largely synonymous with "weights." Trained on a dataset of 4 trillion tokens -- equivalent to about 33 million books, by one estimate -- BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.
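The "1-bit" label (more precisely 1.58 bits, i.e. log2(3)) means every weight is forced to -1, 0, or +1. A minimal sketch of ternary weight quantization in the style described in the BitNet b1.58 literature — the absmean scaling here is an illustration, not Microsoft's bitnet.cpp implementation:

```python
import numpy as np

def quantize_ternary(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} with an absmean scale factor."""
    scale = np.abs(w).mean() + eps
    w_q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_q, scale

w = np.array([[0.31, -0.92, 0.04], [-0.48, 0.77, -0.02]])
w_q, scale = quantize_ternary(w)
print(w_q)          # every entry is -1, 0, or +1
print(w_q * scale)  # dequantized approximation of w
```

Because each weight needs under two bits instead of 16 or 32, matrix multiplies reduce to additions and subtractions, which is what makes CPU-only inference plausible.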

BitNet b1.58 2B4T doesn't sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers' testing, the model surpasses Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills). Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size -- in some cases, twice the speed -- while using a fraction of the memory.

There is a catch, however. Achieving that performance requires using Microsoft's custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.
Bitcoin

Canadian Math Prodigy Allegedly Stole $65 Million In Crypto (theglobeandmail.com) 85

A Canadian math prodigy is accused of stealing over $65 million through complex exploits on decentralized finance platforms and is currently a fugitive from U.S. authorities. Despite facing criminal charges for fraud and money laundering, he has evaded capture by moving internationally, embracing the controversial "Code is Law" philosophy, and maintaining that his actions were legal under the platforms' open-source rules. The Globe and Mail reports: Andean Medjedovic was 18 years old when he made a decision that would irrevocably alter the course of his life. In the fall of 2021, shortly after completing a master's degree at the University of Waterloo, the math prodigy and cryptocurrency trader from Hamilton had conducted a complex series of transactions designed to exploit a vulnerability in the code of a decentralized finance platform. The maneuver had allegedly allowed him to siphon approximately $16.5-million in digital tokens out of two liquidity pools operated by the platform, Indexed Finance, according to a U.S. court document.

Indexed Finance's leaders traced the attack back to Mr. Medjedovic, and made him an offer: Return 90 per cent of the funds, keep the rest as a so-called "bug bounty" -- a reward for having identified an error in the code -- and all would be forgiven. Mr. Medjedovic would then be free to launch his career as a white hat, or ethical, hacker. Mr. Medjedovic didn't take the deal. His social media posts hinted, without overtly stating, that he believed that because he had operated within the confines of the code, he was entitled to the funds -- a controversial philosophy in the world of decentralized finance known as "Code is Law." But instead of testing that argument in court, Mr. Medjedovic went into hiding. By the time authorities arrived on a quiet residential street in Hamilton to search his parents' townhouse less than two months later, Mr. Medjedovic had moved out, taking his electronic devices with him.

Then, roughly two years later, he struck again, netting an even larger sum -- approximately $48.4-million -- by conducting a similar exploit on another decentralized finance platform, U.S. authorities allege. Mr. Medjedovic, now 22, faces five criminal charges -- including wire fraud, attempted extortion and money laundering -- according to a U.S. federal court document that was unsealed earlier this year. If convicted, he could be facing decades in prison. First, authorities will have to find him.

Programming

You Should Still Learn To Code, Says GitHub CEO (businessinsider.com) 45

You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is one such fundamental skill -- and the only reason it's not part of the curriculum is because it took "us too long to actually realize that."

Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."

AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.

Transportation

An Electric Racecar Drives Upside Down (jalopnik.com) 57

Formula One cars, the world's fastest racecars, need to grip the track for speed and safety on the curves — leading engineers to design cars that create downforce. And racing fans are even told that "a Formula 1 racecar generates enough downforce above a certain speed that it could theoretically drive upside down," writes the automotive site Jalopnik.

"McMurtry Automotive turned this theory into reality after having its Spéirling hypercar complete the impressive feat..." Admittedly, the Spéirling's success can be solely attributed to its proprietary 'Downforce-on-Demand' fan system that produces 4,400 pounds of downforce at the push of a button... For those looking to do the math, Spéirling weighs 2,200 pounds. With the stopped car's fan whirling at 23,000 rpm, the rig was rotated to invert the road deck... Then, the hypercar rolled forward a few feet before stopping while inverted. The rig rotated the road deck back down, and the Spéirling drove off like nothing happened.
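The arithmetic behind the inverted run, using the figures quoted above: the fan's suction toward the road surface exceeds the car's weight, so even upside down the net force presses the car onto the track.

```python
weight_lb = 2_200         # Speirling curb weight, per the article
fan_downforce_lb = 4_400  # 'Downforce-on-Demand' fan suction at 23,000 rpm

# Inverted, gravity pulls the car away from the (now overhead) road deck,
# while the fan still pulls it toward the deck.
net_grip_lb = fan_downforce_lb - weight_lb
print(net_grip_lb)  # 2200 lb of net force holding the car to the ceiling
```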

The McMurtry Spéirling, as a 1,000-hp twin-motor electric hypercar, didn't have to clear the other hurdles that an F1 car would have to clear to drive upside down. Dry-sump combustion engines aren't designed to run inverted and would eventually fail catastrophically. Oil wouldn't be able to cycle through and keep the engine lubricated.

The car is "an electric monster purpose-built to destroy track records," Jalopnik wrote in 2022 when the car shaved more than two seconds off a long-standing record. The "Downforce-on-Demand" feature gives it tremendous acceleration — in nine seconds it can go from 0 to 186.4 mph (300 km/h), according to Jalopnik.

"McMurtry is working towards finalizing a production version of its hypercar, called the Spéirling PURE. Only 100 will be produced."
Math

How a Secretive Gambler Called 'The Joker' Beat the Texas Lottery (msn.com) 113

"Can you help me take down the Texas lottery?"

That's what a London banker-turned-bookmaker asked "acquaintances" in 2023, reports the Wall Street Journal. The plan was to buy "nearly every possible number in a coming drawing" — purchasing $1 tickets for 25.8 million possible combinations, since "The jackpot was heading to $95 million. If nobody else also picked the winning numbers, the profit would be nearly $60 million." Marantelli flew to the U.S. with a few trusted lieutenants. They set up shop in a defunct dentist's office, a warehouse and two other spots in Texas. The crew worked out a way to get official ticket-printing terminals. Trucks hauled in dozens of them and reams of paper... [Then Texas announced no winner in an earlier lottery, rolling its jackpot into another drawing three days later.] The machines — manned by a disparate bunch of associates and some of their children — screeched away nearly around the clock, spitting out 100 or more tickets every second. Texas politicians later likened the operation to a sweatshop.
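The combinatorics behind the buyout, sketched under the assumption that the game used Lotto Texas's pick-6-of-54 format, which matches the article's "25.8 million possible combinations" at $1 per ticket:

```python
import math

combinations = math.comb(54, 6)   # 6 numbers chosen from 54
ticket_cost = combinations        # $1 per combination
actual_jackpot = 57_800_000       # the cash prize the crew ultimately claimed

print(f"{combinations:,} combinations")  # 25,827,165
print(f"outlay to cover them all: ${ticket_cost:,}")
print(f"net on the $57.8M prize: ${actual_jackpot - ticket_cost:,}")
```

The wager only works if no one else shares the jackpot, which is why the crew waited for a large rollover before printing.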

Trying to pull off the gambit required deep pockets and a knack for staying under the radar — both hallmarks of the secretive Tasmanian gambler who bankrolled the operation. Born Zeljko Ranogajec, he was nicknamed "the Joker" for his ability to pull off capers at far-flung casinos and racetracks. Adding to his mystique, he changed his name to John Wilson several decades ago. Among some associates, though, he still goes by Zeljko, or Z. Over the years, Ranogajec and his partners have won hundreds of millions of dollars by applying Wall Street-style analytics to betting opportunities around the world. Like card counters at a blackjack table, they use data and math to hunt for situations ripe for flipping the house edge in their favor. Then they throw piles of money at it, betting an estimated $10 billion annually.

The Texas lottery play, one of their most ambitious operations ever, paid off spectacularly with a $57.8 million jackpot win. That, in turn, spilled their activities into public view and sparked a Texas-size uproar about whether other lotto players — and indeed the entire state — had been hoodwinked. Early this month, the state's lieutenant governor, Dan Patrick, called the crew's win "the biggest theft from the people of Texas in the history of Texas." In response to written questions addressed to Marantelli and Ranogajec, Glenn Gelband, a New Jersey lawyer who represents the limited partnership that claimed the Texas prize, said "all applicable laws, rules and regulations were followed...."

Lottery officials and state lawmakers have taken steps to prevent a repeat.

The article also looks at a group of Princeton University graduates calling themselves Black Swan Capital that's "won millions in recent years" by targeting state lottery drawings with unusually favorable odds.

"State lottery directors say they are seeing more organized efforts to buy lottery tickets in bulk," according to the article, "but that the groups are largely operating legally and transparently..."
Education

Microsoft, Amazon Execs Call Out Washington's Low-Performing 9-Year-Olds In Tax Pushback (geekwire.com) 155

Longtime Slashdot reader theodp writes: A coalition of Washington state business leaders -- which includes Microsoft President Brad Smith and Amazon Chief Legal Officer David Zapolsky -- released a letter Wednesday urging state lawmakers to reconsider recently proposed tax and budget measures. "I actually think it's an almost unprecedented outpouring of support from across the business community," said Microsoft's Smith in an interview. In their letter, which reads in part like it could have been penned by a GenAI Marie Antoinette, the WA business leaders question whether any more spending is warranted given how poorly Washington's 4th and 8th graders compare to children in the rest of the nation on test scores. The letter also laments the increase in WA's homeless population as it celebrates WA Governor Bob Ferguson's announcement that he would not sign a proposed wealth tax.

From the letter: "We have long partnered with you in many areas, including education funding. Despite more than doubling K-12 spending and increasing teacher salaries to some of the highest rates in the nation, 4th and 8th grade assessment scores in reading and math are among the worst in the country. Similarly, we have collaborated with you to address housing and homelessness. Despite historic investments in affordable housing and homelessness prevention since 2013, Washington's homeless population has grown by 71 percent, making it the third largest in the nation after California and New York, according to HUD. These outcomes beg the question of whether more investment is needed or whether we need different policies instead."

Back in 2010, Smith teamed with then-Microsoft CEO Steve Ballmer and then-Amazon CEO Jeff Bezos to fund an effort to defeat an initiative for a WA state income tax that was pushed for by Bill Gates Sr. In 2023, Bezos moved out of WA state before being subjected to a 7% tax on gains of more than $250,000 from the sale of stocks and bonds, a move that reportedly saved him $1.2 billion in WA taxes on his 2024 Amazon stock sales.

IT

Why Watts Should Replace mAh as Essential Spec for Mobile Devices (theverge.com) 193

Tech manufacturers continue misleading consumers with impressive-sounding but less useful specs like milliamp-hours and megahertz, while hiding the one measurement that matters most: watts. The Verge argues that the watt provides the clearest picture of a device's true capabilities by showing how much power courses through chips and how quickly batteries drain. With elementary math, consumers could easily calculate battery life by dividing watt-hours by power consumption. The Verge: The Steam Deck gaming handheld is my go-to example of how handy watts can be. With a 15-watt maximum processor wattage and up to 9 watts of overhead for other components, a strenuous game drains its 49Wh battery in roughly two hours flat. My eight-year-old can do that math: 15 plus 9 is 24, and 24 times 2 is 48. You can fit two hour-long 24-watt sessions into 48Wh, and because you have 49Wh, you're almost sure to get it.

With the least strenuous games, I'll sometimes see my Steam Deck draining the battery at a speed of just 6 watts -- which means I can get eight hours of gameplay because 6 watts times 8 hours is 48Wh, with 1Wh remaining in the 49Wh battery.
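The battery-life arithmetic in the Steam Deck example reduces to one division: hours of runtime equal battery capacity in watt-hours divided by power draw in watts.

```python
def runtime_hours(capacity_wh: float, draw_watts: float) -> float:
    """Estimated runtime: watt-hours of capacity divided by watts of draw."""
    return capacity_wh / draw_watts

# 15 W processor plus 9 W overhead against the 49 Wh battery:
print(round(runtime_hours(49, 15 + 9), 2))  # 2.04 hours under heavy load
print(round(runtime_hours(49, 6), 2))       # 8.17 hours in light games
```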
Unlike megahertz, wattage also indicates sustained performance capability, revealing whether a processor can maintain high speeds or will throttle due to thermal constraints. The watt is also already familiar to consumers through light bulbs and power bills, but manufacturers persist with less transparent metrics that make direct comparisons difficult.
Science

A New Image File Format Efficiently Stores Invisible Light Data (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Imagine working with special cameras that capture light your eyes can't even see -- ultraviolet rays that cause sunburn, infrared heat signatures that reveal hidden writing, or specific wavelengths that plants use for photosynthesis. Or perhaps using a special camera designed to distinguish the subtle visible differences that make paint colors appear just right under specific lighting. Scientists and engineers do this every day, and they're drowning in the resulting data. A new compression format called Spectral JPEG XL might finally solve this growing problem in scientific visualization and computer graphics. Researchers Alban Fichet and Christoph Peters of Intel Corporation detailed the format in a recent paper published in the Journal of Computer Graphics Techniques (JCGT). It tackles a serious bottleneck for industries working with these specialized images. These spectral files can contain 30, 100, or more data points per pixel, causing file sizes to balloon into multi-gigabyte territory -- making them unwieldy to store and analyze.

[...] The current standard format for storing this kind of data, OpenEXR, wasn't designed with these massive spectral requirements in mind. Even with built-in lossless compression methods like ZIP, the files remain unwieldy for practical work as these methods struggle with the large number of spectral channels. Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller. Instead of storing the exact light intensity at every single wavelength (which creates huge files), it transforms this information into a different form. [...]
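The idea behind the DCT step can be shown in a toy sketch. This is not the authors' implementation: the 64-sample "spectrum", Gaussian shape, and 8-coefficient budget are invented for illustration, and real Spectral JPEG XL adds quantization and entropy coding on top. But it shows why a smooth per-pixel spectrum survives aggressive truncation of its cosine coefficients:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis: rows are cosine basis vectors, so the
    # transpose is the inverse transform.
    k = np.arange(n)[:, None]  # frequency index
    j = np.arange(n)[None, :]  # sample index
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    d[0, :] = np.sqrt(1.0 / n)
    return d

# A smooth hypothetical reflectance curve sampled at 64 wavelengths.
n = 64
wavelengths = np.linspace(400.0, 700.0, n)  # nm, visible band
spectrum = np.exp(-(((wavelengths - 550.0) / 80.0) ** 2))

d = dct_matrix(n)
coeffs = d @ spectrum

# Keep only the 8 lowest-frequency coefficients (an 8x reduction per pixel).
kept = 8
compressed = np.zeros(n)
compressed[:kept] = coeffs[:kept]

# Inverse transform; smooth spectra concentrate energy at low frequencies,
# so the reconstruction stays close to the original.
reconstructed = d.T @ compressed
max_err = np.max(np.abs(reconstructed - spectrum))
print(f"kept {kept}/{n} coefficients, max error = {max_err:.4f}")
```

Spiky spectra (sharp emission lines) would lose more under the same truncation, which is one reason the format is lossy and tunable rather than one-size-fits-all.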

According to the researchers, the massive file sizes of spectral images have been a real barrier to adoption in industries that would benefit from their accuracy. Smaller files mean faster transfer times, reduced storage costs, and the ability to work with these images more interactively without specialized hardware. The results reported by the researchers seem impressive -- with their technique, spectral image files shrink by 10 to 60 times compared to standard OpenEXR lossless compression, bringing them down to sizes comparable to regular high-quality photos. They also preserve key OpenEXR features like metadata and high dynamic range support.
The report notes that broader adoption "hinges on the continued development and refinement of the software tools that handle JPEG XL encoding and decoding."

Some scientific applications may also see JPEG XL's lossy approach as a drawback. "Some researchers working with spectral data might readily accept the trade-off for the practical benefits of smaller files and faster processing," reports Ars. "Others handling particularly sensitive measurements might need to seek alternative methods of storage."
Science

Inside arXiv - the Most Transformative Platform in All of Science (wired.com) 13

Paul Ginsparg, a physics professor at Cornell University, created arXiv nearly 35 years ago as a digital repository where researchers could share their findings before peer review. Today, the platform hosts more than 2.6 million papers, receives 20,000 new submissions monthly, and serves 5 million active users, Wired writes in a profile of the platform.

"Just when I thought I was out, they pull me back in!" Ginsparg quotes from The Godfather, reflecting his inability to fully hand over the platform despite numerous attempts. If arXiv stopped functioning, scientists worldwide would face immediate disruption. "Everybody in math and physics uses it," says Scott Aaronson, a computer scientist at the University of Texas at Austin. "I scan it every night."

ArXiv revolutionized academic publishing, previously dominated by for-profit giants like Elsevier and Springer, by allowing instant and free access to research. Many significant discoveries, including the "transformers" paper that launched the modern AI boom, first appeared on the platform. Initially a collection of shell scripts on Ginsparg's NeXT machine in 1991, arXiv followed him from Los Alamos National Laboratory to Cornell, where it found an institutional home despite administrative challenges. Recent funding from the Simons Foundation has enabled a hiring spree and long-needed technical updates.
Math

JPMorgan Says Quantum Experiment Generated Truly Random Numbers (financialpost.com) 111

JPMorgan Chase used a quantum computer from Honeywell's Quantinuum to generate and mathematically certify truly random numbers -- an advancement that could significantly enhance encryption, security, and financial applications. The breakthrough was validated with help from U.S. national laboratories and has been published in the journal Nature. From a report: Between May 2023 and May 2024, cryptographers at JPMorgan wrote an algorithm for a quantum computer to generate random numbers, which they ran on Quantinuum's machine. The US Department of Energy's supercomputers were then used to test whether the output was truly random. "It's a breakthrough result," Marco Pistoia, project lead and Head of Global Technology Applied Research at JPMorgan, told Bloomberg in an interview. "The next step will be to understand where we can apply it."

Applications could ultimately include more energy-efficient cryptocurrency, online gambling, and any other activity hinging on complete randomness, such as deciding which precincts to audit in elections.

Education

'Kids Are Spending Too Much Class Time on Laptops' (bloomberg.com) 77

Over the past two decades, school districts have spent billions equipping classrooms with laptops, yet students have fallen further behind on essential skills, Michael Bloomberg argues. With about 90% of schools now providing these devices, test scores hover near historic lows -- only 28% of eighth graders proficient in math and 30% in reading.

Bloomberg notes technology's classroom push came from technologists and government officials who envisioned tailored curricula. Computer manufacturers, despite good intentions, had financial interests and profited substantially. The Google executive who questioned why children should learn equations when they could Google answers might now ask why they should write essays when chatbots can do it for them.

Studies confirm traditional methods -- reading and writing on paper -- remain superior to screen-based approaches. Devices distract students, with research showing up to 20 minutes needed to refocus after nonacademic activities. As some districts ban smartphones during school hours, Bloomberg suggests reconsidering classroom computer policies, recommending locked carts for more purposeful use and greater transparency for parents about screen time. Technology's promise has failed while imposing significant costs on children and taxpayers, he writes. Bloomberg calls for a return to books and pens over laptops and tablets.
AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

Python

Codon Python Compiler Gets Faster - and Changes to Apache 2 License (usenix.org) 4

Slashdot reader rikfarrow summarizes an article they wrote for Usenix.org about the open source Python compiler Codon: In 2023 I tried out Codon. At the time I had difficulty compiling the scripts I most commonly used, but was excited by the prospect. Python is essentially single-threaded and checks the type of each variable at runtime as it interprets scripts. Codon fixes types at compile time and compiles Python into compact executable binaries that execute much faster.
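A minimal sketch of what that workflow looks like: ordinary Python source that also compiles under Codon because its types can be inferred statically. The build commands in the comments follow Codon's documented CLI; verify the exact flags against the docs for your release.

```python
# fib.py -- runs under both CPython and Codon. Codon infers that n, a, and b
# are ints at compile time and emits native code, so there is no per-variable
# runtime type check in the compiled binary.
#
# Typical Codon usage (check current docs; flags may differ by version):
#   codon run fib.py                  # JIT-compile and run in place
#   codon build -release -exe fib.py  # emit a native executable ./fib

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30))  # 832040 under either interpreter or compiler
```

Code that leans on Python's dynamism -- heterogeneous lists, monkey-patching, objects changing type mid-function -- is where Codon compiles fail, which matches the submitter's 2023 experience with everyday scripts.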

Several things have changed with their latest release: my scripts now compile successfully, the committers have added a compiled version of NumPy (high-performance math algorithms), and the open source license has changed to Apache 2.

"The other big news is that Exaloop, the company that is behind Codon, has changed their license to Apache 2..." according to the article, so "commercial use and derivations of Codon are now permitted without licensing."

Slashdot Top Deals