Education

Changes Are Coming To the ACT Exam (cnn.com) 81

Major changes are coming to the ACT college admissions exam in the spring, the CEO of ACT announced Monday. From a report: The exam will be evolving to "meet the challenges students and educators face" -- and that will include shortening the core test and making the science section optional, chief executive Janet Godwin said in a post on the non-profit's website. The changes will begin with national online tests in spring 2025 and be rolled out for school-day testing in spring 2026, Godwin said in the post. The decision to alter the ACT follows changes made to the SAT earlier this year by the College Board, the non-profit organization that develops and administers that test. The SAT was shortened by a third and went fully digital.

Science is being removed from the ACT's core sections, leaving English, reading and math as the portions that will result in a college-reportable composite score ranging from 1 to 36, Godwin wrote. The science section will become optional, as the ACT's writing section already is. "This means students can choose to take the ACT, the ACT plus science, the ACT plus writing, or the ACT plus science and writing," Godwin wrote. "With this flexibility, students can focus on their strengths and showcase their abilities in the best possible way."

IOS

iOS 18 Could 'Sherlock' $400 Million In App Revenue (techcrunch.com) 43

An anonymous reader quotes a report from TechCrunch: Apple's practice of turning ideas from its third-party developer community into new iOS and Mac features and apps carries a hefty price tag, a new report indicates. Ahead of its fall release, you can download the public beta for iOS 18 right now to get a firsthand look at Apple's changes, which may affect apps that today have an estimated $393 million in revenue and have been downloaded roughly 58 million times over the past year, according to an analysis by app intelligence firm Appfigures. Every June at Apple's Worldwide Developers Conference, the iPhone maker teases the upcoming releases of its software and operating systems, which often include features previously only available through third-party apps. The practice is so common now it's even been given a name: "sherlocking" -- a reference to Sherlock, a 1990s search app for Mac that borrowed features from a third-party app known as Watson. Now, when Apple launches a new feature that was previously the domain of a third-party app, it's said to have "sherlocked" the app. [...]

In an analysis of third-party apps that generated more than 1,000 downloads per year, Appfigures discovered several genres that had found themselves in Apple's crosshairs in 2024. In terms of worldwide gross revenue, these categories have generated significant income over the past 12 months, with the trail app category making the most at $307 million per year, led by market leader and 2023 Apple "App of the Year" AllTrails. Grammar helper apps, like Grammarly and others, also generated $35.7 million, while math helpers and password managers earned $23.4 million and $20.3 million, respectively. Apps for making custom emoji generated $7 million, too. Of these, trail apps accounted for the vast majority of "potentially sherlocked" revenue, or 78%, noted Appfigures, as well as 40% of downloads of sherlocked apps. In May 2024, they accounted for an estimated $28.8 million in gross consumer spending and 2.5 million downloads, to give you an idea of scale.

Many of these app categories were growing quickly, with math solvers having seen revenue growth of 43% year-over-year followed by grammar helpers (+40%), password managers (+38%) and trail apps (+28%). Emoji-making apps, however, were seeing declines at -17% year-over-year. By downloads, grammar helpers had seen 9.4 million installs over the past 12 months, while emoji makers saw 10.6 million, math-solving apps 9.5 million, and password managers 457,000.

"Although these apps certainly have dedicated user bases that may not immediately choose to switch to a first-party offering, Apple's ability to offer similar functionality built-in could be detrimental to their potential growth," concludes TechCrunch's Sarah Perez. "Casual users may be satisfied by Apple's 'good enough' solutions and won't seek out alternatives."
Power

Amazon Says It Now Runs On 100% Clean Power. Employees Say It's More Like 22% (fastcompany.com) 90

Today, Amazon announced that it reached its 100% renewable energy goal seven years ahead of schedule. However, as Fast Company's Adele Peters reports, "a group of Amazon employees argues that the company's math is misleading." From the report: A report (PDF) from the group, Amazon Employees for Climate Justice, argues that only 22% of the company's data centers in the U.S. actually run on clean power. The employees looked at where each data center was located and the mix of power on the regional grids -- how much was coming from coal, gas, or oil versus solar or wind. Amazon, like many other companies, buys renewable energy credits (RECs) for a certain amount of clean power that's produced by a solar plant or wind farm. In theory, RECs are supposed to push new renewable energy to get built. In reality, that doesn't always happen. The employee research found that 68% of Amazon's RECs are unbundled, meaning that they didn't fund new renewable infrastructure, but gave credit for renewables that already existed or were already going to be built.

As new data centers are built, they can mean that fossil-fuel-dependent grids end up building new fossil fuel power plants. "Dominion Energy, which is the utility in Virginia, is expanding because of demand, and Amazon is obviously one of their largest customers," says Eliza Pan, a representative from Amazon Employees for Climate Justice and a former Amazon employee. "Dominion's expansion is not renewable expansion. It's more fossil fuels." Amazon also doesn't buy credits that are specifically tied to the grids powering their data centers. The company might purchase RECs from Canada or Arizona, for example, to offset electricity used in Virginia. The credits also aren't tied to the time that the energy was used; data centers run all day and night, but most renewable energy is only available some of the time. The employee group argues that the company should follow the approach that Google takes. Google aims to use carbon-free energy, 24/7, on every grid where it operates.

Education

Curricula From Bill Gates-Backed 'Illustrative Math' Required In NYC High Schools (nyc.gov) 90

New York City announced a "major citywide initiative" to increase "math achievement" among students, according to the mayor's office.

93 middle schools and 420 high schools will implement an "Illustrative Math" curriculum (from an education nonprofit founded in 2011) combined with intensive teacher coaching, starting this fall. "The goal is to ensure that all New York City students develop math skills," according to the NYC Solves web site (with the mayor's office noting "years of stagnant math scores.") Long-time Slashdot reader theodp writes: The NYC Public Schools further explained, "As part of the NYC Solves initiative, all high schools will use Illustrative Mathematics and districts will choose a comprehensive, evidence-based curricula for middle school math instruction from an approved list. Each curriculum has been reviewed and recommended by EdReports, a nationally recognized nonprofit organization."

The About page for Illustrative Mathematics (IM) lists The Bill & Melinda Gates Foundation as a Philanthropic Supporter [as well as the Chan Zuckerberg Initiative and The William and Flora Hewlett Foundation], and lists two Gates Foundation Directors as Board members... A search of Gates Foundation records for "Illustrative Mathematics" turns up $25 million in committed grants since 2012, including a $13.9 million grant to Illustrative Mathematics in Nov. 2022 ("To support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes") and a $425,000 grant just last month to Educators for Excellence ("To engage teacher feedback on the implementation of Illustrative Mathematics curriculum and help middle school teachers learn about the potential for math high-quality instructional materials and professional learning in New York City").

EdReports, which vouched for the Illustrative Mathematics curriculum (according to New York's Education Department), has received $10+ million in committed Gates Foundation grants. The Gates Foundation is also a very generous backer of NYC's Fund for Public Schools, with grants that included $4,276,973 in October 2023 "to support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes."

Chalkbeat reported in 2018 on a new focus on high school curriculum by the Gates Foundation ("an area where we feel like we've underinvested," said Bill Gates). The Foundation made math education its top K-12 priority in Oct. 2022 with a $1.1 billion investment. Also note this May 2023 blog post from $14+ million Gates Foundation grantee Educators for Excellence, a New York City nonprofit. The blog post touts the key role the nonprofit had played in a year-long advocacy effort that ultimately "secured a major win" ending the city's curricula "free-for-all" and announced "a standardized algebra curriculum from Illustrative Mathematics will also be piloted at 150 high schools."

As the NY Times reported back in 2011, behind "grass-roots" school advocacy, there's Bill Gates!

AI

MIT Robotics Pioneer Rodney Brooks On Generative AI 41

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. He likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
Math

The Rubik's Cube Turns 50 (nytimes.com) 18

The Rubik's Cube turns 50 this year, but it's far from retiring. At a recent San Francisco conference, math buffs and puzzle fans celebrated the enduring appeal of Erno Rubik's invention, reports The New York Times. With a mind-boggling 43 quintillion possible configurations, the Cube has inspired countless variants and found uses in education and art.
AI

Chinese AI Tops Hugging Face's Revamped Chatbot Leaderboard 9

Alibaba's Qwen models dominated Hugging Face's latest LLM leaderboard, securing three top-ten spots. The new benchmark, launched Thursday, tests open-source models on tougher criteria including long-context reasoning and complex math. Meta's Llama3-70B also ranked highly, but several Chinese models outperformed Western counterparts. (Closed-source AIs like ChatGPT were excluded.) The leaderboard replaces an earlier version deemed too easy to game.
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
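The article doesn't spell out what replaces the multiplications, but much of the MatMul-free line of work relies on constraining weights to the ternary values {-1, 0, +1}, which lets a dense layer be computed with nothing but additions and subtractions. The toy sketch below illustrates that general idea only; it is not the paper's actual architecture, and the NumPy loop is written for clarity rather than speed.

```python
import numpy as np

# Toy illustration: with weights restricted to {-1, 0, +1}, the products in
# x @ W collapse to "add x[i]", "subtract x[i]", or "skip x[i]" -- no multiplies.
def ternary_matvec(x: np.ndarray, w_ternary: np.ndarray) -> np.ndarray:
    out = np.zeros(w_ternary.shape[1], dtype=x.dtype)
    for j in range(w_ternary.shape[1]):
        col = w_ternary[:, j]
        out[j] = x[col == 1].sum() - x[col == -1].sum()  # additions/subtractions only
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(8).astype(np.float32)
w = rng.integers(-1, 2, size=(8, 4)).astype(np.float32)  # entries drawn from {-1, 0, 1}

print(np.allclose(ternary_matvec(x, w), x @ w))  # True: same result as the matmul
```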

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPS, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
Red Hat Software

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification (networkworld.com) 36

In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally-safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so glibc) — a fundamental component of that Red Hat In-Vehicle Operating System.

From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meet or exceed stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens the confidence in Linux as a viable platform of choice for safety related automotive applications of the future...

By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety certified Linux-based operating system capable of concurrently supporting multiple safety and non-safety related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts.

"Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..."

Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."
AI

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels? (msn.com) 56

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..."

The Post warns that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge that some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta burns as much power annually as 7 million laptops running eight hours every day, based on data shared publicly by the company...

[Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement.

The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show... building heavily polluting fossil fuel plants that become necessary to stabilize the overall power grid because of these purchases, making sure everyone has enough electricity.

The article quotes a project director at the nonprofit Data & Society, which tracks the effects of AI; the director accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment."

The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country."
Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
Math

Mathematician Reveals 'Equals' Has More Than One Meaning In Math (sciencealert.com) 118

"It turns out that mathematicians actually can't agree on the definition of what makes two things equal, and that could cause some headaches for computer programs that are increasingly being used to check mathematical proofs," writes Clare Watson via ScienceAlert. The issue has prompted British mathematician Kevin Buzzard to re-examine the concept of equality to "challenge various reasonable-sounding slogans about equality." The research has been posted on arXiv. From the report: In familiar usage, the equals sign sets up equations that describe different mathematical objects that represent the same value or meaning, something which can be proven with a few switcharoos and logical transformations from side to side. For example, the integer 2 can describe a pair of objects, as can 1 + 1. But a second definition of equality has been used amongst mathematicians since the late 19th century, when set theory emerged. Set theory has evolved and with it, mathematicians' definition of equality has expanded too. A set like {1, 2, 3} can be considered 'equal' to a set like {a, b, c} because of an implicit understanding called canonical isomorphism, which compares similarities between the structures of groups.

"These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well," Buzzard told New Scientist's Alex Wilkins. However, taking canonical isomorphism to mean equality is now causing "some real trouble," Buzzard writes, for mathematicians trying to formalize proofs -- including decades-old foundational concepts -- using computers. "None of the [computer] systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol," Buzzard told Wilkins, referring to Alexander Grothendieck, a leading mathematician of the 20th century who relied on set theory to describe equality.

Some mathematicians think they should just redefine mathematical concepts to formally equate canonical isomorphism with equality. Buzzard disagrees. He thinks the incongruence between mathematicians and machines should prompt math minds to rethink what exactly they mean by mathematical concepts as foundational as equality so computers can understand them. "When one is forced to write down what one actually means and cannot hide behind such ill-defined words," Buzzard writes, "one sometimes finds that one has to do extra work, or even rethink how certain ideas should be presented."

AI

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo (venturebeat.com) 108

Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting way ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities.

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but only supported 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 and context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.
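For readers who want to try the open weights themselves, a minimal Hugging Face Transformers sketch might look like the following. The repository name used here is an assumption based on DeepSeek's usual naming for the 16B instruct model, and the prompt and generation settings are arbitrary; check the official model card before running.

```python
# Minimal sketch, assuming the 16B instruct checkpoint is published as
# "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct" (verify on Hugging Face first).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```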

Microsoft

The Verge's David Pierce Reports On the Excel World Championship From Vegas (theverge.com) 29

In a featured article for The Verge, David Pierce explores the world of competitive Excel, highlighting its rise from a hobbyist activity to a potential esport, showcased during the Excel World Championship in Las Vegas. Top spreadsheet enthusiasts competed at the MGM Grand to solve complex Excel challenges, emphasizing the transformative power and ubiquity of spreadsheets in both business and entertainment. An anonymous reader quotes an excerpt from the report: Competitive Excel has been around for years, but only in a hobbyist way. Most of the people in this room full of actuaries, analysts, accountants, and investors play Excel the way I play Scrabble or do the crossword -- exercising your brain using tools you understand. But last year's competition became a viral hit on ESPN and YouTube, and this year, the organizers are trying to capitalize. After all, someone points out to me, poker is basically just math, and it's all over TV. Why not spreadsheets? Excel is a tool. It's a game. Now it hopes to become a sport. I've come to realize in my two days in this ballroom that understanding a spreadsheet is like a superpower. The folks in this room make their living on their ability to take some complex thing -- a company's sales, a person's lifestyle, a region's political leanings, a race car -- and pull it apart into its many component pieces. If you can reduce the world down to a bunch of rows and columns, you can control it. Manipulate it. Build it and rebuild it in a thousand new ways, with a couple of hotkeys and an undo button at the ready. A good spreadsheet shows you the universe and gives you the ability to create new ones. And the people in this room, in their dad jeans and short-sleeved button-downs, are the gods on Olympus, bending everything to their will.

There is one inescapably weird thing about competitive Excel: spreadsheets are not fun. Spreadsheets are very powerful, very interesting, very important, but they are for work. Most of what happens at the FMWC is, in almost every practical way, indistinguishable from the normal work that millions of people do in spreadsheets every day. You can gussy up the format, shorten the timelines, and raise the stakes all you want -- the reality is you're still asking a bunch of people who make spreadsheets for a living to just make more spreadsheets, even if they're doing it in Vegas. You really can't overstate how important and ubiquitous spreadsheets really are, though. "Electronic spreadsheets" actually date back earlier than computers and are maybe the single most important reason computers first became mainstream. In the late 1970s, a Harvard MBA student named Dan Bricklin started to dream up a software program that could automatically do the math he was constantly doing and re-doing in class. "I imagined a magic blackboard that if you erased one number and wrote a new thing in, all of the other numbers would automatically change, like word processing with numbers," he said in a 2016 TED Talk. This sounds quaint and obvious now, but it was revolutionary then. [...]
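Bricklin's "magic blackboard," where changing one number ripples through everything that depends on it, is easy to sketch in a few lines. The toy model below is my own simplification of the idea (cells hold either plain values or formulas over other cells, recomputed on demand) and is not how Excel is actually implemented.

```python
# A toy spreadsheet: a cell holds either a plain value or a formula (a callable
# that reads other cells). Formulas are recomputed whenever they are read, so
# changing one input automatically changes everything downstream of it.
class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

sheet = Sheet()
sheet.set("A1", 10)
sheet.set("A2", 32)
sheet.set("A3", lambda s: s.get("A1") + s.get("A2"))  # =A1+A2
sheet.set("A4", lambda s: s.get("A3") * 2)            # =A3*2

print(sheet.get("A4"))  # 84
sheet.set("A1", 100)    # erase one number and write a new one in...
print(sheet.get("A4"))  # 264 -- ...and every dependent cell changes with it
```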

IOS

Apple Made an iPad Calculator App After 14 Years (theverge.com) 62

Jay Peters reports via The Verge: The iPad is finally getting a Calculator app as part of iPadOS 18. The long-requested app was just announced by Apple at WWDC 2024. On its face, the app looks a lot like the calculator you might be familiar with from iOS. But it also supports Apple Pencil, meaning that you can write down math problems and the app will solve them thanks to a feature Apple calls Math Notes. Other additions in iPadOS 18 include a new, customizable floating tab bar; enhanced SharePlay functionality for easier screen sharing and remote control of another person's iPad; and Smart Script, a handwriting feature that refines and improves legibility using machine learning.
Desktops (Apple)

Apple Unveils macOS 15 'Sequoia' at WWDC, Introduces Window Tiling and iPhone Mirroring (arstechnica.com) 35

At its Worldwide Developers Conference, Apple formally introduced macOS 15, codenamed "Sequoia." The new release combines features from iOS 18 with Mac-specific improvements. One notable addition is automated window tiling, allowing users to arrange windows on their screen without manual resizing or switching to full-screen mode. Another feature, iPhone Mirroring, streams the iPhone's screen to the Mac, enabling app use with the Mac's keyboard and trackpad while keeping the phone locked for privacy.

Gamers will appreciate the second version of Apple's Game Porting Toolkit, simplifying the process of bringing Windows games to macOS and vice versa. Sequoia also incorporates changes from iOS and iPadOS, such as RCS support and expanded Tapback reactions in Messages, a redesigned Calculator app, and the Math Notes feature for typed equations in Notes. Additionally, all Apple platforms and Windows will receive a new Passwords app, potentially replacing standalone password managers. A developer beta of macOS Sequoia is available today, with refined public betas coming in July and a full release planned for the fall.
Math

Crows Can 'Count' Out Loud, Study Shows (sciencealert.com) 39

An anonymous reader quotes a report from ScienceAlert: A team of scientists has shown that crows can 'count' out loud -- producing a specific and deliberate number of caws in response to visual and auditory cues. While other animals such as honeybees have shown an ability to understand numbers, this specific manifestation of numeric literacy has not yet been observed in any other non-human species. "Producing a specific number of vocalizations with purpose requires a sophisticated combination of numerical abilities and vocal control," writes the team of researchers led by neuroscientist Diana Liao of the University of Tubingen in Germany. "Whether this capacity exists in animals other than humans is yet unknown. We show that crows can flexibly produce variable numbers of one to four vocalizations in response to arbitrary cues associated with numerical values."

The ability to count aloud is distinct from understanding numbers. It requires not only that understanding, but purposeful vocal control with the aim of communication. Humans are known to use speech to count numbers and communicate quantities, an ability taught young. [...] "Our results demonstrate that crows can flexibly and deliberately produce an instructed number of vocalizations by using the 'approximate number system', a non-symbolic number estimation system shared by humans and animals," the researchers write in their paper. "This competency in crows also mirrors toddlers' enumeration skills before they learn to understand cardinal number words and may therefore constitute an evolutionary precursor of true counting where numbers are part of a combinatorial symbol system."
The findings have been published in the journal Science.
Bitcoin

MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says (arstechnica.com) 112

An anonymous reader quotes a report from Ars Technica: Within approximately 12 seconds, two highly educated brothers allegedly stole $25 million by tampering with the ethereum blockchain in a never-before-seen cryptocurrency scheme, according to an indictment that the US Department of Justice unsealed Wednesday. In a DOJ press release, US Attorney Damian Williams said the scheme was so sophisticated that it "calls the very integrity of the blockchain into question."

"The brothers, who studied computer science and math at one of the most prestigious universities in the world, allegedly used their specialized skills and education to tamper with and manipulate the protocols relied upon by millions of ethereum users across the globe," Williams said. "And once they put their plan into action, their heist only took 12 seconds to complete." Anton, 24, and James Peraire-Bueno, 28, were arrested Tuesday, charged with conspiracy to commit wire fraud, wire fraud, and conspiracy to commit money laundering. Each brother faces "a maximum penalty of 20 years in prison for each count," the DOJ said. The indictment goes into detail explaining that the scheme allegedly worked by exploiting the ethereum blockchain in the moments after a transaction was conducted but before the transaction was added to the blockchain.
To uncover the scheme, the special agent in charge, Thomas Fattorusso of the IRS Criminal Investigation (IRS-CI) New York Field Office, said that investigators "simply followed the money."

"Regardless of the complexity of the case, we continue to lead the effort in financial criminal investigations with cutting-edge technology and good-ol'-fashioned investigative work, on and off the blockchain," Fattorusso said.
Power

A Coal Billionaire is Building the World's Biggest Clean Energy Plant - Five Times the Size of Paris (cnn.com) 79

An anonymous reader shared this report from CNN: Five times the size of Paris. Visible from space. The world's biggest energy plant. Enough electricity to power Switzerland. The scale of the project transforming swathes of barren salt desert on the edge of western India into one of the most important sources of clean energy anywhere on the planet is so overwhelming that the man in charge can't keep up. "I don't even do the math any more," Sagar Adani told CNN in an interview last week.

Adani is executive director of Adani Green Energy Limited (AGEL). He's also the nephew of Gautam Adani, Asia's second richest man, whose $100 billion fortune stems from the Adani Group, India's biggest coal importer and a leading miner of the dirty fuel. Founded in 1988, the conglomerate has businesses in fields ranging from ports and thermal power plants to media and cements. Its clean energy unit AGEL is building the sprawling solar and wind power plant in the western Indian state of Gujarat at a cost of about $20 billion.

It will be the world's biggest renewable park when it is finished in about five years, and should generate enough clean electricity to power 16 million Indian homes... [T]he park will cover more than 200 square miles and be the planet's largest power plant regardless of the energy source, AGEL said.

CNN adds that the company "plans to invest $100 billion into energy transition over the next decade, with 70% of the investments ear-marked for clean energy."
XBox (Games)

Xbox Console Sales Are Tanking As Microsoft Brings Games To PS5 (kotaku.com) 25

In its third-quarter earnings call on Thursday, Microsoft reported a 30% drop in Xbox console sales, after reporting a 30% drop last April. "It blamed the nosedive on a 'lower volume of consoles sold' during the start of 2024," reports Kotaku. From the report: In February, Grand Theft Auto VI parent company Take-Two claimed in a presentation to investors that there were roughly 77 million "gen 9" consoles in people's homes. It didn't take fans long to do the math and speculate that Microsoft had only sold around 25 million Xbox Series X/S consoles to-date. That puts it ahead of the GameCube but behind the Nintendo 64, at least for now. Given the results this quarter as well, it doesn't seem like Game Pass and Starfield have moved the needle much. Maybe that will change once Call of Duty, which Microsoft acquired last fall along with the rest of Activision Blizzard, finally makes its way to Game Pass. Diablo IV only just arrived on the Netflix-like subscription platform this month. But given the fact that the fate of Xbox Series X/S appears to be locked in at this point, it's easy to see why Microsoft is looking at other places it can put its games.

Sea of Thieves, the last of four games in this initial volley to come to PS5, dominated the PlayStation Store's top sellers list last week on pre-orders alone. CEO Satya Nadella specifically called this out during a call with investors, noting that Microsoft had more games in the top 25 best sellers on PS5 than any other publisher. "We are committed to meeting players where they are by bringing great games to more people on more devices," he said. If players there continue to flock to the live-service pirate sim, it's not hard to imagine Microsoft bringing another batch of its first-party exclusives to the rival platform. Whether that means more recent blockbusters like Starfield or the upcoming Indiana Jones game will someday make the journey remains to be seen.

Math

A Chess Formula Is Taking Over the World (theatlantic.com) 28

An anonymous reader quotes a report from The Atlantic: In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard's online dorm directories, gathered a massive collection of students' headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash -- and, by extension, set Zuckerberg on the path to building the world's dominant social-media empire -- was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports -- soccer, football, basketball -- and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. [...]
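The update rule the article describes is compact enough to show directly. Here is a minimal Python sketch of the standard Elo formulas; the K-factor of 32 and the 1500-range starting ratings are common conventions, not values taken from the article.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, score_a: float, k: float = 32) -> tuple[float, float]:
    """New ratings after one game; score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# Beating a slightly higher-ranked player moves your rating a little...
print(update(1500, 1550, 1.0))  # roughly (1518, 1532)
# ...while beating a much higher-ranked player moves it a lot.
print(update(1500, 1900, 1.0))  # roughly (1529, 1871)
```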

Elo ratings don't inherently have anything to do with chess. They're based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition -- which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams -- a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver's 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system's shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. "It is a measuring tool, not a device of reward or punishment," he once remarked. "It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior."
