Classic Games (Games)

Teaching Children To Play Chess Found To Decrease Risk Aversion (phys.org) 132

An anonymous reader quotes a report from Phys.Org: A trio of researchers from Monash University and Deakin University has found that teaching children to play chess can reduce their aversion to risk. In their paper published in the Journal of Development Economics, Asad Islam, Wang-Sheng Lee and Aaron Nicholas describe studying the impact of learning chess on 400 children in Bangladesh. The researchers found that most of the children experienced a decrease in risk aversion in a variety of game playing scenarios. They also noticed that playing chess led to better math scores for some of the students and to improvements in logic and rational thinking.

The researchers note that the game of chess is very well suited to building confidence in risk taking when there is reason to believe it might improve an outcome. In contrast, students also learned to avoid taking risks haphazardly, finding that such risks rarely lead to a positive outcome. The [...] line between good and poor risk-taking is especially evident in chess, which means that the more a person plays, the sharper their skills become. The researchers also found that the skills learned during chess playing appeared to be long lasting -- most of the children retained their decrease in risk aversion a full year after the end of their participation in the study. The researchers [...] did not find any evidence of changes in other cognitive skills, such as improvements in grades other than math or general creativity.

Verizon

Verizon Will Shut Down Its 3G Network In 2022 (engadget.com) 64

An anonymous reader quotes a report from Engadget: Verizon will shut down its 3G services on December 31st, 2022, VP of network engineering Mike Haberman announced today. According to Haberman, less than 1 percent of Verizon customers still access the 3G network, with 99 percent on 4G LTE or 5G. Verizon has roughly 94 million customers, so by the company's own math, as many as 940,000 people are still using Verizon's 3G network.
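The summary's arithmetic is easy to reproduce (a trivial sketch; the 94 million figure is the article's, and "less than 1 percent" is treated as an upper bound):

```python
subscribers = 94_000_000  # Verizon's approximate customer base, per the report
share_on_3g = 0.01        # "less than 1 percent" of customers, as an upper bound

# Upper bound on the number of customers still on the 3G network
still_on_3g = subscribers * share_on_3g
print(round(still_on_3g))  # 940000
```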

"Customers who still have a 3G device will continue to be strongly encouraged to make a change now," Haberman wrote. "As we move closer to the shut-off date customers still accessing the 3G network may experience a degradation or complete loss of service, and our service centers will only be able to offer extremely limited troubleshooting help on these older devices." Verizon has been teasing a shut-off of its 3G CDMA services for years. [...] The delay to 2022 is final — there will be no more extensions, Haberman said. He noted that this will be "months after our competitors have shut off their networks completely."

Math

Quantum Computer Solves Decades-Old Problem Three Million Times Faster Than a Classical Computer (zdnet.com) 77

ZDNet reports: Scientists from quantum computing company D-Wave have demonstrated that, using a method called quantum annealing, they could simulate some materials up to three million times faster than it would take with corresponding classical methods.

Together with researchers from Google, the scientists set out to measure the speed of simulation in one of D-Wave's quantum annealing processors, and found that performance increased with both simulation size and problem difficulty, to reach a million-fold speedup over what could be achieved with a classical CPU... The calculation that D-Wave and Google's teams tackled is a real-world problem; in fact, it has already been resolved by Vadim Berezinskii, J. Michael Kosterlitz and David Thouless (Kosterlitz and Thouless shared the 2016 Nobel Prize in Physics for this line of work; Berezinskii died in 1980, before the prize was awarded), who studied the behavior of so-called "exotic magnetism", which occurs in quantum magnetic systems....
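For readers unfamiliar with annealing in general, classical simulated annealing explores a spin system's energy landscape by accepting energy-raising moves with a temperature-dependent probability that shrinks as the system cools. The toy below anneals a one-dimensional Ising chain toward its ground state; it illustrates the classical idea only, not the specific classical algorithms benchmarked in the D-Wave paper, and every parameter here is an arbitrary choice:

```python
import math
import random

def anneal_ising_chain(n=16, steps=5000, seed=1):
    """Metropolis simulated annealing on a ferromagnetic 1-D Ising ring.

    Energy is E = -sum_i s_i * s_{i+1}; the ground state (all spins
    aligned) has E = -n. Returns the final spins and their energy.
    """
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]

    def energy():
        return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

    for step in range(steps):
        temp = 3.0 * (1 - step / steps) + 1e-3  # linear cooling schedule
        i = rng.randrange(n)
        # Energy change from flipping spin i (it couples to two neighbors).
        delta = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            spins[i] = -spins[i]
    return spins, energy()

spins, final_energy = anneal_ising_chain()
# The ground-state energy is -16; annealing should land at or near it.
assert final_energy <= -8
```

A quantum annealer attacks the same style of energy-minimization problem, but exploits quantum rather than thermal fluctuations to escape local minima.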

Instead of proving quantum supremacy, which happens when a quantum computer runs a calculation that is impossible to resolve with classical means, D-Wave's latest research demonstrates that the company's quantum annealing processors can lead to a computational performance advantage... "What we see is a huge benefit in absolute terms," said Andrew King, director of performance research at D-Wave. "This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn't have been possible today without D-Wave's lower noise processor."

Equally significant as the performance milestone, said D-Wave's team, is the fact that the quantum annealing processors were used to run a practical application, instead of a proof-of-concept or an engineered, synthetic problem with little real-world relevance. Until now, quantum methods have mostly been leveraged to prove that the technology has the potential to solve practical problems, but they have yet to make tangible marks in the real world.

Looking ahead to the future, long-time Slashdot reader schwit1 asks, "Is this bad news for encryption that depends on brute-force calculations being prohibitively difficult?"

Earth

Solar and Wind Are Reaching for the Last 90% of the US Power Market (bloomberg.com) 253

An anonymous reader shares a report: Three decades ago, the U.S. passed an infinitesimal milestone: solar and wind power generated one-tenth of one percent of the country's electricity. It took 18 years, until 2008, for solar and wind to reach 1% of U.S. electricity. It took 12 years for solar and wind to increase by another factor of 10. In 2020, wind and solar generated 10.5% of U.S. electricity. If this sounds a bit like a math exercise, that's because it is. Anything growing at a compounded rate of nearly 18%, as U.S. wind and solar have done for the past three decades, will double in four years, then double again four years after that, then again four years after that, and so on. It gets confusing to think in so many successive doublings, especially when they occur more than twice a decade. Better, then, to think in orders of magnitude -- in powers of 10.
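The compounding claim checks out: at a constant growth rate g, the doubling time is ln 2 / ln(1 + g). A quick verification sketch using the article's own figures:

```python
import math

growth = 0.18  # "nearly 18%" compound annual growth
doubling_years = math.log(2) / math.log(1 + growth)
print(round(doubling_years, 1))  # 4.2 -- about four years per doubling

# Average growth rate implied by going from 0.1% (1990) to 10.5% (2020)
implied_cagr = (0.105 / 0.001) ** (1 / 30) - 1
print(round(implied_cagr * 100, 1))  # 16.8 -- consistent with "nearly 18%"
```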

There are a number of reasons why exponential consideration matters. The first is that U.S. power demand isn't growing, and hasn't since wind and solar reached that 1% milestone in the late 2000s. That means that the growth of wind and solar -- and that of natural gas-fired power -- have come entirely at the expense of coal-fired power. That replacement of coal with either natural gas (half the emissions of coal) or with wind and solar (zero emissions) is certainly an environmental achievement. Coupled with last year's massive drop in emissions, that power shift also makes it much easier for the U.S. to meet its Paris Agreement obligations.

Math

Machines Are Inventing New Math We've Never Seen (vice.com) 44

An anonymous reader quotes a report from Motherboard: [A] group of researchers from the Technion in Israel and Google in Tel Aviv presented an automated conjecturing system that they call the Ramanujan Machine, named after the mathematician Srinivasa Ramanujan, who developed thousands of innovative formulas in number theory with almost no formal training. The software system has already conjectured several original and important formulas for universal constants that show up in mathematics. The work was published last week in Nature.

One of the formulas created by the Machine can be used to compute the value of a universal constant called Catalan's constant more efficiently than any previous human-discovered formulas. But the Ramanujan Machine is imagined not to take over mathematics, so much as to provide a sort of feeding line for existing mathematicians. As the researchers explain in the paper, the entire discipline of mathematics can be broken down into two processes, crudely speaking: conjecturing things and proving things. Given more conjectures, there is more grist for the mill of the mathematical mind, more for mathematicians to prove and explain. That's not to say their system is unambitious. As the researchers put it, the Ramanujan Machine is "trying to replace the mathematical intuition of great mathematicians and providing leads to further mathematical research." In particular, the researchers' system produces conjectures for the value of universal constants (like pi), written in terms of elegant formulas called continued fractions. Continued fractions are essentially fractions, but more dizzying. The denominator in a continued fraction includes a sum of two terms, the second of which is itself a fraction, whose denominator itself contains a fraction, and so on, out to infinity.

The Ramanujan Machine is built off of two primary algorithms. These find continued fraction expressions that, with a high degree of confidence, seem to equal universal constants. That confidence is important, as otherwise, the conjectures would be easily discarded and provide little value. Each conjecture takes the form of an equation. The idea is that the quantity on the left side of the equals sign, a formula involving a universal constant, should be equal to the quantity on the right, a continued fraction. To get to these conjectures, the algorithm picks arbitrary universal constants for the left side and arbitrary continued fractions for the right, and then computes each side separately to a certain precision. If the two sides appear to align, the quantities are calculated to higher precision to make sure their alignment is not a coincidence of imprecision. Critically, formulas already exist to compute the value of universal constants like pi to an arbitrary precision, so that the only obstacle to verifying the sides match is computing time.
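The compare-at-increasing-precision loop can be sketched in a few lines. This toy checker uses the golden ratio and its textbook all-ones continued fraction rather than any formula the Ramanujan Machine actually found, and the precision thresholds are arbitrary:

```python
from fractions import Fraction

def continued_fraction(num, den, depth):
    """Evaluate den(0) + num(1)/(den(1) + num(2)/(den(2) + ...)) exactly,
    truncated at a finite depth, using rational arithmetic."""
    value = Fraction(den(depth))
    for n in range(depth, 0, -1):
        value = den(n - 1) + Fraction(num(n)) / value
    return value

def looks_equal(constant, cf_value, digits):
    """Numeric match test: do the two sides agree to `digits` decimals?"""
    return abs(constant - float(cf_value)) < 10 ** (-digits)

# Candidate pairing: the golden ratio and the continued fraction of all 1s.
phi = (1 + 5 ** 0.5) / 2
shallow = continued_fraction(lambda n: 1, lambda n: 1, depth=10)
deep = continued_fraction(lambda n: 1, lambda n: 1, depth=40)

# A low-precision match is only promoted to a conjecture if it survives a
# deeper, higher-precision recheck (ruling out coincidences of imprecision).
print(looks_equal(phi, shallow, 3), looks_equal(phi, deep, 12))  # True True
```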

Math

Quixotic Californian Crusade To Officially Recognize the Hellabyte (theregister.com) 128

An anonymous reader quotes a report from The Register: In 2010, Austin Sendek, then a physics student at UC Davis, created a petition seeking recognition for prefix "hella-" as an official International System of Units (SI) measurement representing 10^27. "Northern California is home to many influential research institutions, including the University of California, Davis, the University of California, Berkeley, Stanford University, and the Lawrence Livermore and Lawrence Berkeley National Laboratories," he argued. "However, science isn't all that sets Northern California apart from the rest of the world. The area is also the only region in the world currently practicing widespread usage of the English slang 'hella,' which typically means 'very,' or can refer to a large quantity (e.g. 'there are hella stars out tonight')."

To this day, the SI describes prefixes for quantities up to 10^24. Those with that many bytes have a yottabyte. If you only have 10^21 bytes, you have a zettabyte. There's also exabyte (10^18), petabyte (10^15), terabyte (10^12), gigabyte (10^9), and so on. Support for "hella-" would allow you to talk about hellabytes of data, he argues, pointing out that the number of atoms in 12 kg of carbon-12 would be simplified from 600 yottaatoms to 0.6 hellaatoms. Similarly, the sun (mass of 2.2 hellatons) would release energy at 0.3 hellawatts, rather than 300 yottawatts. [...] The soonest [a proposal for a "hella-" SI could be officially adopted] is in November 2022, at the quadrennial meeting of the International Bureau of Weights and Measures (BIPM)'s General Conference on Weights and Measures, where changes to the SI usually must be agreed upon.
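The conversions above are just shifts by powers of 10, which a small lookup table makes mechanical (note that "hella-" is the petition's proposal, not an official SI prefix):

```python
# Decimal SI prefixes from kilo- upward, plus the proposed "hella-".
SI_PREFIXES = {
    "kilo": 1e3, "mega": 1e6, "giga": 1e9, "tera": 1e12,
    "peta": 1e15, "exa": 1e18, "zetta": 1e21, "yotta": 1e24,
    "hella": 1e27,  # petitioned prefix; NOT recognized by the SI
}

def convert(value, from_prefix, to_prefix):
    """Re-express a quantity under a different prefix."""
    return value * SI_PREFIXES[from_prefix] / SI_PREFIXES[to_prefix]

carbon_atoms = convert(600, "yotta", "hella")  # ~0.6 hellaatoms in 12 kg of C-12
sun_output = convert(300, "yotta", "hella")    # ~0.3 hellawatts
```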

The report notes that Google customized its search engine in 2010 to let you convert "bytes to hellabytes." A year later, Wolfram Alpha added support for "hella-" calculations.

"Sendek said 'hellabyte' initially started as a joke with some college friends but became a more genuine concern as he looked into how measurements get defined and as his proposal garnered support," reports The Register. He believes it could be useful for astronomical measurements.

AI

Calculations Show It'll Be Impossible To Control a Super-Intelligent AI (sciencealert.com) 194

schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know whether particular programs halt, it's logically impossible to find a method that decides the question for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not -- it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
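The asymmetry Turing identified (some programs can be certified as halting, while no procedure settles all of them) can be illustrated with a step-bounded checker. This is a toy sketch of the gap, not the paper's formal argument; modeling programs as Python generators is an arbitrary choice:

```python
def halts_within(make_program, max_steps):
    """Bounded halting check for a program modeled as a generator.

    Returns True if the program halts within max_steps steps, and None
    ("undecided") otherwise. No finite bound can ever turn "undecided"
    into a proof of non-halting, which is the gap Turing's result makes
    unbridgeable in general.
    """
    program = make_program()
    for _ in range(max_steps):
        try:
            next(program)
        except StopIteration:
            return True  # provably halts
    return None  # budget exhausted: no verdict either way

def countdown():
    n = 10
    while n:      # halts after ten steps
        n -= 1
        yield

def forever():
    while True:   # never halts -- but the checker cannot prove that
        yield

assert halts_within(countdown, 100) is True
assert halts_within(forever, 100) is None
```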

The alternative to teaching AI some ethics and telling it not to destroy the world -- something which no algorithm can be absolutely certain of doing, the researchers say -- is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence -- the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

AI

New XPrize Challenge: Predicting Covid-19's Spread and Prescribing Interventions (ieee.org) 22

Slashdot reader the_newsbeagle shares an article from IEEE Spectrum: Many associate XPrize with a $10-million award offered in 1996 to motivate a breakthrough in private space flight. But the organization has since held other competitions related to exploration, ecology, and education. And in November, they launched the Pandemic Response Challenge, which will culminate in a $500,000 award to be split between two teams that not only best predict the continuing global spread of COVID-19, but also prescribe policies to curtail it...

For Phase 1, teams had to submit prediction models by 22 December... Up to 50 teams will make it to Phase 2, where they must submit a prescription model... The top two teams will split half a million dollars. The competition may not end there. Amir Banifatemi, XPrize's chief innovation and growth officer, says a third phase might test models on vaccine deployment prescriptions. And beyond the contest, some cities or countries might put some of the Phase 2 or 3 models into practice, if Banifatemi can find adventurous takers.

The organizers expect a wide variety of solutions. Banifatemi says the field includes teams from AI strongholds such as Stanford, Microsoft, MIT, Oxford, and Quebec's Mila, but one team consists of three women in Tunisia. In all, 104 teams from 28 countries have registered. "We're hoping that this competition can be a springboard for developing solutions for other really big problems as well," Risto Miikkulainen says. Those problems include pandemics, global warming, and challenges in business, education, and healthcare. In this scenario, "humans are still in charge," he emphasizes. "They still decide what they want, and AI gives them the best alternatives from which the decision-makers choose."

But Miikkulainen hopes that data science can help humanity find its way. "Maybe in the future, it's considered irresponsible not to use AI for making these policies," he says.

For the Covid-19 competition, Banifatemi emphasized that one goal was "to make the resulting insights available freely to everyone, in an open-source manner — especially for all those communities that may not have access to data and epidemiology divisions, statisticians, or data scientists."

Intel

Linus Torvalds Rails At Intel For 'Killing' the ECC Industry (theregister.com) 218

An anonymous reader quotes a report from The Register: Linux creator Linus Torvalds has accused Intel of preventing widespread use of error-correcting memory and being "instrumental in killing the whole ECC industry with its horribly bad market segmentation." ECC stands for error-correcting code. ECC memory uses additional parity bits to verify that the data read from memory is the same as the data that was written. Without this check, memory is vulnerable to occasional corruption where a bit is flipped spontaneously, for example, by background radiation. Memory can also be attacked using a technique called Rowhammer, where rapid repeated reads of the same memory locations can cause adjacent locations to change their state. ECC memory solves these problems and has been available for over 50 years yet most personal computers do not use it. Cost is a factor but what riles Torvalds is that Intel has made ECC support a feature of its Xeon range, aimed at servers and high-end workstations, and does not support it in other ranges such as the Core series.
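The parity-bit idea can be seen in miniature with a classic Hamming(7,4) code, which stores 4 data bits in 7 and can locate and flip any single corrupted bit. Real ECC DIMMs use wider SECDED codes over 64-bit words, so this is only an illustrative sketch:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (parity bits at positions 1, 2, 4)."""
    p1 = d[0] ^ d[1] ^ d[3]  # covers codeword positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]  # covers codeword positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Recompute the parities; their pattern (the syndrome) is the 1-based
    position of a single flipped bit, or 0 if the codeword is clean."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1  # a spontaneous bit flip, e.g. from background radiation
assert hamming74_correct(codeword) == data
```

A disturbance that flips one bit in a protected word is exactly what such codes repair; correcting flips beyond that budget is what the wider codes, and the on-die ECC Torvalds says manufacturers are now adding, are meant to address.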

The topic came up in a discussion about AMD's new Zen 3 Ryzen 9 5000 series processors on the Real World Tech forum site. AMD has semi-official ECC support in most of its processors. "I don't really see AMD's unofficial ECC support being a big deal," said an unwary contributor. "ECC absolutely matters," retorted Torvalds. "Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously. And if you don't believe me, then just look at multiple generations of rowhammer, where each time Intel and memory manufacturers bleated about how it's going to be fixed next time... And yes, that was -- again -- entirely about the misguided and arse-backwards policy of 'consumers don't need ECC', which made the market for ECC memory go away."

The accusation is significant particularly at a time when security issues are high on the agenda. The suggestion is that Intel's marketing decisions have held back adoption of a technology that makes users more secure -- though rowhammer is only one of many potential attack mechanisms -- as well as making PCs more stable. "The arguments against ECC were always complete and utter garbage. Now even the memory manufacturers are starting to do ECC internally because they finally owned up to the fact that they absolutely have to," said Torvalds. Torvalds said that Xeon prices deterred usage. "I used to look at the Xeon CPU's, and I could never really make the math work. The Intel math was basically that you get twice the CPU for five times the price. So for my personal workstations, I ended up using Intel consumer CPU's." Prices, he said, dropped last year "because of Ryzen and Threadripper... but it was a 'too little, much too late' situation." By way of mitigation, he added that "apart from their ECC stance I was perfectly happy with [Intel's] consumer offerings."

Programming

Study Finds Brain Activity of Coders Isn't Like Language or Math (boingboing.net) 88

"When you do computer programming, what sort of mental work are you doing?" asks science/tech journalist Clive Thompson: For a long time, folks have speculated on this. Since coding involves pondering hierarchies of symbols, maybe the mental work is kinda like writing or reading? Others have speculated it's more similar to the way our brains process math and puzzles. A group of MIT neuroscientists recently did fMRI brain-scans of young adults while they were solving a small coding challenge using a textual programming language (Python) and a visual one (Scratch Jr.). The results?

The brain activity wasn't similar to when we process language. Instead, coding seems to activate the "multiple demand network," which — as the scientists note in a public-relations writeup of their work — "is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles."

So, coding is more like doing math than processing language?

Sorrrrrrt of ... but not exactly so. The scientists saw activity patterns that differ from those you'd see during math, too.

The upshot: Coding — in this (very preliminary!) work, anyway — looks to be a little different from either language or math. As the scientists note, in a media release...

"Understanding computer code seems to be its own thing...."

Just anecdotally — having interviewed hundreds of coders and computer scientists for my book CODERS — I've met amazing programmers and computer scientists with all manner of intellectual makeups. There were math-heads, and there were people who practically counted on their fingers. There were programmers obsessed with — and eloquent in — language, and ones gently baffled by written and spoken communication. Lots of musicians, lots of folks who slid in via a love of art and visual design, then whose brains just seized excitedly on the mouthfeel of algorithms.

Math

The Lasting Lessons of John Conway's Game of Life 84

Siobhan Roberts, writing for The New York Times: In March of 1970, Martin Gardner opened a letter jammed with ideas for his Mathematical Games column in Scientific American. Sent by John Horton Conway, then a mathematician at the University of Cambridge, the letter ran 12 pages, typed hunt-and-peck style. Page 9 began with the heading "The game of life." It described an elegant mathematical model of computation -- a cellular automaton, a little machine, of sorts, with groups of cells that evolve from iteration to iteration, as a clock advances from one second to the next. Dr. Conway, who died in April, having spent the latter part of his career at Princeton, sometimes called Life a "no-player, never-ending game." Mr. Gardner called it a "fantastic solitaire pastime." The game was simple: Place any configuration of cells on a grid, then watch what transpires according to three rules that dictate how the system plays out.

Birth rule: An empty, or "dead," cell with precisely three "live" neighbors (full cells) becomes live.
Death rule: A live cell with zero or one neighbors dies of isolation; a live cell with four or more neighbors dies of overcrowding.
Survival rule: A live cell with two or three neighbors remains alive.
With each iteration, some cells live, some die and "Life-forms" evolve, one generation to the next. Among the first creatures to emerge was the glider -- a five-celled organism that moved across the grid with a diagonal wiggle and proved handy for transmitting information. It was discovered by a member of Dr. Conway's research team, Richard Guy, in Cambridge, England. The glider gun, producing a steady stream of gliders, was discovered soon after by Bill Gosper, then at the Massachusetts Institute of Technology.
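The three rules translate almost line-for-line into code. A minimal sketch using a sparse set of live cells (the coordinate convention is arbitrary):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Life one generation. `live` is a set of (x, y) cells."""
    # Count how many live neighbors every cell on the board has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: exactly 3 neighbors. Survival: 2 or 3. Everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The five-cell glider: after four generations it reappears shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
assert cells == {(x + 1, y + 1) for x, y in glider}
```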

AI

AI Solves Schrödinger's Equation (phys.org) 67

An anonymous reader quotes a report from Phys.Org: A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult. Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universität has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency.

The deep neural network designed by [the] team is a new way of representing the wave functions of electrons. "Instead of the standard approach of composing the wave function from relatively simple mathematical components, we designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei," [Professor Frank Noé, who led the team effort] explains. "One peculiar feature of electronic wave functions is their antisymmetry. When two electrons are exchanged, the wave function must change its sign. We had to build this property into the neural network architecture for the approach to work," adds [Dr. Jan Hermann of Freie Universität Berlin, who designed the key features of the method in the study]. This feature, known as 'Pauli's exclusion principle,' is why the authors called their method 'PauliNet.' Besides the Pauli exclusion principle, electronic wave functions also have other fundamental physical properties, and much of the innovative success of PauliNet is that it integrates these properties into the deep neural network, rather than letting deep learning figure them out by just observing the data. "Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field," says Noé. "This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on."

The results were published in the journal Nature Chemistry.
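The antisymmetry Hermann describes has a classic concrete form: arranging single-particle orbitals into a determinant, which flips sign whenever two electrons (two rows) are exchanged. A minimal sketch with made-up orbitals, purely to illustrate the sign flip, not PauliNet's architecture:

```python
import math

def slater_det_2e(phi0, phi1, r1, r2):
    """Two-electron antisymmetric wave function: a 2x2 Slater determinant.
    Swapping the electrons swaps the matrix rows, so the sign flips; this
    is the Pauli antisymmetry that PauliNet builds into its network."""
    return phi0(r1) * phi1(r2) - phi0(r2) * phi1(r1)

def s_orbital(r):   # hydrogen-like 1s shape, for illustration only
    return math.exp(-math.dist(r, (0, 0, 0)))

def p_orbital(r):   # crude p-like orbital, for illustration only
    return r[0] * math.exp(-math.dist(r, (0, 0, 0)))

r_a, r_b = (0.1, 0.2, 0.3), (0.5, -0.4, 0.2)
psi = slater_det_2e(s_orbital, p_orbital, r_a, r_b)
psi_swapped = slater_det_2e(s_orbital, p_orbital, r_b, r_a)
assert psi != 0 and math.isclose(psi, -psi_swapped)  # sign flip under exchange
```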

Education

After Canceling Exam, College Board Touts Record Number of AP CSP Exam Takers 47

theodp writes: Q. How many AP Computer Science Principles 'exam takers' would you have if you cancelled the AP CSP exam due to the coronavirus? A. More than 116,000!

That's according to the math behind a new College Board press release, which boasts, "In 2020, more than 116,000 students took the AP CSP Exam -- more than double the number of exam takers in the course's first year, and a 21% increase over the previous year. In 2020, 39,570 women took the AP CSP exam, nearly three times the number who tested in 2017." Which is somewhat confusing, since the College Board actually cancelled the 2020 AP CSP Exam last spring, explaining to students, "This year, there will be no end-of-year multiple-choice exam in Computer Science Principles [the exam was to have counted for 60% of students' scores] -- your AP score will be computed from the Create and Explore performance tasks only."

Still, Sunday's Washington Post reported the good PR news, as did tech-bankrolled College Board partner Code.org, which exclaimed, "Young women set records in computer science exams, again!" In 2018, Code.org lamented that many students enrolled in AP CSP wouldn't get college credit for the course "because they don't take the exam", so perhaps an increase in AP CSP scores awarded -- if not AP CSP exams taken -- should be added to the list of silver linings of the pandemic.

Medicine

Poor Countries Face Long Wait for Vaccines Despite Promises 235

With Americans, Britons and Canadians rolling up their sleeves to receive coronavirus vaccines, the route out of the pandemic now seems clear to many in the West, even if the rollout will take many months. But for poorer countries, the road will be far longer and rougher. From a report: The ambitious initiative known as COVAX created to ensure the entire world has access to COVID-19 vaccines has secured only a fraction of the 2 billion doses it hopes to buy over the next year, has yet to confirm any actual deals to ship out vaccines and is short on cash. The virus that has killed more than 1.6 million people has exposed vast inequities between countries, as fragile health systems and smaller economies were often hit harder. COVAX was set up by the World Health Organization, vaccines alliance GAVI and CEPI, a global coalition to fight epidemics, to avoid the international stampede for vaccines that has accompanied past outbreaks and would reinforce those imbalances.

But now some experts say the chances that coronavirus shots will be shared fairly between rich nations and the rest are fading fast. With vaccine supplies currently limited, developed countries, some of which helped fund the research with taxpayer money, are under tremendous pressure to protect their own populations and are buying up shots. Meanwhile, some poorer countries that signed up to the initiative are looking for alternatives because of fears it won't deliver. "It's simple math," said Arnaud Bernaert, head of global health at the World Economic Forum. Of the approximately 12 billion doses the pharmaceutical industry is expected to produce next year, about 9 billion shots have already been reserved by rich countries. "COVAX has not secured enough doses, and the way the situation may unfold is they will probably only get these doses fairly late." To date, COVAX's only confirmed, legally binding agreement is for up to 200 million doses, though that includes an option to order several times that number of additional doses, GAVI spokesman James Fulker said. It has agreements for another 500 million vaccines, but those are not legally binding.

Math

Are Fragments of Energy the Fundamental Building Blocks of the Universe? (theconversation.com) 99

hcs_$reboot shares a remarkable new theory from Larry M. Silverberg, an aerospace engineering professor at North Carolina State University (with colleague Jeffrey Eischen). They're proposing that matter is not made of particles (or even waves), as was long thought, but fragments of energy.

[W]hile the theories and math of waves and particles allow scientists to make incredibly accurate predictions about the universe, the rules break down at the largest and tiniest scales. Einstein proposed a remedy in his theory of general relativity. Using the mathematical tools available to him at the time, Einstein was able to better explain certain physical phenomena and also resolve a longstanding paradox relating to inertia and gravity. But instead of improving on particles or waves, he eliminated them as he proposed the warping of space and time. Using newer mathematical tools, my colleague and I have demonstrated a new theory that may accurately describe the universe... Instead of basing the theory on the warping of space and time, we considered that there could be a building block that is more fundamental than the particle and the wave....

Much to our surprise, we discovered that there were only a limited number of ways to describe a concentration of energy that flows. Of those, we found just one that works in accordance with our mathematical definition of flow. We named it a fragment of energy... Using the fragment of energy as a building block of matter, we then constructed the math necessary to solve physics problems... More than 100 [years] ago, Einstein had turned to two legendary problems in physics to validate general relativity: the ever-so-slight yearly shift — or precession — in Mercury's orbit, and the tiny bending of light as it passes the Sun... In both problems, we calculated the trajectories of the moving fragments and got the same answers as those predicted by the theory of general relativity. We were stunned.

Our initial work demonstrated how a new building block is capable of accurately modeling bodies from the enormous to the minuscule. Where particles and waves break down, the fragment of energy building block held strong. The fragment could be a single potentially universal building block from which to model reality mathematically — and update the way people think about the building blocks of the universe.

Math

Physicists Nail Down the 'Magic Number' That Shapes the Universe (quantamagazine.org) 177

Natalie Wolchover writes via Quanta Magazine: As fundamental constants go, the speed of light, c, enjoys all the fame, yet c's numerical value says nothing about nature; it differs depending on whether it's measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It's a pure number that shapes the universe to an astonishing degree -- "a magic number that comes to us with no understanding," as Richard Feynman described it. Paul Dirac considered the origin of the number "the most fundamental unsolved problem of physics."

Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. [...] The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn't exist.

Today, in a new paper in the journal Nature, a team of four physicists led by Saida Guellati-Khelifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant's value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.) With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement in 2018 by Muller's group at Berkeley, the main competition. (Guellati-Khelifa made the most precise measurement before Muller's in 2011.) Muller said of his rival's new measurement of alpha, "A factor of three is a big deal. Let's not be shy about calling this a big accomplishment."
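As a quick sanity check (not part of the Quanta article), the dimensionless value of α can be recovered from four standard CODATA constants via the textbook definition α = e²/(4πε₀ħc); a minimal Python sketch:

```python
import math

# CODATA 2018 values (SI units)
e = 1.602176634e-19           # elementary charge, C (exact by definition)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34        # reduced Planck constant, J*s
c = 299792458                 # speed of light, m/s (exact by definition)

# Fine-structure constant: alpha = e^2 / (4 * pi * epsilon_0 * hbar * c)
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)

print(f"alpha   = {alpha:.11e}")
print(f"1/alpha = {1 / alpha:.8f}")  # ~137.03599908, matching 1/137.03599920611 to ~9 digits
```

The two exact constants (e and c) carry no uncertainty at all, so the precision of α computed this way is limited by ε₀ and ħ; the experiment reported above instead measures α directly from atomic recoil, which is why it can reach 81 parts per trillion.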

Math

South Africa's Lottery Probed As 5, 6, 7, 8, 9 and 10 Drawn (bbc.com) 195

AmiMoJo shares a report from the BBC: The winning numbers in South Africa's national lottery have caused a stir and sparked accusations of fraud over their unusual sequence. Tuesday's PowerBall lottery saw the numbers five, six, seven, eight and nine drawn, while the Powerball itself was, you've guessed it, 10. Some South Africans have alleged a scam and an investigation is under way. The organizers said 20 people purchased a winning ticket and won 5.7 million rand ($370,000; 278,000 pounds) each. Another 79 ticketholders won 6,283 rand each for guessing the sequence from five up to nine but missing the PowerBall.

The chances of winning South Africa's PowerBall lottery are one in 42,375,200 -- the number of different combinations when selecting five balls from a set of 50, plus an additional bonus ball from a pool of 20. The odds of the draw resulting in the numbers seen in Tuesday's televised live event are the same as any other combination. Competitions resulting in multiple winners are rare, but this may have something to do with this particular sequence.
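The figure quoted above checks out: there are "50 choose 5" ways to draw the main balls, multiplied by 20 possible PowerBalls. A short Python sketch of the arithmetic:

```python
from math import comb

# 5 main balls drawn from a set of 50
main_combinations = comb(50, 5)          # 2,118,760

# times one bonus PowerBall from a pool of 20
total_combinations = main_combinations * 20

print(main_combinations)   # 2118760
print(total_combinations)  # 42375200 -- odds of 1 in 42,375,200
```

Note that 5-6-7-8-9 plus 10 is exactly as likely as any other single combination; it only looks suspicious because humans notice patterned sequences.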

Education

Assigning Homework Exacerbates Class Divides, Researchers Find (vice.com) 312

"Education scholars say that math homework as it's currently assigned reinforces class divides in society and needs to change for good," according to Motherboard — citing a new working paper from education scholars: Status-reinforcing processes, or ones that fortify pre-existing divides, are a dime a dozen in education. Standardized testing, creating honors and AP tracks, and grouping students based on perceived ability all serve to disadvantage students who lack the support structures and parental engagement associated with affluence. Looking specifically at math homework, the authors of the new working paper wanted to see if homework was yet another status-reinforcing process. As it turns out, it was, and researchers say that the traditional solutions offered up to fix the homework gap won't work.

"Here, teachers knew that students were getting unequal support with homework," said Jessica Calarco, the first author of the paper and an associate professor of sociology at Indiana University. "And yet, because of these standard, taken-for-granted policies that treated homework as students' individual responsibilities, it erased those unequal contexts of support and led teachers to interpret and respond to homework in these status-reinforcing ways...."

The teachers interviewed for the paper acknowledged the unequal contexts affecting whether students could complete their math homework fully and correctly, Calarco said. However, that did not stop the same teachers from using homework as a way to measure students' abilities. "The most shocking and troubling part to me was hearing teachers write off students because they didn't get their homework done," Calarco said.... Part of the reason why homework can serve as a status-reinforcing process is that formal school policies and grading schemes treat it as a measure of a student's individual effort and responsibility, when many other factors affect completion, Calarco said....

"I'm not sure I want to completely come out and say that we need to ban homework entirely, but I think we need to really seriously reconsider when and how we assign it."

Businesses

IBM Apologizes For Firing Computer Pioneer For Being Transgender... 52 Years Later (forbes.com) 164

On August 29, 1968, IBM's CEO fired computer scientist and transgender pioneer Lynn Conway to avoid the public embarrassment of employing a transwoman. Nearly 52 years later, in an act that defines its present-day culture, IBM is apologizing and seeking forgiveness. Jeremy Alicandri writes via Forbes: On January 2, 1938, Lynn Conway's life began in Mount Vernon, NY. With a reported IQ of 155, Conway was an exceptional and inquisitive child who loved math and science during her teens. She went on to study physics at MIT and earned her bachelor's and master's degrees in electrical engineering at Columbia University's Engineering School. In 1964, Conway joined IBM Research, where she made major innovations in computer design, ensuring a promising career in the international conglomerate (IBM was the 7th largest corporation in the world at the time). Recently married and with two young daughters, she lived a seemingly perfect life. But Conway faced a profound existential challenge: she had been born as a boy.
[...]
[W]hile IBM knew of its key role in the Conway saga, the company remained silent. That all changed in August 2020. When writing an article on LGBTQ diversity in the automotive industry, I included Conway's story as an example of the costly consequences to employers that fail to promote an inclusive culture. I then reached out to IBM to learn if its stance had changed after 52 years. To my surprise, IBM admitted regrets and responsibility for Conway's firing, stating, "We deeply regret the hardship Lynn encountered." The company also explained that it was in communication with Conway for a formal resolution, which came two months later. Arvind Krishna, IBM's CEO, and other senior executives had determined that Conway should be recognized and awarded "for her lifetime body of technical achievements, both during her time at IBM and throughout her career."

Dario Gil, Director of IBM Research, who revealed the award during the online event, says, "Lynn was recently awarded the rare IBM Lifetime Achievement Award, given to individuals who have changed the world through technology inventions. Lynn's extraordinary technical achievements helped define the modern computing industry. She paved the way for how we design and make computing chips today -- and forever changed microelectronics, devices, and people's lives." The company also acknowledged that after Conway's departure in 1968, her research aided its own success. "In 1965 Lynn created the architectural level Advanced Computing System-1 simulator and invented a method that led to the development of a superscalar computer. This dynamic instruction scheduling invention was later used in computer chips, greatly improving their performance," a spokesperson stated.

Math

Computer Scientists Achieve 'Crown Jewel' of Cryptography (quantamagazine.org) 69

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work. Erica Klarreich, reporting for Quanta Magazine: In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team's approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment. "But I thought iO doesn't exist?" he said. At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is "one cryptographic primitive to rule them all," said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true. Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, "you could see a lot of negative vibes," said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, "Who will win: the makers or the breakers?" "There were the people who were the zealots, and they believed in [iO] and kept working on it," said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, "there was less and less of those people."

Now, Jain -- together with Huijia Lin of the University of Washington and Amit Sahai, Jain's adviser at UCLA -- has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only "standard" security assumptions. All cryptographic protocols rest on assumptions -- some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past. "Barring a really surprising development, these assumptions will stand," Ishai said. While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of "deniable" encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and "functional" encryption, in which you can give chosen users different levels of access to perform computations using your data. The new result should definitively silence the iO skeptics, Ishai said. "Now there will no longer be any doubts about the existence of indistinguishability obfuscation," he said. "It seems like a happy end."
