Education

After Canceling Exam, College Board Touts Record Number of AP CSP Exam Takers

theodp writes: Q. How many AP Computer Science Principles 'exam takers' would you have if you cancelled the AP CSP exam due to the coronavirus? A. More than 116,000!

That's according to the math behind a new College Board press release, which boasts, "In 2020, more than 116,000 students took the AP CSP Exam -- more than double the number of exam takers in the course's first year, and a 21% increase over the previous year. In 2020, 39,570 women took the AP CSP exam, nearly three times the number who tested in 2017." Which is somewhat confusing, since the College Board actually cancelled the 2020 AP CSP Exam last spring, explaining to students, "This year, there will be no end-of-year multiple-choice exam in Computer Science Principles [the exam was to have counted for 60% of students' scores] -- your AP score will be computed from the Create and Explore performance tasks only."

Still, Sunday's Washington Post reported the good PR news, as did tech-bankrolled College Board partner Code.org, which exclaimed, "Young women set records in computer science exams, again!" In 2018, Code.org lamented that many students enrolled in AP CSP wouldn't get college credit for the course "because they don't take the exam", so perhaps an increase in AP CSP scores awarded -- if not AP CSP exams taken -- should be added to the list of silver linings of the pandemic.
Medicine

Poor Countries Face Long Wait for Vaccines Despite Promises

With Americans, Britons and Canadians rolling up their sleeves to receive coronavirus vaccines, the route out of the pandemic now seems clear to many in the West, even if the rollout will take many months. But for poorer countries, the road will be far longer and rougher. From a report: The ambitious initiative known as COVAX created to ensure the entire world has access to COVID-19 vaccines has secured only a fraction of the 2 billion doses it hopes to buy over the next year, has yet to confirm any actual deals to ship out vaccines and is short on cash. The virus that has killed more than 1.6 million people has exposed vast inequities between countries, as fragile health systems and smaller economies were often hit harder. COVAX was set up by the World Health Organization, vaccines alliance GAVI and CEPI, a global coalition to fight epidemics, to avoid the international stampede for vaccines that has accompanied past outbreaks and would reinforce those imbalances.

But now some experts say the chances that coronavirus shots will be shared fairly between rich nations and the rest are fading fast. With vaccine supplies currently limited, developed countries, some of which helped fund the research with taxpayer money, are under tremendous pressure to protect their own populations and are buying up shots. Meanwhile, some poorer countries that signed up to the initiative are looking for alternatives because of fears it won't deliver. "It's simple math," said Arnaud Bernaert, head of global health at the World Economic Forum. Of the approximately 12 billion doses the pharmaceutical industry is expected to produce next year, about 9 billion shots have already been reserved by rich countries. "COVAX has not secured enough doses, and the way the situation may unfold is they will probably only get these doses fairly late." To date, COVAX's only confirmed, legally binding agreement is for up to 200 million doses, though that includes an option to order several times that number of additional doses, GAVI spokesman James Fulker said. It has agreements for another 500 million vaccines, but those are not legally binding.
Math

Are Fragments of Energy the Fundamental Building Blocks of the Universe? (theconversation.com)

hcs_$reboot shares a remarkable new theory from Larry M. Silverberg, an aerospace engineering professor at North Carolina State University (with colleague Jeffrey Eischen). They're proposing that matter is not made of particles (or even waves), as was long thought, but fragments of energy.

[W]hile the theories and math of waves and particles allow scientists to make incredibly accurate predictions about the universe, the rules break down at the largest and tiniest scales. Einstein proposed a remedy in his theory of general relativity. Using the mathematical tools available to him at the time, Einstein was able to better explain certain physical phenomena and also resolve a longstanding paradox relating to inertia and gravity. But instead of improving on particles or waves, he eliminated them as he proposed the warping of space and time. Using newer mathematical tools, my colleague and I have demonstrated a new theory that may accurately describe the universe... Instead of basing the theory on the warping of space and time, we considered that there could be a building block that is more fundamental than the particle and the wave....

Much to our surprise, we discovered that there were only a limited number of ways to describe a concentration of energy that flows. Of those, we found just one that works in accordance with our mathematical definition of flow. We named it a fragment of energy... Using the fragment of energy as a building block of matter, we then constructed the math necessary to solve physics problems... More than 100 [years] ago, Einstein had turned to two legendary problems in physics to validate general relativity: the ever-so-slight yearly shift — or precession — in Mercury's orbit, and the tiny bending of light as it passes the Sun... In both problems, we calculated the trajectories of the moving fragments and got the same answers as those predicted by the theory of general relativity. We were stunned.

Our initial work demonstrated how a new building block is capable of accurately modeling bodies from the enormous to the minuscule. Where particles and waves break down, the fragment of energy building block held strong. The fragment could be a single potentially universal building block from which to model reality mathematically — and update the way people think about the building blocks of the universe.

Math

Physicists Nail Down the 'Magic Number' That Shapes the Universe (quantamagazine.org)

Natalie Wolchover writes via Quanta Magazine: As fundamental constants go, the speed of light, c, enjoys all the fame, yet c's numerical value says nothing about nature; it differs depending on whether it's measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It's a pure number that shapes the universe to an astonishing degree -- "a magic number that comes to us with no understanding," as Richard Feynman described it. Paul Dirac considered the origin of the number "the most fundamental unsolved problem of physics."

Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. [...] The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn't exist.
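
One way to see why α is dimensionless: it is assembled from other fundamental constants whose units cancel. For reference (this is the standard textbook definition, not something from the article), in SI units:

\[
\alpha \;=\; \frac{e^2}{4\pi\varepsilon_0\hbar c} \;\approx\; \frac{1}{137.036}
\]

where $e$ is the elementary charge, $\varepsilon_0$ the vacuum permittivity, $\hbar$ the reduced Planck constant and $c$ the speed of light.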

Today, in a new paper in the journal Nature, a team of four physicists led by Saida Guellati-Khelifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant's value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.) With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement, made in 2018 by Holger Müller's group at the University of California, Berkeley, the main competition. (Guellati-Khelifa made the most precise measurement before Müller's in 2011.) Müller said of his rival's new measurement of alpha, "A factor of three is a big deal. Let's not be shy about calling this a big accomplishment."

Math

South Africa's Lottery Probed As 5, 6, 7, 8, 9 and 10 Drawn (bbc.com)

AmiMoJo shares a report from the BBC: The winning numbers in South Africa's national lottery have caused a stir and sparked accusations of fraud over their unusual sequence. Tuesday's PowerBall lottery saw the numbers five, six, seven, eight and nine drawn, while the PowerBall itself was, you've guessed it, 10. Some South Africans have alleged a scam and an investigation is under way. The organizers said 20 people purchased a winning ticket and won 5.7 million rand ($370,000; 278,000 pounds) each. Another 79 ticketholders won 6,283 rand each for guessing the sequence from five up to nine but missing the PowerBall.

The chances of winning South Africa's PowerBall lottery are one in 42,375,200 -- the number of different combinations when selecting five balls from a set of 50, plus an additional bonus ball from a pool of 20. The odds of the draw resulting in the numbers seen in Tuesday's televised live event are the same as any other combination. Draws producing multiple jackpot winners are rare, though, and the 20 winners here likely reflect how many players deliberately pick a memorable run of consecutive numbers.
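
The quoted odds are easy to verify. A minimal Python check (the figures are from the article; the code just reproduces the combinatorics):

```python
from math import comb

main = comb(50, 5)   # 2,118,760 ways to choose the five main balls from 50
total = main * 20    # each paired with one of 20 possible PowerBall numbers

print(f"{total:,}")  # 42,375,200 -- one in 42,375,200, matching the article
```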

Education

Assigning Homework Exacerbates Class Divides, Researchers Find (vice.com)

"Education scholars say that math homework as it's currently assigned reinforces class divides in society and needs to change for good," according to Motherboard — citing a new working paper from education scholars: Status-reinforcing processes, or ones that fortify pre-existing divides, are a dime a dozen in education. Standardized testing, creating honors and AP tracks, and grouping students based on perceived ability all serve to disadvantage students who lack the support structures and parental engagement associated with affluence. Looking specifically at math homework, the authors of the new working paper wanted to see if homework was yet another status-reinforcing process. As it turns out, it was, and researchers say that the traditional solutions offered up to fix the homework gap won't work.

"Here, teachers knew that students were getting unequal support with homework," said Jessica Calarco, the first author of the paper and an associate professor of psychology at Indiana University. "And yet, because of these standard, taken-for-granted policies that treated homework as students' individual responsibilities, it erased those unequal contexts of support and led teachers to interpret and respond to homework in these status-reinforcing ways...."

The teachers interviewed for the paper acknowledged the unequal contexts affecting whether students could complete their math homework fully and correctly, Calarco said. However, that did not stop the same teachers from using homework as a way to measure students' abilities. "The most shocking and troubling part to me was hearing teachers write off students because they didn't get their homework done," Calarco said.... Part of the reason why homework can serve as a status-reinforcing process is that formal school policies and grading schemes treat it as a measure of a student's individual effort and responsibility, when many other factors affect completion, Calarco said....

"I'm not sure I want to completely come out and say that we need to ban homework entirely, but I think we need to really seriously reconsider when and how we assign it."

Businesses

IBM Apologizes For Firing Computer Pioneer For Being Transgender... 52 Years Later (forbes.com)

On August 29, 1968, IBM's CEO fired computer scientist and transgender pioneer Lynn Conway to avoid the public embarrassment of employing a trans woman. Nearly 52 years later, in an act that defines its present-day culture, IBM is apologizing and seeking forgiveness. Jeremy Alicandri reports via Forbes: On January 2, 1938, Lynn Conway's life began in Mount Vernon, NY. With a reported IQ of 155, Conway was an exceptional and inquisitive child who loved math and science during her teens. She went on to study physics at MIT and earned her bachelor's and master's degrees in electrical engineering at Columbia University's Engineering School. In 1964, Conway joined IBM Research, where she made major innovations in computer design, ensuring a promising career in the international conglomerate (IBM was the 7th largest corporation in the world at the time). Recently married and with two young daughters, she lived a seemingly perfect life. But Conway faced a profound existential challenge: she had been born as a boy.
[...]
[W]hile IBM knew of its key role in the Conway saga, the company remained silent. That all changed in August 2020. When writing an article on LGBTQ diversity in the automotive industry, I included Conway's story as an example of the costly consequences to employers that fail to promote an inclusive culture. I then reached out to IBM to learn if its stance had changed after 52 years. To my surprise, IBM admitted regrets and responsibility for Conway's firing, stating, "We deeply regret the hardship Lynn encountered." The company also explained that it was in communication with Conway for a formal resolution, which came two months later. Arvind Krishna, IBM's CEO, and other senior executives had determined that Conway should be recognized and awarded "for her lifetime body of technical achievements, both during her time at IBM and throughout her career."

Dario Gil, Director of IBM Research, who revealed the award during the online event, says, "Lynn was recently awarded the rare IBM Lifetime Achievement Award, given to individuals who have changed the world through technology inventions. Lynn's extraordinary technical achievements helped define the modern computing industry. She paved the way for how we design and make computing chips today -- and forever changed microelectronics, devices, and people's lives." The company also acknowledged that after Conway's departure in 1968, her research aided its own success. "In 1965 Lynn created the architectural level Advanced Computing System-1 simulator and invented a method that led to the development of a superscalar computer. This dynamic instruction scheduling invention was later used in computer chips, greatly improving their performance," a spokesperson stated.

Math

Computer Scientists Achieve 'Crown Jewel' of Cryptography (quantamagazine.org)

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work. Erica Klarreich, reporting for Quanta Magazine: In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team's approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment. "But I thought iO doesn't exist?" he said. At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is "one cryptographic primitive to rule them all," said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true. Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, "you could see a lot of negative vibes," said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, "Who will win: the makers or the breakers?" "There were the people who were the zealots, and they believed in [iO] and kept working on it," said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, "there was less and less of those people."

Now, Jain -- together with Huijia Lin of the University of Washington and Amit Sahai, Jain's adviser at UCLA -- has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only "standard" security assumptions. All cryptographic protocols rest on assumptions -- some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past. "Barring a really surprising development, these assumptions will stand," Ishai said. While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of "deniable" encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and "functional" encryption, in which you can give chosen users different levels of access to perform computations using your data. The new result should definitively silence the iO skeptics, Ishai said. "Now there will no longer be any doubts about the existence of indistinguishability obfuscation," he said. "It seems like a happy end."

AI

AI Has Cracked a Key Mathematical Puzzle For Understanding Our World (technologyreview.com)

An anonymous reader shares a report: Unless you're a physicist or an engineer, there really isn't much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I've never used them since in the real world. But partial differential equations, or PDEs, are also kind of magical. They're a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes. The catch is PDEs are notoriously hard to solve. And here, the meaning of "solve" is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. "Solving" Navier-Stokes allows you to take a snapshot of the air's motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.
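
To make "solving" a PDE concrete, here is a minimal sketch -- not the Caltech neural-operator method, just a textbook numerical illustration: an explicit finite-difference solver for the 1D heat equation, stepping an initial temperature profile forward in time.

```python
import numpy as np

# Toy explicit finite-difference solver for the 1D heat equation u_t = kappa * u_xx.
# This shows what "solving" a PDE means numerically; real turbulence simulations
# (Navier-Stokes) are vastly harder, which is the article's point.
kappa = 1.0                 # diffusivity
nx, nt = 100, 500           # grid points in space, steps in time
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / kappa    # step size chosen to satisfy the explicit stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200 * (x - 0.5) ** 2)   # initial condition: a hot spot in the middle

for _ in range(nt):
    # second spatial derivative via central differences (interior points only)
    u[1:-1] += dt * kappa * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[0] = u[-1] = 0.0              # fixed (Dirichlet) boundary conditions

print(u.max())  # the peak temperature decays as heat diffuses outward
```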

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It's also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering. Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It's also much more generalizable, capable of solving entire families of PDEs -- such as the Navier-Stokes equation for any type of fluid -- without needing retraining. Finally, it is 1,000 times faster than traditional mathematical solvers, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That's right. Bring it on.

Math

Microsoft Overhauls Excel With Live Custom Data Types (theverge.com)

Microsoft is overhauling Excel with the ability to support custom live data types. The Verge reports: You could import the data type for Seattle, for example, and then create a formula that references that single cell to pull out information on the population of Seattle. These data types work by cramming a set of structured data into a single cell in Excel that can then be referenced by the rest of the spreadsheet. Data can also be refreshed to keep it up to date. If you're a student who is researching the periodic table, for example, you could create a cell for each element and easily pull out individual data from there.

Microsoft is bringing more than 100 new data types into Excel for Microsoft 365 Personal or Family subscribers. Excel users will be able to track stocks, pull in nutritional information for dieting plans, and much more, thanks to data from Wolfram Alpha's service. This is currently available for Office beta testers in the Insiders program. Where these custom data types will be most powerful is obviously for businesses that rely on Excel daily. Microsoft is leveraging its Power BI service to act as the connector to bring sources of data into Excel data types on the commercial side, allowing businesses to connect up a variety of data. This could be hierarchical data or even references to other data types and images. Businesses will even be able to convert existing cells into linked data types, making data analysis a lot easier.

Power BI won't be the only way for this feature to work, though. When you import data into Excel, you can now transform it into a data type with Power Query. That could include information from files, databases, websites, and more. The data that's imported can be cleaned up and then converted into a data type to be used in spreadsheets. If you've pulled in data using Power Query, it's easy to refresh the data from its original source. [...] These new Power BI data types will be available in Excel for Windows for all Microsoft 365 / Office 365 subscribers that also have a Power BI Pro service plan. Power Query data types are also rolling out to subscribers. On the consumer side, Wolfram Alpha data types are currently available in preview for Office insiders and should be available to all Microsoft 365 subscribers soon.

Math

Winning Bid: How Auction Theory Took the Nobel Memorial Prize in Economics (ft.com)

Tim Harford, writing for Financial Times: A well-designed auction forces bidders to reveal the truth about their own estimate of the prize's value. At the same time, the auction shares that information with the other bidders. And it sets the price accordingly. It is quite a trick. But, in practice, it is a difficult trick to get right. In the 1990s, the US Federal government turned to auction theorists -- Milgrom and Wilson prominent among them -- for advice on auctioning radio-spectrum rights. "The theory that we had in place had only a little bit to do with the problems that they actually faced," Milgrom recalled in an interview in 2007. "But the proposals that were being made by the government were proposals that we were perfectly capable of analysing the flaws in and improving."
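
The truth-revealing property Harford describes is easiest to see in a sealed-bid second-price (Vickrey) auction -- a standard textbook mechanism, far simpler than the spectrum designs Milgrom and Wilson worked on. A quick sketch, using made-up valuations:

```python
def second_price_auction(bids):
    """Winner is the highest bidder; they pay only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Each bidder's true value for the prize (hypothetical numbers).
values = {"A": 120, "B": 100, "C": 80}

# Bidding your true value is a dominant strategy here: shading your bid down
# never lowers the price you pay; it only risks losing an auction you value.
winner, price = second_price_auction(values)
print(winner, price)  # A wins and pays 100, the second-highest bid
```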

The basic challenge with radio-spectrum auctions is that many prizes are on offer, and bidders desire only certain combinations. A TV company might want the right to use Band A, or Band B, but not both. Or the right to broadcast in the east of England, but only if they also had the right to broadcast in the west. Such combinatorial auctions are formidably challenging to design, but Milgrom and Wilson got to work. Joshua Gans, a former student of Milgrom's who is now a professor at the University of Toronto, praises both men for their practicality. Their theoretical work is impressive, he said, "but they realised that when the world got too complex, they shouldn't adhere to proving strict theorems."

Math

Computer Scientists Break Traveling Salesperson Record (quantamagazine.org)

After 44 years, there's finally a better way to find approximate solutions to the notoriously difficult traveling salesperson problem. From a report: When Nathan Klein started graduate school two years ago, his advisers proposed a modest plan: to work together on one of the most famous, long-standing problems in theoretical computer science. Even if they didn't manage to solve it, they figured, Klein would learn a lot in the process. He went along with the idea. "I didn't know to be intimidated," he said. "I was just a first-year grad student -- I don't know what's going on." Now, in a paper posted online in July, Klein and his advisers at the University of Washington, Anna Karlin and Shayan Oveis Gharan, have finally achieved a goal computer scientists have pursued for nearly half a century: a better way to find approximate solutions to the traveling salesperson problem. This optimization problem, which seeks the shortest (or least expensive) round trip through a collection of cities, has applications ranging from DNA sequencing to ride-sharing logistics. Over the decades, it has inspired many of the most fundamental advances in computer science, helping to illuminate the power of techniques such as linear programming. But researchers have yet to fully explore its possibilities -- and not for want of trying. The traveling salesperson problem "isn't a problem, it's an addiction," as Christos Papadimitriou, a leading expert in computational complexity, is fond of saying.

Most computer scientists believe that there is no algorithm that can efficiently find the best solutions for all possible combinations of cities. But in 1976, Nicos Christofides came up with an algorithm that efficiently finds approximate solutions -- round trips that are at most 50% longer than the best round trip. At the time, computer scientists expected that someone would soon improve on Christofides' simple algorithm and come closer to the true solution. But the anticipated progress did not arrive. "A lot of people spent countless hours trying to improve this result," said Amin Saberi of Stanford University. Now Karlin, Klein and Oveis Gharan have proved that an algorithm devised a decade ago beats Christofides' 50% factor, though they were only able to subtract 0.2 billionth of a trillionth of a trillionth of a percent. Yet this minuscule improvement breaks through both a theoretical logjam and a psychological one. Researchers hope that it will open the floodgates to further improvements.
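
To see the gap between exact and approximate solutions at toy scale, here is a sketch -- not Christofides' algorithm, and certainly not the new one -- comparing brute force over all tours against a greedy nearest-neighbor heuristic on random points:

```python
import itertools, math, random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try all (n-1)! tours starting from city 0. Feasible only for tiny n,
# which is exactly why approximation algorithms matter.
best = min(itertools.permutations(range(1, 8)), key=lambda p: tour_length((0,) + p))
exact = tour_length((0,) + best)

# Heuristic: always hop to the nearest unvisited city.
unvisited, tour = set(range(1, 8)), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(pts[tour[-1]], pts[j]))
    tour.append(nxt)
    unvisited.remove(nxt)
greedy = tour_length(tour)

print(f"exact {exact:.3f}  greedy {greedy:.3f}  ratio {greedy / exact:.2f}")
```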

Windows

ZDNet Argues Linux-Based Windows 'Makes Perfect Sense' (zdnet.com)

Last week open-source advocate Eric S. Raymond argued Microsoft was quietly switching over to a Linux kernel that emulates Windows. "He's on to something," says ZDNet's contributing editor Steven J. Vaughan-Nichols: I've long thought that Microsoft was considering migrating the Windows interface to running on the Linux kernel. Why...? [Y]ou can run standard Linux programs now on WSL2 without any trouble.

That's because Linux is well on its way to becoming a first-class citizen on the Windows desktop. Multiple Linux distros, starting with Ubuntu, Red Hat Fedora, and SUSE Linux Enterprise Desktop (SLED), now run smoothly on WSL2. That's because Microsoft has replaced its WSL1 translation layer, which converted Linux kernel calls into Windows calls, with WSL2. With WSL2 Microsoft's own Linux kernel is running on a thin version of the Hyper-V hypervisor. That's not all. With the recent Windows 10 Insider Preview build 20211, you can now access Linux file systems, such as ext4, from Windows File Manager and PowerShell. On top of that, Microsoft developers are making it easy to run Linux graphical applications on Windows...

[Raymond] also observed, correctly, that Microsoft no longer depends on Windows for its cash flow but on its Azure cloud offering. Which, by the way, is running more Linux instances than it is Windows Server instances. So, that being the case, why should Microsoft keep pouring money into the notoriously trouble-prone Windows kernel — over 50 serious bugs fixed in the last Patch Tuesday roundup — when it can use the free-as-in-beer Linux kernel? Good question. He thinks Microsoft can do the math and switch to Linux.

I think he's right. Besides his points, there are others. Microsoft already wants you to replace your existing PC-based software, like Office 2019, with software-as-a-service (SaaS) programs like Office 365. Microsoft also encourages you to move your voice, video, chat, and texting to Microsoft's Azure Communication Services even if you don't use Teams. With SaaS programs, Microsoft doesn't care what operating system you're running. They're still going to get paid whether you run Office 365 on Windows, a Chromebook, or, yes, Linux.

I see two possible paths ahead for Windows. First, there's Linux-based Windows. It simply makes financial sense. Second, the existing Windows desktop could be replaced by the Windows Virtual Desktop or other Desktop-as-a-Service (DaaS) offerings.... Google chose to save money and increase security by using Linux as the basis for Chrome OS. This worked out really well for Google. It can for Microsoft as well. Let's take a blast from the past and call it Lindows.

Math

Teenager on TikTok Resurrects an Essential Question: What is Math? (smithsonianmag.com)

Long-time Slashdot reader fahrbot-bot shares a story that all started with a high school student's innocuous question on TikTok, leading academic mathematicians and philosophers to weigh in on "a very ancient and unresolved debate in the philosophy of science," reports Smithsonian magazine.

"What, exactly, is math?" Is it invented, or discovered? And are the things that mathematicians work with — numbers, algebraic equations, geometry, theorems and so on — real? Some scholars feel very strongly that mathematical truths are "out there," waiting to be discovered — a position known as Platonism.... Many mathematicians seem to support this view. The things they've discovered over the centuries — that there is no highest prime number; that the square root of two is an irrational number; that the number pi, when expressed as a decimal, goes on forever — seem to be eternal truths, independent of the minds that found them....

Other scholars — especially those working in other branches of science — view Platonism with skepticism. Scientists tend to be empiricists; they imagine the universe to be made up of things we can touch and taste and so on; things we can learn about through observation and experiment. The idea of something existing "outside of space and time" makes empiricists nervous: It sounds embarrassingly like the way religious believers talk about God, and God was banished from respectable scientific discourse a long time ago. Platonism, as mathematician Brian Davies has put it, "has more in common with mystical religions than it does with modern science." The fear is that if mathematicians give Plato an inch, he'll take a mile. If the truth of mathematical statements can be confirmed just by thinking about them, then why not ethical problems, or even religious questions? Why bother with empiricism at all...?

Platonism has various alternatives. One popular view is that mathematics is merely a set of rules, built up from a set of initial assumptions — what mathematicians call axioms... But this view has its own problems. If mathematics is just something we dream up from within our own heads, why should it "fit" so well with what we observe in nature...? Theoretical physicist Eugene Wigner highlighted this issue in a famous 1960 essay titled, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." Wigner concluded that the usefulness of mathematics in tackling problems in physics "is a wonderful gift which we neither understand nor deserve."

Medicine

'Why Modeling the Spread of COVID-19 Is So Damn Hard' (ieee.org)

Slashdot reader the_newsbeagle writes: At the beginning of the pandemic, modelers pulled out everything they had to predict the spread of the virus. This article explains the three main types of models used: 1) compartmental models that sort people into categories of exposure and recovery, 2) data-driven models that often use neural networks to make predictions, and 3) agent-based models that are something like a Sim Pandemic.
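
For a sense of what the first type looks like, here is a minimal compartmental SIR model (Susceptible-Infectious-Recovered); the parameters below are made up for illustration, not fitted to COVID-19 data:

```python
# Toy SIR compartmental model: people move S -> I -> R at fixed rates.
beta, gamma = 0.3, 0.1       # transmission rate, recovery rate (R0 = beta/gamma = 3)
S, I, R = 0.999, 0.001, 0.0  # fractions of the population
dt = 0.1

for _ in range(int(200 / dt)):
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"final: S={S:.3f} I={I:.4f} R={R:.3f}")  # ~94% eventually infected for R0=3
```
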
"Researchers say they've learned a lot of lessons modeling this pandemic, lessons that will carry over to the next..." the article points out: Finally, researchers emphasize the need for agility. Jarad Niemi, an associate professor of statistics at Iowa State University who helps run the forecast hub used by the CDC, says software packages have made it easier to build models quickly, and the code-sharing site GitHub lets people share and compare their models. COVID-19 is giving modelers a chance to try out all their newest tools, says biologist Lauren Ancel Meyers, the head of the COVID-19 Modeling Consortium at the University of Texas at Austin. "The pace of innovation, the pace of development, is unlike ever before," she says. "There are new statistical methods, new kinds of data, new model structures."

"If we want to beat this virus," says Mikhail Prokopenko, a computer scientist at the University of Sydney, "we have to be as adaptive as it is."

Idle

Researcher Discusses Whether Time Travel Could Prevent a Pandemic (popularmechanics.com)

University of Queensland student Germain Tobar, who worked with UQ physics professor Fabio Costa on a new peer-reviewed paper, "says he has mathematically proven the physical feasibility of a specific kind of time travel" without paradoxes, reports Popular Mechanics: Time travel discussion focuses on closed time-like curves, something Albert Einstein first posited. And Tobar and Costa say that as long as just two pieces of an entire scenario within a closed time-like curve are still in "causal order" when you leave, the rest is subject to local free will... In a university statement, Costa illustrates the science with an analogy:


"Say you travelled in time, in an attempt to stop COVID-19's patient zero from being exposed to the virus. However if you stopped that individual from becoming infected, that would eliminate the motivation for you to go back and stop the pandemic in the first place. This is a paradox, an inconsistency that often leads people to think that time travel cannot occur in our universe. [L]ogically it's hard to accept because that would affect our freedom to make any arbitrary action. It would mean you can time travel, but you cannot do anything that would cause a paradox to occur...."


But the real truth, in terms of the mathematical outcomes, is more like another classic parable: the monkey's paw. Be careful what you wish for, and be careful what you time travel for. Tobar explains in the statement:


"In the coronavirus patient zero example, you might try and stop patient zero from becoming infected, but in doing so you would catch the virus and become patient zero, or someone else would. No matter what you did, the salient events would just recalibrate around you. Try as you might to create a paradox, the events will always adjust themselves, to avoid any inconsistency."

Digital

Researchers Found the Manual For the World's Oldest Surviving Computer (engadget.com)

Researchers will be able to gain a deeper understanding of what's considered the world's oldest surviving (digital) computer after its long-lost user manual was unearthed. Engadget reports: The Z4, which was built in 1945, runs on tape, takes up most of a room and needs several people to operate it. The machine now resides at the Deutsches Museum in Munich, but it hasn't been used in quite some time. An archivist at ETH Zurich, Evelyn Boesch, discovered the manual among her father's documents in March, according to retired lecturer Herbert Bruderer. Rene Boesch worked with the Swiss Aeronautical Engineering Association, which was based at the university's Institute for Aircraft Statics and Aircraft Construction. The Z4 was housed there in the early 1950s.

Among Boesch's documents were notes on math problems the Z4 solved that were linked to the development of the P-16 jet fighter. "These included calculations on the trajectory of rockets, on aircraft wings, on flutter vibrations [and] on nosedive," Bruderer wrote in an Association for Computing Machinery blog post.

Books

Bill Gates vs. Steve Jobs: the Books They Recommended (mostrecommendedbooks.com)

Slashdot has featured "the 61 books Elon Musk has recommended on Twitter" as well as the 41 books Mark Zuckerberg recommended on Facebook. Both lists were compiled by a slick web site (with Amazon referrer codes) called "Most Recommended Books." But they've also created pages showing books recommended by over 400 other public figures — including Bill Gates and the late Steve Jobs — which provide surprisingly revealing glimpses into the minds of two very different men.

Here are some of the highlights...
Math

UK Mathematician Wins Richest Prize in Academia For His Work On Stochastic Analysis (theguardian.com)

Lanodonal writes: A mathematician who tamed a nightmarish family of equations that behave so badly they make no sense has won the most lucrative prize in academia. Martin Hairer, an Austrian-British researcher at Imperial College London, is the winner of the 2021 Breakthrough prize for mathematics, an annual $3m award that has come to rival the Nobels in terms of kudos and prestige. Hairer landed the prize for his work on stochastic analysis, a field that describes how random effects turn the maths of things like stirring a cup of tea, the growth of a forest fire, or the spread of a water droplet that has fallen on a tissue into a fiendishly complex problem. His major work, a 180-page treatise that introduced the world to "regularity structures," so stunned his colleagues that one suggested it must have been transmitted to Hairer by a more intelligent alien civilisation.
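
Hairer's regularity structures are far beyond a code snippet, but the flavor of stochastic analysis -- deterministic dynamics continually kicked by noise -- can be sketched with the standard Euler-Maruyama method on a simple stochastic differential equation (an Ornstein-Uhlenbeck process here, with made-up parameters):

```python
import random, math

# Euler-Maruyama simulation of dX = -theta * X dt + sigma dW:
# a value relaxing toward zero while being continually perturbed by noise.
theta, sigma = 1.0, 0.5
dt, steps = 0.01, 1000
x = 1.0

random.seed(0)
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))  # Brownian increment
    x += -theta * x * dt + sigma * dW

print(x)  # one noisy sample path's endpoint; rerun for different outcomes
```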

Hairer, who rents a London flat with his wife and fellow Imperial mathematician, Xue-Mei Li, heard he had won the prize in a Skype call while the UK was still in lockdown. "It was completely unexpected," he said. "I didn't think about it at all, so it was a complete shock. We couldn't go out or anything, so we celebrated at home." The award is one of several Breakthrough prizes announced each year by a foundation set up by the Israeli-Russian investor Yuri Milner and Facebook's Mark Zuckerberg. A committee of previous recipients chooses the winners who are all leading lights in mathematics and the sciences. Other winners announced on Thursday include a Hong Kong scientist, Dennis Lo, who was inspired by a 3D Harry Potter movie to develop a test for genetic mutations in DNA shed by unborn babies, and a team of physicists whose experiments revealed that if extra dimensions of reality exist, they are curled up smaller than a third of a hair's width.

Math

Mathematicians Finally Answer 2,000-Year-Old Question About Dodecahedrons (quantamagazine.org)

NCamero (Slashdot reader #35,481) brings some news from the world of 12-sided dodecahedrons: Quanta magazine reports that a trio of mathematicians has resolved one of the most basic questions about the dodecahedron. On the cube, tetrahedron, octahedron and icosahedron, no straight path starting from a corner ever returns to that corner without passing through another corner along the way. On the dodecahedron, such paths exist.
Mathematicians studied dodecahedrons for over 2,000 years without solving the problem, reports Quanta magazine. But now... Jayadev Athreya, David Aulicino and Patrick Hooper have shown that an infinite number of such paths do in fact exist on the dodecahedron. Their paper, published in May in Experimental Mathematics, shows that these paths can be divided into 31 natural families. The solution required modern techniques and computer algorithms.

"Twenty years ago, [this question] was absolutely out of reach; 10 years ago it would require an enormous effort of writing all necessary software, so only now all the factors came together," wrote Anton Zorich, of the Institute of Mathematics of Jussieu in Paris, in an email.
