AI

Calculations Show It'll Be Impossible To Control a Super-Intelligent AI (sciencealert.com) 194

schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know the answer for some specific programs, it's logically impossible to find a method that will give us the answer for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not -- it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
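To make Turing's diagonal argument concrete, here is a minimal Python sketch of the contradiction; the halts() oracle is hypothetical, and the whole point of the proof is that it can never actually be written:

```python
# Hypothetical oracle: True if running program(argument) eventually halts.
# Turing's proof shows no such total, always-correct function can exist.
def halts(program, argument):
    ...  # placeholder; assumed correct only for the sake of contradiction

def paradox(program):
    """Do the opposite of whatever the oracle predicts about `program`
    run on its own source."""
    if halts(program, program):
        while True:   # loop forever if the oracle says it halts
            pass
    else:
        return        # halt immediately if the oracle says it loops

# Now ask about paradox(paradox):
#  - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
#  - if it is False, then paradox(paradox) halts.
# Either way the oracle is wrong, so a general halts() cannot exist.
```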

The alternative to teaching AI some ethics and telling it not to destroy the world -- something which no algorithm can be absolutely certain of doing, the researchers say -- is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence -- the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

AI

New XPrize Challenge: Predicting Covid-19's Spread and Prescribing Interventions (ieee.org) 22

Slashdot reader the_newsbeagle shares an article from IEEE Spectrum: Many associate XPrize with a $10-million award offered in 1996 to motivate a breakthrough in private space flight. But the organization has since held other competitions related to exploration, ecology, and education. And in November, they launched the Pandemic Response Challenge, which will culminate in a $500,000 award to be split between two teams that not only best predict the continuing global spread of COVID-19, but also prescribe policies to curtail it...

For Phase 1, teams had to submit prediction models by 22 December... Up to 50 teams will make it to Phase 2, where they must submit a prescription model... The top two teams will split half a million dollars. The competition may not end there. Amir Banifatemi, XPrize's chief innovation and growth officer, says a third phase might test models on vaccine deployment prescriptions. And beyond the contest, some cities or countries might put some of the Phase 2 or 3 models into practice, if Banifatemi can find adventurous takers.

The organizers expect a wide variety of solutions. Banifatemi says the field includes teams from AI strongholds such as Stanford, Microsoft, MIT, Oxford, and Quebec's Mila, but one team consists of three women in Tunisia. In all, 104 teams from 28 countries have registered. "We're hoping that this competition can be a springboard for developing solutions for other really big problems as well," Miikkulainen says. Those problems include pandemics, global warming, and challenges in business, education, and healthcare. In this scenario, "humans are still in charge," he emphasizes. "They still decide what they want, and AI gives them the best alternatives from which the decision-makers choose."

But Miikkulainen hopes that data science can help humanity find its way. "Maybe in the future, it's considered irresponsible not to use AI for making these policies," he says.

For the Covid-19 competition, Banifatemi emphasized that one goal was "to make the resulting insights available freely to everyone, in an open-source manner — especially for all those communities that may not have access to data and epidemiology divisions, statisticians, or data scientists."
Intel

Linus Torvalds Rails At Intel For 'Killing' the ECC Industry (theregister.com) 218

An anonymous reader quotes a report from The Register: Linux creator Linus Torvalds has accused Intel of preventing widespread use of error-correcting memory and being "instrumental in killing the whole ECC industry with its horribly bad market segmentation." ECC stands for error-correcting code. ECC memory uses additional parity bits to verify that the data read from memory is the same as the data that was written. Without this check, memory is vulnerable to occasional corruption where a bit is flipped spontaneously, for example, by background radiation. Memory can also be attacked using a technique called Rowhammer, where rapid repeated reads of the same memory locations can cause adjacent locations to change their state. ECC memory solves these problems and has been available for over 50 years, yet most personal computers do not use it. Cost is a factor, but what riles Torvalds is that Intel has made ECC support a feature of its Xeon range, aimed at servers and high-end workstations, and does not support it in other ranges such as the Core series.
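As a toy illustration of how extra parity bits let hardware detect and repair a single flipped bit, here is a minimal Hamming(7,4) encoder and corrector in Python; real ECC DIMMs use wider SEC-DED codes over 64-bit words, so this is only a sketch of the principle:

```python
def hamming74_encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):
    """Locate and fix a single flipped bit, returning the repaired codeword."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error, else the 1-based error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word.copy()
corrupted[4] ^= 1                     # simulate a single bit flip, e.g. from background radiation
assert hamming74_correct(corrupted) == word
```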

The topic came up in a discussion about AMD's new Zen 3 Ryzen 9 5000 series processors on the Real World Tech forum site. AMD has semi-official ECC support in most of its processors. "I don't really see AMD's unofficial ECC support being a big deal," said an unwary contributor. "ECC absolutely matters," retorted Torvalds. "Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously. And if you don't believe me, then just look at multiple generations of rowhammer, where each time Intel and memory manufacturers bleated about how it's going to be fixed next time... And yes, that was -- again -- entirely about the misguided and arse-backwards policy of 'consumers don't need ECC', which made the market for ECC memory go away."

The accusation is significant particularly at a time when security issues are high on the agenda. The suggestion is that Intel's marketing decisions have held back adoption of a technology that makes users more secure -- though rowhammer is only one of many potential attack mechanisms -- as well as making PCs more stable. "The arguments against ECC were always complete and utter garbage. Now even the memory manufacturers are starting to do ECC internally because they finally owned up to the fact that they absolutely have to," said Torvalds. Torvalds said that Xeon prices deterred usage. "I used to look at the Xeon CPU's, and I could never really make the math work. The Intel math was basically that you get twice the CPU for five times the price. So for my personal workstations, I ended up using Intel consumer CPU's." Prices, he said, dropped last year "because of Ryzen and Threadripper... but it was a 'too little, much too late' situation." By way of mitigation, he added that "apart from their ECC stance I was perfectly happy with [Intel's] consumer offerings."

Programming

Study Finds Brain Activity of Coders Isn't Like Language or Math (boingboing.net) 88

"When you do computer programming, what sort of mental work are you doing?" asks science/tech journalist Clive Thompson: For a long time, folks have speculated on this. Since coding involves pondering hierarchies of symbols, maybe the mental work is kinda like writing or reading? Others have speculated it's more similar to the way our brains process math and puzzles. A group of MIT neuroscientists recently did fMRI brain-scans of young adults while they were solving a small coding challenge using a textual programming language (Python) and a visual one (Scratch Jr.). The results?

The brain activity wasn't similar to when we process language. Instead, coding seems to activate the "multiple demand network," which — as the scientists note in a public-relations writeup of their work — "is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles."

So, coding is more like doing math than processing language?

Sorrrrrrt of ... but not exactly so. The scientists saw activity patterns that differ from those you'd see during math, too.

The upshot: Coding — in this (very preliminary!) work, anyway — looks to be a little different from either language or math. As they note in a media release...

"Understanding computer code seems to be its own thing...."

Just anecdotally — having interviewed hundreds of coders and computer scientists for my book CODERS — I've met amazing programmers and computer scientists with all manner of intellectual makeups. There were math-heads, and there were people who practically counted on their fingers. There were programmers obsessed with — and eloquent in — language, and ones gently baffled by written and spoken communication. Lots of musicians, lots of folks who slid in via a love of art and visual design, then whose brains just seized excitedly on the mouthfeel of algorithms.

Math

The Lasting Lessons of John Conway's Game of Life 84

Siobhan Roberts, writing for The New York Times: In March of 1970, Martin Gardner opened a letter jammed with ideas for his Mathematical Games column in Scientific American. Sent by John Horton Conway, then a mathematician at the University of Cambridge, the letter ran 12 pages, typed hunt-and-peck style. Page 9 began with the heading "The game of life." It described an elegant mathematical model of computation -- a cellular automaton, a little machine, of sorts, with groups of cells that evolve from iteration to iteration, as a clock advances from one second to the next. Dr. Conway, who died in April, having spent the latter part of his career at Princeton, sometimes called Life a "no-player, never-ending game." Mr. Gardner called it a "fantastic solitaire pastime." The game was simple: Place any configuration of cells on a grid, then watch what transpires according to three rules that dictate how the system plays out.

Birth rule: An empty, or "dead," cell with precisely three "live" neighbors (full cells) becomes live.
Death rule: A live cell with zero or one neighbors dies of isolation; a live cell with four or more neighbors dies of overcrowding.
Survival rule: A live cell with two or three neighbors remains alive.
With each iteration, some cells live, some die and "Life-forms" evolve, one generation to the next. Among the first creatures to emerge was the glider -- a five-celled organism that moved across the grid with a diagonal wiggle and proved handy for transmitting information. It was discovered by a member of Dr. Conway's research team, Richard Guy, in Cambridge, England. The glider gun, producing a steady stream of gliders, was discovered soon after by Bill Gosper, then at the Massachusetts Institute of Technology.
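The three rules translate almost directly into code; here is an illustrative Python sketch (not any particular published implementation) that tracks live cells as a set of coordinates and advances the glider:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) coordinates
    of live cells on an unbounded grid."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: a dead cell with exactly three live neighbours becomes live.
    # Survival: a live cell with two or three live neighbours stays live.
    # Everything else dies of isolation or overcrowding.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
# After four generations the same five-cell shape reappears,
# shifted one cell diagonally across the grid.
```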
AI

AI Solves Schrödinger's Equation (phys.org) 67

An anonymous reader quotes a report from Phys.Org: A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult. Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universität has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency.

The deep neural network designed by [the] team is a new way of representing the wave functions of electrons. "Instead of the standard approach of composing the wave function from relatively simple mathematical components, we designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei," [Professor Frank Noé, who led the team effort] explains. "One peculiar feature of electronic wave functions is their antisymmetry. When two electrons are exchanged, the wave function must change its sign. We had to build this property into the neural network architecture for the approach to work," adds [Dr. Jan Hermann of Freie Universität Berlin, who designed the key features of the method in the study]. This feature, known as 'Pauli's exclusion principle,' is why the authors called their method 'PauliNet.' Besides the Pauli exclusion principle, electronic wave functions also have other fundamental physical properties, and much of the innovative success of PauliNet is that it integrates these properties into the deep neural network, rather than letting deep learning figure them out by just observing the data. "Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field," says Noé. "This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on."
The results were published in the journal Nature Chemistry.
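As a toy illustration of the antisymmetry constraint (not PauliNet's actual architecture), any base function of two electron coordinates can be antisymmetrized so that exchanging the electrons flips the sign of the wave function:

```python
import numpy as np

def g(x1, x2, w=np.array([0.7, -0.3])):
    # Arbitrary smooth base function standing in for a learned network.
    return np.tanh(w[0] * x1 + w[1] * x2)

def psi(x1, x2):
    # Antisymmetrized combination: exchanging the two "electrons" flips the sign.
    return (g(x1, x2) - g(x2, x1)) / np.sqrt(2)

x1, x2 = 0.4, -1.1
assert np.isclose(psi(x1, x2), -psi(x2, x1))   # sign flip under exchange
assert np.isclose(psi(x1, x1), 0.0)            # identical states give zero amplitude
```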
Education

After Canceling Exam, College Board Touts Record Number of AP CSP Exam Takers 47

theodp writes: Q. How many AP Computer Science Principles 'exam takers' would you have if you cancelled the AP CSP exam due to the coronavirus? A. More than 116,000!

That's according to the math behind a new College Board press release, which boasts, "In 2020, more than 116,000 students took the AP CSP Exam -- more than double the number of exam takers in the course's first year, and a 21% increase over the previous year. In 2020, 39,570 women took the AP CSP exam, nearly three times the number who tested in 2017." Which is somewhat confusing, since the College Board actually cancelled the 2020 AP CSP Exam last spring, explaining to students, "This year, there will be no end-of-year multiple-choice exam in Computer Science Principles [the exam was to have counted for 60% of students' scores] -- your AP score will be computed from the Create and Explore performance tasks only."

Still, Sunday's Washington Post reported the good PR news, as did tech-bankrolled College Board partner Code.org, which exclaimed, "Young women set records in computer science exams, again!" In 2018, Code.org lamented that many students enrolled in AP CSP wouldn't get college credit for the course "because they don't take the exam", so perhaps an increase in AP CSP scores awarded -- if not AP CSP exams taken -- should be added to the list of silver linings of the pandemic.
Medicine

Poor Countries Face Long Wait for Vaccines Despite Promises 235

With Americans, Britons and Canadians rolling up their sleeves to receive coronavirus vaccines, the route out of the pandemic now seems clear to many in the West, even if the rollout will take many months. But for poorer countries, the road will be far longer and rougher. From a report: The ambitious initiative known as COVAX created to ensure the entire world has access to COVID-19 vaccines has secured only a fraction of the 2 billion doses it hopes to buy over the next year, has yet to confirm any actual deals to ship out vaccines and is short on cash. The virus that has killed more than 1.6 million people has exposed vast inequities between countries, as fragile health systems and smaller economies were often hit harder. COVAX was set up by the World Health Organization, vaccines alliance GAVI and CEPI, a global coalition to fight epidemics, to avoid the international stampede for vaccines that has accompanied past outbreaks and would reinforce those imbalances.

But now some experts say the chances that coronavirus shots will be shared fairly between rich nations and the rest are fading fast. With vaccine supplies currently limited, developed countries, some of which helped fund the research with taxpayer money, are under tremendous pressure to protect their own populations and are buying up shots. Meanwhile, some poorer countries that signed up to the initiative are looking for alternatives because of fears it won't deliver. "It's simple math," said Arnaud Bernaert, head of global health at the World Economic Forum. Of the approximately 12 billion doses the pharmaceutical industry is expected to produce next year, about 9 billion shots have already been reserved by rich countries. "COVAX has not secured enough doses, and the way the situation may unfold is they will probably only get these doses fairly late." To date, COVAX's only confirmed, legally binding agreement is for up to 200 million doses, though that includes an option to order several times that number of additional doses, GAVI spokesman James Fulker said. It has agreements for another 500 million vaccines, but those are not legally binding.
Math

Are Fragments of Energy the Fundamental Building Blocks of the Universe? (theconversation.com) 99

hcs_$reboot shares a remarkable new theory from Larry M. Silverberg, an aerospace engineering professor at North Carolina State University (with colleague Jeffrey Eischen). They're proposing that matter is not made of particles (or even waves), as was long thought, but fragments of energy.

[W]hile the theories and math of waves and particles allow scientists to make incredibly accurate predictions about the universe, the rules break down at the largest and tiniest scales. Einstein proposed a remedy in his theory of general relativity. Using the mathematical tools available to him at the time, Einstein was able to better explain certain physical phenomena and also resolve a longstanding paradox relating to inertia and gravity. But instead of improving on particles or waves, he eliminated them as he proposed the warping of space and time. Using newer mathematical tools, my colleague and I have demonstrated a new theory that may accurately describe the universe... Instead of basing the theory on the warping of space and time, we considered that there could be a building block that is more fundamental than the particle and the wave....

Much to our surprise, we discovered that there were only a limited number of ways to describe a concentration of energy that flows. Of those, we found just one that works in accordance with our mathematical definition of flow. We named it a fragment of energy... Using the fragment of energy as a building block of matter, we then constructed the math necessary to solve physics problems... More than 100 [years] ago, Einstein had turned to two legendary problems in physics to validate general relativity: the ever-so-slight yearly shift — or precession — in Mercury's orbit, and the tiny bending of light as it passes the Sun... In both problems, we calculated the trajectories of the moving fragments and got the same answers as those predicted by the theory of general relativity. We were stunned.

Our initial work demonstrated how a new building block is capable of accurately modeling bodies from the enormous to the minuscule. Where particles and waves break down, the fragment of energy building block held strong. The fragment could be a single potentially universal building block from which to model reality mathematically — and update the way people think about the building blocks of the universe.

Math

Physicists Nail Down the 'Magic Number' That Shapes the Universe (quantamagazine.org) 177

Natalie Wolchover writes via Quanta Magazine: As fundamental constants go, the speed of light, c, enjoys all the fame, yet c's numerical value says nothing about nature; it differs depending on whether it's measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It's a pure number that shapes the universe to an astonishing degree -- "a magic number that comes to us with no understanding," as Richard Feynman described it. Paul Dirac considered the origin of the number "the most fundamental unsolved problem of physics."

Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. [...] The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn't exist.
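The constant's textbook definition, in terms of the elementary charge, the vacuum permittivity, the reduced Planck constant and the speed of light, can be checked in a few lines of Python; this reproduces the CODATA reference value rather than the new Paris measurement:

```python
from math import pi
from scipy.constants import e, epsilon_0, hbar, c, fine_structure

# alpha = e^2 / (4 * pi * eps0 * hbar * c), a dimensionless pure number.
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)

print(alpha)            # ~0.0072973525693
print(1 / alpha)        # ~137.036
print(fine_structure)   # SciPy's CODATA value, for comparison
```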

Today, in a new paper in the journal Nature, a team of four physicists led by Saida Guellati-Khelifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant's value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.) With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement in 2018 by Müller's group at Berkeley, the main competition. (Guellati-Khelifa made the most precise measurement before Müller's in 2011.) Müller said of his rival's new measurement of alpha, "A factor of three is a big deal. Let's not be shy about calling this a big accomplishment."

Math

South Africa's Lottery Probed As 5, 6, 7, 8, 9 and 10 Drawn (bbc.com) 195

AmiMoJo shares a report from the BBC: The winning numbers in South Africa's national lottery have caused a stir and sparked accusations of fraud over their unusual sequence. Tuesday's PowerBall lottery saw the numbers five, six, seven, eight and nine drawn, while the Powerball itself was, you've guessed it, 10. Some South Africans have alleged a scam and an investigation is under way. The organizers said 20 people purchased a winning ticket and won 5.7 million rand ($370,000; 278,000 pounds) each. Another 79 ticketholders won 6,283 rand each for guessing the sequence from five up to nine but missing the PowerBall.

The chances of winning South Africa's PowerBall lottery are one in 42,375,200 -- the number of different combinations when selecting five balls from a set of 50, plus an additional bonus ball from a pool of 20. The odds of the draw resulting in the numbers seen in Tuesday's televised live event are the same as any other combination. Competitions resulting in multiple winners are rare, but this may have something to do with this particular sequence.
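The quoted odds follow directly from the combinatorics, as a quick Python check confirms:

```python
from math import comb

# Five main balls from a pool of 50, plus one PowerBall from a separate pool of 20.
main_draws = comb(50, 5)     # 2,118,760 ways to pick the five main balls
total = main_draws * 20      # 42,375,200 -- the 1-in-42,375,200 odds quoted above
print(main_draws, total)
```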

Education

Assigning Homework Exacerbates Class Divides, Researchers Find (vice.com) 312

"Education scholars say that math homework as it's currently assigned reinforces class divides in society and needs to change for good," according to Motherboard — citing a new working paper from education scholars: Status-reinforcing processes, or ones that fortify pre-existing divides, are a dime a dozen in education. Standardized testing, creating honors and AP tracks, and grouping students based on perceived ability all serve to disadvantage students who lack the support structures and parental engagement associated with affluence. Looking specifically at math homework, the authors of the new working paper wanted to see if homework was yet another status-reinforcing process. As it turns out, it was, and researchers say that the traditional solutions offered up to fix the homework gap won't work.

"Here, teachers knew that students were getting unequal support with homework," said Jessica Calarco, the first author of the paper and an associate professor of psychology at Indiana University. "And yet, because of these standard, taken-for-granted policies that treated homework as students' individual responsibilities, it erased those unequal contexts of support and led teachers to interpret and respond to homework in these status-reinforcing ways...."

The teachers interviewed for the paper acknowledged the unequal contexts affecting whether students could complete their math homework fully and correctly, Calarco said. However, that did not stop the same teachers from using homework as a way to measure students' abilities. "The most shocking and troubling part to me was hearing teachers write off students because they didn't get their homework done," Calarco said.... Part of the reason why homework can serve as a status-reinforcing process is that formal school policies and grading schemes treat it as a measure of a student's individual effort and responsibility, when many other factors affect completion, Calarco said....

"I'm not sure I want to completely come out and say that we need to ban homework entirely, but I think we need to really seriously reconsider when and how we assign it."

Businesses

IBM Apologizes For Firing Computer Pioneer For Being Transgender... 52 Years Later (forbes.com) 164

On August 29, 1968, IBM's CEO fired computer scientist and transgender pioneer Lynn Conway to avoid the public embarrassment of employing a transwoman. Nearly 52 years later, in an act that defines its present-day culture, IBM is apologizing and seeking forgiveness. Jeremy Alicandri reports via Forbes: On January 2, 1938, Lynn Conway's life began in Mount Vernon, NY. With a reported IQ of 155, Conway was an exceptional and inquisitive child who loved math and science during her teens. She went on to study physics at MIT and earned her bachelor's and master's degrees in electrical engineering at Columbia University's Engineering School. In 1964, Conway joined IBM Research, where she made major innovations in computer design, ensuring a promising career in the international conglomerate (IBM was the 7th largest corporation in the world at the time). Recently married and with two young daughters, she lived a seemingly perfect life. But Conway faced a profound existential challenge: she had been born as a boy.
[...]
[W]hile IBM knew of its key role in the Conway saga, the company remained silent. That all changed in August 2020. When writing an article on LGBTQ diversity in the automotive industry, I included Conway's story as an example of the costly consequences to employers that fail to promote an inclusive culture. I then reached out to IBM to learn if its stance had changed after 52 years. To my surprise, IBM admitted regrets and responsibility for Conway's firing, stating, "We deeply regret the hardship Lynn encountered." The company also explained that it was in communication with Conway for a formal resolution, which came two months later. Arvind Krishna, IBM's CEO, and other senior executives had determined that Conway should be recognized and awarded "for her lifetime body of technical achievements, both during her time at IBM and throughout her career."

Dario Gil, Director of IBM Research, who revealed the award during the online event, says, "Lynn was recently awarded the rare IBM Lifetime Achievement Award, given to individuals who have changed the world through technology inventions. Lynn's extraordinary technical achievements helped define the modern computing industry. She paved the way for how we design and make computing chips today -- and forever changed microelectronics, devices, and people's lives." The company also acknowledged that after Conway's departure in 1968, her research aided its own success. "In 1965 Lynn created the architectural level Advanced Computing System-1 simulator and invented a method that led to the development of a superscalar computer. This dynamic instruction scheduling invention was later used in computer chips, greatly improving their performance," a spokesperson stated.

Math

Computer Scientists Achieve 'Crown Jewel' of Cryptography (quantamagazine.org) 69

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work. Erica Klarreich, reporting for Quanta Magazine: In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team's approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment. "But I thought iO doesn't exist?" he said. At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is "one cryptographic primitive to rule them all," said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true. Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, "you could see a lot of negative vibes," said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, "Who will win: the makers or the breakers?" "There were the people who were the zealots, and they believed in [iO] and kept working on it," said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, "there was less and less of those people."

Now, Jain -- together with Huijia Lin of the University of Washington and Amit Sahai, Jain's adviser at UCLA -- has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only "standard" security assumptions. All cryptographic protocols rest on assumptions -- some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past. "Barring a really surprising development, these assumptions will stand," Ishai said. While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of "deniable" encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and "functional" encryption, in which you can give chosen users different levels of access to perform computations using your data. The new result should definitively silence the iO skeptics, Ishai said. "Now there will no longer be any doubts about the existence of indistinguishability obfuscation," he said. "It seems like a happy end."

AI

AI Has Cracked a Key Mathematical Puzzle For Understanding Our World (technologyreview.com) 97

An anonymous reader shares a report: Unless you're a physicist or an engineer, there really isn't much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I've never used them since in the real world. But partial differential equations, or PDEs, are also kind of magical. They're a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes. The catch is PDEs are notoriously hard to solve. And here, the meaning of "solve" is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. "Solving" Navier-Stokes allows you to take a snapshot of the air's motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It's also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering. Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It's also much more generalizable, capable of solving entire families of PDEs -- such as the Navier-Stokes equation for any type of fluid -- without needing retraining. Finally, it is 1,000 times faster than traditional mathematical formulas, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That's right. Bring it on.
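The report doesn't spell out the Caltech technique itself, but to make "solving a PDE" concrete, here is a toy explicit finite-difference solver for the one-dimensional heat equation, a vastly simpler equation than Navier-Stokes and a classical numerical method rather than the new deep-learning approach:

```python
import numpy as np

# Toy explicit finite-difference solver for the 1-D heat equation
#   u_t = alpha * u_xx on [0, 1], with the ends held at zero.
alpha = 0.01                    # diffusivity
nx, nt = 101, 2000              # spatial grid points, time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha        # obeys the stability limit dt <= dx^2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)           # initial temperature profile

for _ in range(nt):
    # Central difference approximates the second spatial derivative.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The exact solution is sin(pi x) * exp(-alpha * pi^2 * t); compare at the final time.
t = nt * dt
error = np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * t)))
print(f"max error after {nt} steps: {error:.2e}")
```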

Math

Microsoft Overhauls Excel With Live Custom Data Types (theverge.com) 27

Microsoft is overhauling Excel with the ability to support custom live data types. The Verge reports: You could import the data type for Seattle, for example, and then create a formula that references that single cell to pull out information on the population of Seattle. These data types work by cramming a set of structured data into a single cell in Excel that can then be referenced by the rest of the spreadsheet. Data can also be refreshed to keep it up to date. If you're a student who is researching the periodic table, for example, you could create a cell for each element and easily pull out individual data from there.

Microsoft is bringing more than 100 new data types into Excel for Microsoft 365 Personal or Family subscribers. Excel users will be able to track stocks, pull in nutritional information for dieting plans, and much more, thanks to data from Wolfram Alpha's service. This is currently available for Office beta testers in the Insiders program. Where these custom data types will be most powerful is obviously for businesses that rely on Excel daily. Microsoft is leveraging its Power BI service to act as the connector to bring sources of data into Excel data types on the commercial side, allowing businesses to connect up a variety of data. This could be hierarchical data or even references to other data types and images. Businesses will even be able to convert existing cells into linked data types, making data analysis a lot easier.

Power BI won't be the only way for this feature to work, though. When you import data into Excel, you can now transform it into a data type with Power Query. That could include information from files, databases, websites, and more. The data that's imported can be cleaned up and then converted into a data type to be used in spreadsheets. If you've pulled in data using Power Query, it's easy to refresh the data from its original source. [...] These new Power BI data types will be available in Excel for Windows for all Microsoft 365 / Office 365 subscribers that also have a Power BI Pro service plan. Power Query data types are also rolling out to subscribers. On the consumer side, Wolfram Alpha data types are currently available in preview for Office insiders and should be available to all Microsoft 365 subscribers soon.

Math

Winning Bid: How Auction Theory Took the Nobel Memorial Prize in Economics (ft.com) 18

Tim Harford, writing for Financial Times: A well-designed auction forces bidders to reveal the truth about their own estimate of the prize's value. At the same time, the auction shares that information with the other bidders. And it sets the price accordingly. It is quite a trick. But, in practice, it is a difficult trick to get right. In the 1990s, the US Federal government turned to auction theorists -- Milgrom and Wilson prominent among them -- for advice on auctioning radio-spectrum rights. "The theory that we had in place had only a little bit to do with the problems that they actually faced," Milgrom recalled in an interview in 2007. "But the proposals that were being made by the government were proposals that we were perfectly capable of analysing the flaws in and improving."

The basic challenge with radio-spectrum auctions is that many prizes are on offer, and bidders desire only certain combinations. A TV company might want the right to use Band A, or Band B, but not both. Or the right to broadcast in the east of England, but only if they also had the right to broadcast in the west. Such combinatorial auctions are formidably challenging to design, but Milgrom and Wilson got to work. Joshua Gans, a former student of Milgrom's who is now a professor at the University of Toronto, praises both men for their practicality. Their theoretical work is impressive, he said, "but they realised that when the world got too complex, they shouldn't adhere to proving strict theorems."
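The truth-revealing property described above is easiest to see in the textbook sealed-bid second-price (Vickrey) auction, a far simpler mechanism than the combinatorial spectrum auctions Milgrom and Wilson designed; a minimal sketch:

```python
def second_price_auction(bids):
    """Sealed-bid auction: the highest bidder wins but pays the second-highest bid.
    Under this rule, bidding your true valuation is a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Bidding below your true value can only cost you a profitable win;
# bidding above it can only land you an unprofitable one.
print(second_price_auction({"A": 120, "B": 95, "C": 80}))   # ('A', 95)
```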

Math

Computer Scientists Break Traveling Salesperson Record (quantamagazine.org) 72

After 44 years, there's finally a better way to find approximate solutions to the notoriously difficult traveling salesperson problem. From a report: When Nathan Klein started graduate school two years ago, his advisers proposed a modest plan: to work together on one of the most famous, long-standing problems in theoretical computer science. Even if they didn't manage to solve it, they figured, Klein would learn a lot in the process. He went along with the idea. "I didn't know to be intimidated," he said. "I was just a first-year grad student -- I don't know what's going on." Now, in a paper posted online in July, Klein and his advisers at the University of Washington, Anna Karlin and Shayan Oveis Gharan, have finally achieved a goal computer scientists have pursued for nearly half a century: a better way to find approximate solutions to the traveling salesperson problem. This optimization problem, which seeks the shortest (or least expensive) round trip through a collection of cities, has applications ranging from DNA sequencing to ride-sharing logistics. Over the decades, it has inspired many of the most fundamental advances in computer science, helping to illuminate the power of techniques such as linear programming. But researchers have yet to fully explore its possibilities -- and not for want of trying. The traveling salesperson problem "isn't a problem, it's an addiction," as Christos Papadimitriou, a leading expert in computational complexity, is fond of saying.

Most computer scientists believe that there is no algorithm that can efficiently find the best solutions for all possible combinations of cities. But in 1976, Nicos Christofides came up with an algorithm that efficiently finds approximate solutions -- round trips that are at most 50% longer than the best round trip. At the time, computer scientists expected that someone would soon improve on Christofides' simple algorithm and come closer to the true solution. But the anticipated progress did not arrive. "A lot of people spent countless hours trying to improve this result," said Amin Saberi of Stanford University. Now Karlin, Klein and Oveis Gharan have proved that an algorithm devised a decade ago beats Christofides' 50% factor, though they were only able to subtract 0.2 billionth of a trillionth of a trillionth of a percent. Yet this minuscule improvement breaks through both a theoretical logjam and a psychological one. Researchers hope that it will open the floodgates to further improvements.
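For reference, Christofides' 1976 recipe can be sketched in a few lines of Python on top of the networkx library (an illustrative implementation only; the Karlin-Klein-Oveis Gharan result analyzes a randomized variant of this approach, not this code):

```python
import networkx as nx

def christofides_tour(G):
    """Approximate TSP tour on a complete graph G with metric edge weights;
    the tour is at most 1.5x the optimum (Christofides, 1976)."""
    # 1. Minimum spanning tree.
    mst = nx.minimum_spanning_tree(G)
    # 2. Minimum-weight perfect matching on the MST's odd-degree vertices.
    odd = [v for v, deg in mst.degree() if deg % 2 == 1]
    matching = nx.min_weight_matching(G.subgraph(odd))
    # 3. MST + matching is Eulerian; walk it and shortcut repeated vertices.
    multigraph = nx.MultiGraph(mst)
    multigraph.add_edges_from(matching)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(multigraph):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]

# Four cities on a 4x3 rectangle, with straight-line distances as weights.
points = {0: (0, 0), 1: (0, 3), 2: (4, 0), 3: (4, 3)}
G = nx.complete_graph(4)
for u, v in G.edges():
    (x1, y1), (x2, y2) = points[u], points[v]
    G[u][v]["weight"] = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

print(christofides_tour(G))   # e.g. [0, 1, 3, 2, 0], the 14-unit rectangle tour
```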

Windows

ZDNet Argues Linux-Based Windows 'Makes Perfect Sense' (zdnet.com) 100

Last week open-source advocate Eric S. Raymond argued Microsoft was quietly switching over to a Linux kernel that emulates Windows. "He's on to something," says ZDNet's contributing editor Steven J. Vaughan-Nichols: I've long thought that Microsoft was considering migrating the Windows interface to running on the Linux kernel. Why...? [Y]ou can run standard Linux programs now on WSL2 without any trouble.

That's because Linux is well on its way to becoming a first-class citizen on the Windows desktop. Multiple Linux distros, starting with Ubuntu, Red Hat Fedora, and SUSE Linux Enterprise Desktop (SLED), now run smoothly on WSL2. That's because Microsoft has replaced its WSL1 translation layer, which converted Linux kernel calls into Windows calls, with WSL2. With WSL2 Microsoft's own Linux kernel is running on a thin version of the Hyper-V hypervisor. That's not all. With the recent Windows 10 Insider Preview build 20211, you can now access Linux file systems, such as ext4, from Windows File Manager and PowerShell. On top of that, Microsoft developers are making it easy to run Linux graphical applications on Windows...

[Raymond] also observed, correctly, that Microsoft no longer depends on Windows for its cash flow but on its Azure cloud offering. Which, by the way, is running more Linux instances than it is Windows Server instances. So, that being the case, why should Microsoft keep pouring money into the notoriously trouble-prone Windows kernel — over 50 serious bugs fixed in the last Patch Tuesday roundup — when it can use the free-as-in-beer Linux kernel? Good question. He thinks Microsoft can do the math and switch to Linux.

I think he's right. Besides his points, there are others. Microsoft already wants you to replace your existing PC-based software, like Office 2019, with software-as-a-service (SaaS) programs like Office 365. Microsoft also encourages you to move your voice, video, chat, and texting to Microsoft's Azure Communication Services even if you don't use Teams. With SaaS programs, Microsoft doesn't care what operating system you're running. They're still going to get paid whether you run Office 365 on Windows, a Chromebook, or, yes, Linux.

I see two possible paths ahead for Windows. First, there's Linux-based Windows. It simply makes financial sense. Second, the existing Windows desktop could be replaced by the Windows Virtual Desktop or other Desktop-as-a-Service (DaaS) offerings.... Google chose to save money and increase security by using Linux as the basis for Chrome OS. This worked out really well for Google. It can work for Microsoft too — let's take a blast from the past and call it Lindows.

Math

Teenager on TikTok Resurrects an Essential Question: What is Math? (smithsonianmag.com) 160

Long-time Slashdot reader fahrbot-bot shares a story that all started with a high school student's innocuous question on TikTok, leading academic mathematicians and philosophers to weigh in on "a very ancient and unresolved debate in the philosophy of science," reports Smithsonian magazine.

"What, exactly, is math?" Is it invented, or discovered? And are the things that mathematicians work with — numbers, algebraic equations, geometry, theorems and so on — real? Some scholars feel very strongly that mathematical truths are "out there," waiting to be discovered — a position known as Platonism.... Many mathematicians seem to support this view. The things they've discovered over the centuries — that there is no highest prime number; that the square root of two is an irrational number; that the number pi, when expressed as a decimal, goes on forever — seem to be eternal truths, independent of the minds that found them....

Other scholars — especially those working in other branches of science — view Platonism with skepticism. Scientists tend to be empiricists; they imagine the universe to be made up of things we can touch and taste and so on; things we can learn about through observation and experiment. The idea of something existing "outside of space and time" makes empiricists nervous: It sounds embarrassingly like the way religious believers talk about God, and God was banished from respectable scientific discourse a long time ago. Platonism, as mathematician Brian Davies has put it, "has more in common with mystical religions than it does with modern science." The fear is that if mathematicians give Plato an inch, he'll take a mile. If the truth of mathematical statements can be confirmed just by thinking about them, then why not ethical problems, or even religious questions? Why bother with empiricism at all...?

Platonism has various alternatives. One popular view is that mathematics is merely a set of rules, built up from a set of initial assumptions — what mathematicians call axioms... But this view has its own problems. If mathematics is just something we dream up from within our own heads, why should it "fit" so well with what we observe in nature...? Theoretical physicist Eugene Wigner highlighted this issue in a famous 1960 essay titled, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." Wigner concluded that the usefulness of mathematics in tackling problems in physics "is a wonderful gift which we neither understand nor deserve."
