Space

How Many Atoms Are In the Observable Universe? (livescience.com) 77

Long-time Slashdot reader fahrbot-bot quotes LiveScience's exploration of the math: To start out 'small,' there are around 7 octillion, or 7x10^27 (7 followed by 27 zeros), atoms in an average human body, according to The Guardian. Given this vast sum of atoms in one person alone, you might think it would be impossible to determine how many atoms are in the entire universe. And you'd be right: Because we have no idea how large the entire universe really is, we can't find out how many atoms are within it.

However, it is possible to work out roughly how many atoms are in the observable universe — the part of the universe that we can see and study — using some cosmological assumptions and a bit of math.

[...]

Doing the math

To work out the number of atoms in the observable universe, we need to know its mass, which means we have to find out how many stars there are. There are around 10^11 to 10^12 galaxies in the observable universe, and each galaxy contains between 10^11 and 10^12 stars, according to the European Space Agency. This gives us somewhere between 10^22 and 10^24 stars. For the purposes of this calculation, we can say that there are 10^23 stars in the observable universe. Of course, this is just a best guess; galaxies can range in size and number of stars, but because we can't count them individually, this will have to do for now.

On average, a star weighs around 2.2x10^32 pounds (10^32 kilograms), according to Science ABC, which means that the mass of the universe is around 2.2x10^55 pounds (10^55 kilograms). Now that we know the mass, or amount of matter, we need to see how many atoms fit into it. On average, each gram of matter has around 10^24 protons, according to Fermilab, a national laboratory for particle physics in Illinois. That proton count is also the atom count if we assume, as earlier, that the universe is made entirely of hydrogen, since each hydrogen atom contains exactly one proton.

This gives us 10^82 atoms in the observable universe. To put that into context, that is 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 atoms (a 1 followed by 82 zeros).
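
The arithmetic is simple enough to check directly; here is a minimal Python sketch using the article's round numbers (10^23 stars, 10^32 kg per star, 10^24 protons per gram):

```python
# Back-of-the-envelope estimate of atoms in the observable universe,
# using the article's rough assumptions (a hydrogen-only universe).
stars = 1e23                # ~10^11-10^12 galaxies x ~10^11-10^12 stars each
star_mass_kg = 1e32         # the article's average stellar mass
protons_per_gram = 1e24     # per Fermilab's figure

total_mass_g = stars * star_mass_kg * 1000    # kilograms to grams
atoms = total_mass_g * protons_per_gram       # one proton ~ one hydrogen atom
print(f"{atoms:.0e}")                         # ~1e+82
```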

This number is only a rough guess, based on a number of approximations and assumptions. But given our current understanding of the observable universe, it is unlikely to be too far off the mark.

Crime

French Engineer Claims He's Solved the Zodiac Killer's Final Code (msn.com) 57

The New York Times tells the story of Fayçal Ziraoui, a 38-year-old French-Moroccan business consultant who "caused an online uproar" after saying he'd cracked the last two unsolved ciphers of the four attributed to the Zodiac killer in California "and identified him, potentially ending a 50-year-old quest." Maybe because he said he cracked them in just two weeks. Many Zodiac enthusiasts consider the remaining ciphers — Z32 and Z13 — unsolvable because they are too short to determine the encryption key. An untold number of solutions could work, they say, rendering verification nearly impossible.

But Mr. Ziraoui said he had a sudden thought. The code-crackers who had solved the [earlier] 340-character cipher in December had been able to do so by identifying the encryption key, which they had put into the public domain when announcing their breakthrough. What if the killer used that same encryption key for the two remaining ciphers? So he said he applied it to the 32-character cipher, which the killer had included in a letter as the key to the location of a bomb set to go off at a school in the fall of 1970. (It never did, even though police failed to crack the code.) That produced a sequence of random letters from the alphabet. Mr. Ziraoui said he then worked through a half-dozen steps including letter-to-number substitutions, identifying coordinates in numbers and using a code-breaking program he created to crunch jumbles of letters into coherent words...

After two weeks of intense code-cracking, he deciphered the sentence, "LABOR DAY FIND 45.069 NORT 58.719 WEST." The message referred to coordinates based on the earth's magnetic field, not the more familiar geographic coordinates. The sequence zeroed in on a location near a school in South Lake Tahoe, a city in California referred to in another postcard believed to have been sent by the Zodiac killer in 1971.

An excited Mr. Ziraoui said he immediately turned to Z13, which supposedly revealed the killer's name, using the same encryption key and various cipher-cracking techniques. [The mostly un-coded letter includes a sentence which says "My name is _____," followed by a 13-character cipher.] After about an hour, Mr. Ziraoui said he came up with "KAYR," which he realized resembled the last name of Lawrence Kaye, a salesman and career criminal living in South Lake Tahoe who had been a suspect in the case. Mr. Kaye, who also used the pseudonym Kane, died in 2010.

The typo was similar to ones found in previous ciphers, he noticed, likely errors made by the killer when encoding the message. A result so close to Mr. Kaye's name, on top of the South Lake Tahoe location, was too much to be a coincidence, he thought. Mr. Kaye had been the subject of a report by Harvey Hines, a now-deceased police detective, who was convinced Mr. Kaye was the Zodiac killer but was unable to convince his superiors. Around 2 a.m. on Jan. 3, an exhausted but elated Mr. Ziraoui posted a message entitled "Z13 — My Name is KAYE" on a 50,000-member Reddit forum dedicated to the Zodiac Killer.

The message was deleted within 30 minutes.

"Sorry, I've removed this one as part of a sort of general policy against Z13 solution posts," the forum's moderator wrote, arguing that the cipher was too short to be solvable.

Math

Mathematicians Welcome Computer-Assisted Proof in 'Grand Unification' Theory (nature.com) 36

Proof-assistant software handles an abstract concept at the cutting edge of research, revealing a bigger role for software in mathematics. From a report: Mathematicians have long used computers to do numerical calculations or manipulate complex formulas. In some cases, they have proved major results by making computers do massive amounts of repetitive work -- the most famous being a proof in the 1970s that any map can be coloured with just four different colours, and without filling any two adjacent countries with the same colour. But systems known as proof assistants go deeper. The user enters statements into the system to teach it the definition of a mathematical concept -- an object -- based on simpler objects that the machine already knows about.

A statement can also just refer to known objects, and the proof assistant will answer whether the fact is 'obviously' true or false based on its current knowledge. If the answer is not obvious, the user has to enter more details. Proof assistants thus force the user to lay out the logic of their arguments in a rigorous way, and they fill in simpler steps that human mathematicians had consciously or unconsciously skipped. Once researchers have done the hard work of translating a set of mathematical concepts into a proof assistant, the program generates a library of computer code that can be built on by other researchers and used to define higher-level mathematical objects. In this way, proof assistants can help to verify mathematical proofs that would otherwise be time-consuming and difficult, perhaps even practically impossible, for a human to check. Proof assistants have long had their fans, but this is the first time they have played a major role at the cutting edge of a field, says Kevin Buzzard, a mathematician at Imperial College London who was part of a collaboration that checked a result by Peter Scholze and Dustin Clausen. "The big remaining question was: can they handle complex mathematics?" says Buzzard. "We showed that they can."
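
For flavor, here is a toy example of that workflow in Lean 4, the proof assistant used by Buzzard's collaboration (the lemmas below are illustrative, not part of the checked result): the user states a fact about objects the system already knows, and the assistant either accepts the proof or demands more detail.

```lean
-- A fact Lean can check against a lemma already in its library:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A fact that is not definitionally obvious, so the user spells out the
-- inductive argument step by step:
theorem zero_add_example (a : Nat) : 0 + a = a := by
  induction a with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```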

Math

When Graphs Are a Matter of Life and Death (newyorker.com) 122

Pie charts and scatter plots seem like ordinary tools, but they revolutionized the way we solve problems. From a report: John Carter has only an hour to decide. The most important auto race of the season is looming; it will be broadcast live on national television and could bring major prize money. If his team wins, it will get a sponsorship deal and a chance to start making some real profits for a change. There's just one problem. In seven of the past twenty-four races, the engine in the Carter Racing car has blown out. An engine failure live on TV will jeopardize sponsorships -- and the driver's life. But withdrawing has consequences, too. The wasted entry fee means finishing the season in debt, and the team won't be happy about the missed opportunity for glory. As Burns's First Law of Racing says, "Nobody ever won a race sitting in the pits."

One of the engine mechanics has a hunch about what's causing the blowouts. He thinks that the engine's head gasket might be breaking in cooler weather. To help Carter decide what to do, a graph is devised that shows the conditions during each of the blowouts: the outdoor temperature at the time of the race plotted against the number of breaks in the head gasket. The dots are scattered into a sort of crooked smile across a range of temperatures from about fifty-five degrees to seventy-five degrees. The upcoming race is forecast to be especially cold, just forty degrees, well below anything the cars have experienced before. So: race or withdraw?

This case study, based on real data and devised by a pair of clever business professors, has been shown to students around the world for more than three decades. Most groups presented with the Carter Racing story look at the scattered dots on the graph and decide that the relationship between temperature and engine failure is inconclusive. Almost everyone chooses to race. Almost no one looks at that chart and asks to see the seventeen missing data points -- the data from those races which did not end in engine failure.

Space

Jeff Bezos Plans to Travel to Space on Blue Origin Flight (bloomberg.com) 131

Jeff Bezos will go to space next month when his company, Blue Origin, launches its first passenger-carrying mission. From a report: The 57-year-old, who plans to travel alongside his brother, Mark, made the announcement in an Instagram post Monday. The scheduled launch next month will be about two weeks after the billionaire plans to step down as chief executive officer of Amazon.com. "Ever since I was five years old, I've dreamed of traveling to space," Bezos said in the post. "On July 20th, I will take that journey with my brother. The greatest adventure, with my best friend."

Blue Origin is one of several high-profile space-tourism companies backed by a wealthy entrepreneur, alongside Elon Musk's Space Exploration Technologies and Richard Branson-backed Virgin Galactic Holdings. Both of those companies are making plans to carry paying customers. Blue Origin is auctioning off a seat on its New Shepard rocket for the July 20 flight, an 11-minute trip to suborbital space that will reach an altitude of about 100 kilometers (62 miles). The spot will be the only one available for purchase on the flight, and the proceeds will go to a Blue Origin foundation that promotes math and science education.

Education

California's Controversial Math Overhaul Focuses on Equity (latimes.com) 308

A plan to reimagine math instruction for 6 million California students has become ensnared in equity and fairness issues -- with critics saying the proposed guidelines will hold back gifted students and supporters saying they will, over time, give all kindergartners through 12th-graders a better chance to excel. From a report: The proposed new guidelines aim to accelerate achievement while making mathematical understanding more accessible and valuable to as many students as possible, including those shut out from high-level math in the past because they had been "tracked" in lower level classes. The guidelines call on educators generally to keep all students in the same courses until their junior year in high school, when they can choose advanced subjects, including calculus, statistics and other forms of data science.

Although still a draft, the Mathematics Framework achieved a milestone Wednesday, earning approval from the state's Instructional Quality Commission. The members of that body moved the framework along, approving numerous recommendations that a writing team is expected to incorporate. The commission told writers to remove a document that had become a point of contention for critics. It described its goals as calling out systemic racism in mathematics, while helping educators create more inclusive, successful classrooms. Critics said it needlessly injected race into the study of math. The state Board of Education is scheduled to have the final say in November.

Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.
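
As a rough sanity check of the "nearly four exaflops" figure, assume NVIDIA's advertised A100 peak of 624 teraflops for sparse FP16/BF16 Tensor Core math (a spec-sheet number, not one given in the post):

```python
# Spec-sheet arithmetic behind "nearly four exaflops of AI performance".
gpus = 6159
peak_tflops_per_gpu = 624     # A100 peak, FP16/BF16 Tensor Core with sparsity

total_exaflops = gpus * peak_tflops_per_gpu * 1e12 / 1e18
print(f"{total_exaflops:.2f} exaflops")   # ~3.84
```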

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.

Science

Analyzing 30 Years of Brain Research Finds No Meaningful Differences Between Male and Female Brains (theconversation.com) 256

"As a neuroscientist long experienced in the field, I recently completed a painstaking analysis of 30 years of research on human brain sex differences..." reports Lise Eliot in a recent article on The Conversation. "[T]here's no denying the decades of actual data, which show that brain sex differences are tiny and swamped by the much greater variance in individuals' brain measures across the population."

Bloomberg follows up: In 2005, Harvard's then president Lawrence Summers theorized that so few women went into science because, well, they just weren't inherently good at it. "Issues of intrinsic aptitude," Summers said, such as "overall IQ, mathematical ability, scientific ability" kept many women out of the field... "I would like nothing better than to be proved wrong," Summers said back in 2005. Well, sixteen years later, it appears his wish came true.

In a new study published in the June edition of Neuroscience & Biobehavioral Reviews, Lise Eliot, a professor of neuroscience at Rosalind Franklin University, analyzed 30 years' worth of brain research (mostly fMRIs and postmortem studies) and found no meaningful cognitive differences between men and women. Men's brains were on average about 11% larger than women's — as were their hearts, lungs and other organs — because brain size is proportional to body size. But just as taller people aren't any more intelligent than shorter people, neither, Eliot and her co-authors found, were men smarter than women. They weren't better at math or worse at language processing, either.

In her paper, Eliot and her co-authors acknowledge that psychological studies have found gendered personality traits (male aggression, for example) but at the brain level those differences don't seem to appear.

"Another way to think about it is every individual brain is a mosaic of circuits that control the many dimensions of masculinity and femininity, such as emotional expressiveness, interpersonal style, verbal and analytic reasoning, sexuality and gender identity itself," Eliot's original article had stated.

"Or, to use a computer analogy, gendered behavior comes from running different software on the same basic hardware."

Classic Games (Games)

Teaching Children To Play Chess Found To Decrease Risk Aversion (phys.org) 132

An anonymous reader quotes a report from Phys.Org: A trio of researchers from Monash University and Deakin University has found that teaching children to play chess can reduce their aversion to risk. In their paper published in the Journal of Development Economics, Asad Islam, Wang-Sheng Lee and Aaron Nicholas describe studying the impact of learning chess on 400 children in the U.K. The researchers found that most of the children experienced a decrease in risk aversion in a variety of game playing scenarios. They also noticed that playing chess led to better math scores for some of the students and improvements in logic or rational thinking.

The researchers note that the game of chess is very well suited to building confidence in risk taking when there is reason to believe it might improve an outcome. In contrast, students also learned to avoid taking risks haphazardly, finding that such risks rarely lead to a positive outcome. They [note that the] line between good and poor risk-taking is especially evident in chess, which means that the more a person plays, the sharper their skills become. The researchers also found that the skills learned during chess playing appeared to be long lasting -- most of the children retained their decrease in risk aversion a full year after the end of their participation in the study. The researchers [...] did not find any evidence of changes in other cognitive skills, such as improvements in grades other than math or general creativity.

Verizon

Verizon Will Shut Down Its 3G Network In 2022 (engadget.com) 64

An anonymous reader quotes a report from Engadget: Verizon will shut down its 3G services on December 31st, 2022, VP of network engineering Mike Haberman announced today. According to Haberman, less than 1 percent of Verizon customers still access the 3G network, with 99 percent on 4G LTE or 5G. Verizon has roughly 94 million customers, so by the company's own math, as many as 940,000 people are still using Verizon's 3G network.

"Customers who still have a 3G device will continue to be strongly encouraged to make a change now," Haberman wrote. "As we move closer to the shut-off date customers still accessing the 3G network may experience a degradation or complete loss of service, and our service centers will only be able to offer extremely limited troubleshooting help on these older devices." Verizon has been teasing a shut-off of its 3G CDMA services for years. [...] The delay to 2022 is final — there will be no more extensions, Haberman said. He noted that this will be "months after our competitors have shut off their networks completely."

Math

Quantum Computer Solves Decades-Old Problem Three Million Times Faster Than a Classical Computer (zdnet.com) 77

ZDNet reports: Scientists from quantum computing company D-Wave have demonstrated that, using a method called quantum annealing, they could simulate some materials up to three million times faster than it would take with corresponding classical methods.

Together with researchers from Google, the scientists set out to measure the speed of simulation in one of D-Wave's quantum annealing processors, and found that performance increased with both simulation size and problem difficulty, to reach a million-fold speedup over what could be achieved with a classical CPU... The calculation that D-Wave and Google's teams tackled is a real-world problem; in fact, the underlying physics was worked out by Vadim Berezinskii, J. Michael Kosterlitz and David Thouless (the latter two shared the 2016 Nobel Prize in Physics), who studied the behavior of so-called "exotic magnetism", which occurs in quantum magnetic systems....

Instead of proving quantum supremacy, which happens when a quantum computer runs a calculation that is impossible to resolve with classical means, D-Wave's latest research demonstrates that the company's quantum annealing processors can lead to a computational performance advantage... "What we see is a huge benefit in absolute terms," said Andrew King, director of performance research at D-Wave. "This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn't have been possible today without D-Wave's lower noise processor."

Just as significant as the performance milestone, said D-Wave's team, is the fact that the quantum annealing processors were used to run a practical application, instead of a proof-of-concept or an engineered, synthetic problem with little real-world relevance. Until now, quantum methods have mostly been leveraged to prove that the technology has the potential to solve practical problems, and have yet to make a tangible mark in the real world.

Looking ahead to the future, long-time Slashdot reader schwit1 asks, "Is this bad news for encryption that depends on brute-force calculations being prohibitively difficult?"

Earth

Solar and Wind Are Reaching for the Last 90% of the US Power Market (bloomberg.com) 253

An anonymous reader shares a report: Three decades ago, the U.S. passed an infinitesimal milestone: solar and wind power generated one-tenth of one percent of the country's electricity. It took 18 years, until 2008, for solar and wind to reach 1% of U.S. electricity. It took 12 years for solar and wind to increase by another factor of 10. In 2020, wind and solar generated 10.5% of U.S. electricity. If this sounds a bit like a math exercise, that's because it is. Anything growing at a compounded rate of nearly 18%, as U.S. wind and solar have done for the past three decades, will double in four years, then double again four years after that, then again four years after that, and so on. It gets confusing to think in so many successive doublings, especially when they occur more than twice a decade. Better, then, to think in orders of magnitude -- powers of 10.
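
A quick check of the doubling arithmetic (a minimal sketch; the 18% growth rate and the 0.1%-to-10.5% endpoints are the article's figures):

```python
import math

# Doubling time at the article's ~18% compound annual growth rate.
growth = 0.18
doubling_years = math.log(2) / math.log(1 + growth)
print(f"doubling time: {doubling_years:.1f} years")   # ~4.2

# Growth rate implied by going from 0.1% of U.S. electricity in 1990
# to 10.5% in 2020, the article's two endpoints.
implied = (10.5 / 0.1) ** (1 / 30) - 1
print(f"implied annual growth: {implied:.1%}")        # ~16.8%
```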

There are a number of reasons why exponential consideration matters. The first is that U.S. power demand isn't growing, and hasn't since wind and solar reached that 1% milestone in the late 2000s. That means that the growth of wind and solar -- and that of natural gas-fired power -- have come entirely at the expense of coal-fired power. That replacement of coal with either natural gas (half the emissions of coal) or with wind and solar (zero emissions) is certainly an environmental achievement. Coupled with last year's massive drop in emissions, that power shift also makes it much easier for the U.S. to meet its Paris Agreement obligations.

Math

Machines Are Inventing New Math We've Never Seen (vice.com) 44

An anonymous reader quotes a report from Motherboard: [A] group of researchers from the Technion in Israel and Google in Tel Aviv presented an automated conjecturing system that they call the Ramanujan Machine, named after the mathematician Srinivasa Ramanujan, who developed thousands of innovative formulas in number theory with almost no formal training. The software system has already conjectured several original and important formulas for universal constants that show up in mathematics. The work was published last week in Nature.

One of the formulas created by the Machine can be used to compute the value of a universal constant called Catalan's constant more efficiently than any previous human-discovered formulas. But the Ramanujan Machine is imagined not to take over mathematics, so much as to provide a sort of feeding line for existing mathematicians. As the researchers explain in the paper, the entire discipline of mathematics can be broken down into two processes, crudely speaking: conjecturing things and proving things. Given more conjectures, there is more grist for the mill of the mathematical mind, more for mathematicians to prove and explain. That's not to say their system is unambitious. As the researchers put it, the Ramanujan Machine is "trying to replace the mathematical intuition of great mathematicians and providing leads to further mathematical research." In particular, the researchers' system produces conjectures for the value of universal constants (like pi), written in terms of elegant formulas called continued fractions. Continued fractions are essentially fractions, but more dizzying. The denominator in a continued fraction includes a sum of two terms, the second of which is itself a fraction, whose denominator itself contains a fraction, and so on, out to infinity.

The Ramanujan Machine is built off of two primary algorithms. These find continued fraction expressions that, with a high degree of confidence, seem to equal universal constants. That confidence is important, as otherwise, the conjectures would be easily discarded and provide little value. Each conjecture takes the form of an equation. The idea is that the quantity on the left side of the equals sign, a formula involving a universal constant, should be equal to the quantity on the right, a continued fraction. To get to these conjectures, the algorithm picks arbitrary universal constants for the left side and arbitrary continued fractions for the right, and then computes each side separately to a certain precision. If the two sides appear to align, the quantities are calculated to higher precision to make sure their alignment is not a coincidence of imprecision. Critically, formulas already exist to compute the value of universal constants like pi to an arbitrary precision, so that the only obstacle to verifying the sides match is computing time.
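
To make the "compute both sides and compare" step concrete, here is a minimal sketch using a classic, well-known identity (the continued fraction 1 + 1/(2 + 1/(2 + ...)) equals the square root of 2) rather than one of the Machine's own conjectures:

```python
from decimal import Decimal, getcontext

# Evaluate the continued fraction 1 + 1/(2 + 1/(2 + ...)) to a finite depth
# and check how closely it matches sqrt(2). The Ramanujan Machine automates
# this kind of numerical matching, over machine-generated candidate fractions
# and at much higher precision.
def continued_fraction(depth: int) -> Decimal:
    value = Decimal(2)
    for _ in range(depth):
        value = 2 + 1 / value
    return 1 + 1 / value

getcontext().prec = 50
target = Decimal(2).sqrt()
for depth in (5, 20, 60):
    print(depth, abs(continued_fraction(depth) - target))  # error shrinks with depth
```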

Math

Quixotic Californian Crusade To Officially Recognize the Hellabyte (theregister.com) 128

An anonymous reader quotes a report from The Register: In 2010, Austin Sendek, then a physics student at UC Davis, created a petition seeking recognition for prefix "hella-" as an official International System of Units (SI) measurement representing 10^27. "Northern California is home to many influential research institutions, including the University of California, Davis, the University of California, Berkeley, Stanford University, and the Lawrence Livermore and Lawrence Berkeley National Laboratories," he argued. "However, science isn't all that sets Northern California apart from the rest of the world. The area is also the only region in the world currently practicing widespread usage of the English slang 'hella,' which typically means 'very,' or can refer to a large quantity (e.g. 'there are hella stars out tonight')."

To this day, the SI describes prefixes for quantities up to 10^24. Those with that many bytes have a yottabyte. If you only have 10^21 bytes, you have a zettabyte. There's also exabyte (10^18), petabyte (10^15), terabyte (10^12), gigabyte (10^9), and so on. Support for "hella-" would allow you to talk about hellabytes of data, he argues, pointing out that the number of atoms in 12 kg of carbon-12 would be simplified from 600 yottaatoms to 0.6 hellaatoms. Similarly, the sun (mass of 2.2 hellatons) would release energy at 0.3 hellawatts, rather than 300 yottawatts. [...] The soonest [a proposal for a "hella-" SI prefix could be officially adopted] is in November 2022, at the quadrennial meeting of the International Bureau of Weights and Measures (BIPM)'s General Conference on Weights and Measures, where changes to the SI usually must be agreed upon.
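
A tiny sketch of how the proposed prefix would slot in next to the official ones (the "hella" entry is the petition's proposal, not official SI):

```python
# Selected SI prefixes as powers of ten, plus the proposed "hella-".
PREFIXES = {"tera": 12, "peta": 15, "exa": 18, "zetta": 21, "yotta": 24,
            "hella": 27}   # "hella" is the petition's proposal, not official SI

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Rescale a quantity from one prefix to another."""
    return value * 10 ** (PREFIXES[from_prefix] - PREFIXES[to_prefix])

print(convert(600, "yotta", "hella"))   # 600 yottaatoms -> 0.6 hellaatoms
print(convert(300, "yotta", "hella"))   # 300 yottawatts -> 0.3 hellawatts
```
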
The report notes that Google customized its search engine in 2010 to let you convert "bytes to hellabytes." A year later, Wolfram Alpha added support for "hella-" calculations.

"Sendek said 'hellabyte' initially started as a joke with some college friends but became a more genuine concern as he looked into how measurements get defined and as his proposal garnered support," reports The Register. He believes it could be useful for astronomical measurements.

AI

Calculations Show It'll Be Impossible To Control a Super-Intelligent AI (sciencealert.com) 194

schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know that for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not -- it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
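
Turing's diagonal argument can be sketched in a few lines of code: assume a perfect halting oracle exists (the function `would_halt` below is hypothetical, for illustration only) and build a program that does the opposite of whatever the oracle predicts about it.

```python
# Sketch of Turing's halting-problem argument, not code from the paper.
def would_halt(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no total, always-correct implementation can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if would_halt(program, program):
        while True:       # oracle says "halts" -> loop forever
            pass
    return                # oracle says "loops" -> halt immediately

# Asking would_halt(troublemaker, troublemaker) is contradictory either way,
# so no such oracle can exist, which is why the containment check described
# above cannot be decided in general.
```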

The alternative to teaching AI some ethics and telling it not to destroy the world -- something which no algorithm can be absolutely certain of doing, the researchers say -- is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence -- the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

AI

New XPrize Challenge: Predicting Covid-19's Spread and Prescribing Interventions (ieee.org) 22

Slashdot reader the_newsbeagle shares an article from IEEE Spectrum: Many associate XPrize with a $10-million award offered in 1996 to motivate a breakthrough in private space flight. But the organization has since held other competitions related to exploration, ecology, and education. And in November, they launched the Pandemic Response Challenge, which will culminate in a $500,000 award to be split between two teams that not only best predict the continuing global spread of COVID-19, but also prescribe policies to curtail it...

For Phase 1, teams had to submit prediction models by 22 December... Up to 50 teams will make it to Phase 2, where they must submit a prescription model... The top two teams will split half a million dollars. The competition may not end there. Amir Banifatemi, XPrize's chief innovation and growth officer, says a third phase might test models on vaccine deployment prescriptions. And beyond the contest, some cities or countries might put some of the Phase 2 or 3 models into practice, if Banifatemi can find adventurous takers.

The organizers expect a wide variety of solutions. Banifatemi says the field includes teams from AI strongholds such as Stanford, Microsoft, MIT, Oxford, and Quebec's Mila, but one team consists of three women in Tunisia. In all, 104 teams from 28 countries have registered. "We're hoping that this competition can be a springboard for developing solutions for other really big problems as well," Miikkulainen says. Those problems include pandemics, global warming, and challenges in business, education, and healthcare. In this scenario, "humans are still in charge," he emphasizes. "They still decide what they want, and AI gives them the best alternatives from which the decision-makers choose."

But Miikkulainen hopes that data science can help humanity find its way. "Maybe in the future, it's considered irresponsible not to use AI for making these policies," he says.

For the Covid-19 competition, Banifatemi emphasized that one goal was "to make the resulting insights available freely to everyone, in an open-source manner — especially for all those communities that may not have access to data and epidemiology divisions, statisticians, or data scientists."

Intel

Linus Torvalds Rails At Intel For 'Killing' the ECC Industry (theregister.com) 218

An anonymous reader quotes a report from The Register: Linux creator Linus Torvalds has accused Intel of preventing widespread use of error-correcting memory and being "instrumental in killing the whole ECC industry with its horribly bad market segmentation." ECC stands for error-correcting code. ECC memory uses additional parity bits to verify that the data read from memory is the same as the data that was written. Without this check, memory is vulnerable to occasional corruption where a bit is flipped spontaneously, for example, by background radiation. Memory can also be attacked using a technique called Rowhammer, where rapid repeated reads of the same memory locations can cause adjacent locations to change their state. ECC memory solves these problems and has been available for over 50 years yet most personal computers do not use it. Cost is a factor but what riles Torvalds is that Intel has made ECC support a feature of its Xeon range, aimed at servers and high-end workstations, and does not support it in other ranges such as the Core series.
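
As a rough illustration of how extra parity bits let memory detect and correct a single flipped bit, here is a generic Hamming(7,4) sketch (real ECC DIMMs use wider SECDED codes over 64-bit words, but the principle is the same):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single bit flip
# can be located and corrected. A generic illustration of the ECC idea.

def encode(data):                      # data: list of 4 bits
    code = [0] * 8                     # 1-indexed positions 1..7
    code[3], code[5], code[6], code[7] = data
    for p in (1, 2, 4):                # parity bit p covers positions with bit p set
        for i in range(1, 8):
            if i != p and (i & p):
                code[p] ^= code[i]
    return code[1:]

def correct(code):                     # code: 7 bits, at most one flipped
    bits = [0] + list(code)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= bits[i]
        if parity:
            syndrome += p              # syndrome = 1-based position of the flip
    if syndrome:
        bits[syndrome] ^= 1
    return [bits[3], bits[5], bits[6], bits[7]]

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                         # simulate a spontaneous bit flip
assert correct(stored) == word         # the original data is recovered
```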

The topic came up in a discussion about AMD's new Zen 3 Ryzen 9 5000 series processors on the Real World Tech forum site. AMD has semi-official ECC support in most of its processors. "I don't really see AMD's unofficial ECC support being a big deal," said an unwary contributor. "ECC absolutely matters," retorted Torvalds. "Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously. And if you don't believe me, then just look at multiple generations of rowhammer, where each time Intel and memory manufacturers bleated about how it's going to be fixed next time... And yes, that was -- again -- entirely about the misguided and arse-backwards policy of 'consumers don't need ECC', which made the market for ECC memory go away."

The accusation is significant particularly at a time when security issues are high on the agenda. The suggestion is that Intel's marketing decisions have held back adoption of a technology that makes users more secure -- though rowhammer is only one of many potential attack mechanisms -- as well as making PCs more stable. "The arguments against ECC were always complete and utter garbage. Now even the memory manufacturers are starting to do ECC internally because they finally owned up to the fact that they absolutely have to," said Torvalds. Torvalds said that Xeon prices deterred usage. "I used to look at the Xeon CPU's, and I could never really make the math work. The Intel math was basically that you get twice the CPU for five times the price. So for my personal workstations, I ended up using Intel consumer CPU's." Prices, he said, dropped last year "because of Ryzen and Threadripper... but it was a 'too little, much too late' situation." By way of mitigation, he added that "apart from their ECC stance I was perfectly happy with [Intel's] consumer offerings."

Programming

Study Finds Brain Activity of Coders Isn't Like Language or Math (boingboing.net) 88

"When you do computer programming, what sort of mental work are you doing?" asks science/tech journalist Clive Thompson: For a long time, folks have speculated on this. Since coding involves pondering hierarchies of symbols, maybe the mental work is kinda like writing or reading? Others have speculated it's more similar to the way our brains process math and puzzles. A group of MIT neuroscientists recently did fMRI brain-scans of young adults while they were solving a small coding challenge using a textual programming language (Python) and a visual one (Scratch Jr.). The results?

The brain activity wasn't similar to when we process language. Instead, coding seems to activate the "multiple demand network," which — as the scientists note in a public-relations writeup of their work — "is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles."

So, coding is more like doing math than processing language?

Sorrrrrrt of ... but not exactly so. The scientists saw activity patterns that differ from those you'd see during math, too.

The upshot: Coding — in this (very preliminary!) work, anyway — looks to be a little different from either language or math. As they note in a media release...

"Understanding computer code seems to be its own thing...."

Just anecdotally — having interviewed hundreds of coders and computer scientists for my book CODERS — I've met amazing programmers and computer scientists with all manner of intellectual makeups. There were math-heads, and there were people who practically counted on their fingers. There were programmers obsessed with — and eloquent in — language, and ones gently baffled by written and spoken communication. Lots of musicians, lots of folks who slid in via a love of art and visual design, then whose brains just seized excitedly on the mouthfeel of algorithms.

Math

The Lasting Lessons of John Conway's Game of Life 84

Siobhan Roberts, writing for The New York Times: In March of 1970, Martin Gardner opened a letter jammed with ideas for his Mathematical Games column in Scientific American. Sent by John Horton Conway, then a mathematician at the University of Cambridge, the letter ran 12 pages, typed hunt-and-peck style. Page 9 began with the heading "The game of life." It described an elegant mathematical model of computation -- a cellular automaton, a little machine, of sorts, with groups of cells that evolve from iteration to iteration, as a clock advances from one second to the next. Dr. Conway, who died in April, having spent the latter part of his career at Princeton, sometimes called Life a "no-player, never-ending game." Mr. Gardner called it a "fantastic solitaire pastime." The game was simple: Place any configuration of cells on a grid, then watch what transpires according to three rules that dictate how the system plays out.

Birth rule: An empty, or "dead," cell with precisely three "live" neighbors (full cells) becomes live.
Death rule: A live cell with zero or one neighbors dies of isolation; a live cell with four or more neighbors dies of overcrowding.
Survival rule: A live cell with two or three neighbors remains alive.
With each iteration, some cells live, some die and "Life-forms" evolve, one generation to the next. Among the first creatures to emerge was the glider -- a five-celled organism that moved across the grid with a diagonal wiggle and proved handy for transmitting information. It was discovered by a member of Dr. Conway's research team, Richard Guy, in Cambridge, England. The glider gun, producing a steady stream of gliders, was discovered soon after by Bill Gosper, then at the Massachusetts Institute of Technology.
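
Those three rules translate almost directly into code; here is a minimal sketch that steps the glider and confirms it reappears one cell over, diagonally, after four generations:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: dead cell with 3 neighbors. Survival: live cell with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})   # True: moved diagonally
```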

AI

AI Solves Schrodinger's Equation (phys.org) 67

An anonymous reader quotes a report from Phys.Org: A team of scientists at Freie Universitat Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrodinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrodinger equation, but in practice this is extremely difficult. Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universitat has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency.

The deep neural network designed by [the] team is a new way of representing the wave functions of electrons. "Instead of the standard approach of composing the wave function from relatively simple mathematical components, we designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei," [Professor Frank Noe, who led the team effort] explains. "One peculiar feature of electronic wave functions is their antisymmetry. When two electrons are exchanged, the wave function must change its sign. We had to build this property into the neural network architecture for the approach to work," adds [Dr. Jan Hermann of Freie Universitat Berlin, who designed the key features of the method in the study]. This feature, known as 'Pauli's exclusion principle,' is why the authors called their method 'PauliNet.' Besides the Pauli exclusion principle, electronic wave functions also have other fundamental physical properties, and much of the innovative success of PauliNet is that it integrates these properties into the deep neural network, rather than letting deep learning figure them out by just observing the data. "Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field," says Noe. "This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on."
The results were published in the journal Nature Chemistry.
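
The antisymmetry constraint Hermann describes can be illustrated with a toy construction (a generic sketch of the idea, not PauliNet's actual architecture): take any trial function of two electron positions and subtract its swapped version, so that exchanging the electrons flips the sign.

```python
import numpy as np

# Toy illustration of building antisymmetry (Pauli exclusion) into a model:
# for any trial function f(r1, r2), the combination f(r1, r2) - f(r2, r1)
# changes sign when the two electrons are exchanged.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

def f(r1, r2):
    # An arbitrary smooth, non-symmetric function of two 3D positions.
    return np.tanh(r1 @ W @ r2 + r1.sum())

def psi(r1, r2):
    # Antisymmetrized ansatz.
    return f(r1, r2) - f(r2, r1)

r1, r2 = rng.normal(size=3), rng.normal(size=3)
print(np.isclose(psi(r1, r2), -psi(r2, r1)))   # True: sign flips on exchange
print(np.isclose(psi(r1, r1), 0.0))            # True: identical positions give zero
```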
