Math

Major Breakthrough In Quantum Computing Shows That MIP* = RE (arxiv.org) 28

Slashdot reader JoshuaZ writes:
In a major breakthrough in quantum computing it was shown that MIP* equals RE. MIP* is the set of problems that can be efficiently verified by a classical computer interacting with multiple quantum computers sharing any amount of entanglement. RE is the set of recursively enumerable problems: those for which a "yes" answer can be confirmed in finite time. Far from being limited to computable problems, RE includes undecidable ones such as the halting problem.

This result comes through years of steadily deepening understanding of interactive protocols, in which one entity, a verifier, has much less computing power than another set of entities, provers, who wish to convince the verifier of the truth of a claim. In 1990, a major result showed that a classical computer with a polynomial amount of time could be convinced of any claim in PSPACE by interacting with an arbitrarily powerful classical computer. Here PSPACE is the set of problems solvable by a classical computer with a polynomial amount of space. Subsequent results showed that a verifier able to interact with multiple provers could be convinced of a solution to any problem in NEXPTIME, a class conjectured to be much larger than PSPACE. For a while, it was believed that in the quantum case the set of problems might actually be smaller, since multiple quantum computers might be able to use their shared entangled qubits to "cheat" the verifier. This has turned out to be not just false but the exact opposite of the truth: MIP* is not only large, it is about as large as a computable class can naturally be.
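
For intuition about what an interactive proof looks like, here is a toy sketch in Python of the classic single-prover graph non-isomorphism protocol. It is vastly simpler than the entangled multi-prover protocols behind MIP* = RE (the graphs, the brute-force prover, and the round count are all illustrative choices, not anything from the paper), but it shows the basic shape: a weak verifier uses private randomness to check a claim it could not evaluate on its own.

```python
# Toy interactive proof: graph non-isomorphism with one all-powerful prover.
# The verifier secretly scrambles one of two graphs; a prover that can always
# tell which graph the scramble came from convinces the verifier the graphs
# are non-isomorphic. Illustrative only, not the MIP* protocol.
import itertools
import random

def permute(edges, perm):
    """Relabel an undirected edge set according to the permutation perm."""
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def isomorphic(ga, gb, n):
    """Brute-force isomorphism test, playing the 'all-powerful prover'."""
    target = frozenset(frozenset(e) for e in gb)
    return any(permute(ga, p) == target for p in itertools.permutations(range(n)))

def protocol_accepts(g0, g1, n, rounds=20):
    """Verifier checks the prover's claim that g0 and g1 are NOT isomorphic."""
    for _ in range(rounds):
        secret = random.randrange(2)              # verifier's private coin flip
        perm = list(range(n))
        random.shuffle(perm)
        challenge = permute((g0, g1)[secret], perm)
        # The prover answers which graph the challenge came from; if the two
        # graphs were secretly isomorphic, it could only guess.
        answer = 0 if isomorphic(challenge, g0, n) else 1
        if answer != secret:
            return False                          # prover caught guessing
    return True   # a false claim survives all rounds with probability ~2^-20

path = [(0, 1), (1, 2), (2, 3)]   # path on 4 vertices
star = [(0, 1), (0, 2), (0, 3)]   # star on 4 vertices: genuinely non-isomorphic
print(protocol_accepts(path, star, 4))   # True
```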

This result, while a very big deal from a theoretical standpoint, is unlikely to have any immediate applications, since it supposes quantum computers with arbitrarily large amounts of computational power and infinite amounts of entanglement.

The paper in question is a 165-page tour de force which, along the way, shows that the Connes embedding conjecture, a major 50-year-old conjecture from the theory of operator algebras, is false.

Education

Teaching Assistants Say They've Won Millions From UC Berkeley (vice.com) 72

The university underemployed more than 1,000 students -- primarily undergraduates in computer science and engineering -- in order to avoid paying union benefits, UAW Local 2865 says. From a report: The University of California at Berkeley owes student workers $5 million in back pay, a third-party arbitrator ruled on Monday, teaching assistants at the university say. More than 1,000 students -- primarily undergraduates in Berkeley's electrical engineering and computer science department -- are eligible for compensation, the United Auto Workers (UAW) Local 2865, which represents 19,000 student workers in the University of California system, told Motherboard. In some cases, individual students will receive around $7,500 per term, the union says. "This victory means that the university cannot get away with a transparent erosion of labor rights guaranteed under our contract," Nathan Kenshur, head steward of UAW Local 2865 and a third-year undergraduate math major at Berkeley, told Motherboard.

Thanks to their union contract, students working 10 hours a week or more at Berkeley are entitled to a full waiver of their in-state tuition fees, $150 in campus fees each semester, and childcare benefits. (Graduate students also receive free healthcare.) But in recent years, Berkeley has avoided paying for these benefits, according to UAW Local 2865. Instead, the university has hired hundreds of students as teaching assistants with appointments of less than 10 hours a week. On Monday, an arbitrator agreed upon by the UAW and the university ruled that Berkeley had intentionally avoided paying its student employees' benefits by hiring part-time workers. It ordered the university to pay the full tuition amount for students who worked these appointments between fall 2017 and today, a press release from the union says.

Transportation

Letting Slower Passengers Board Airplane First Really Is Faster, Study Finds (arstechnica.com) 166

According to physicist Jason Steffen, letting slower passengers board airplanes first actually results in a more efficient process and less time before takeoff. An anonymous reader shares a report from Ars Technica: Back in 2011, Jason Steffen, now a physicist at the University of Nevada, Las Vegas, became intrigued by the problem and applied the same optimization routine used to solve the famous traveling salesman problem to airline boarding strategies. Steffen fully expected that boarding from the back to the front would be the most efficient strategy and was surprised when his results showed that strategy was actually the least efficient. The most efficient, aka the "Steffen method," has the passengers board in a series of waves. "Adjacent passengers in line will be seated two rows apart from each other," Steffen wrote at The Conversation in 2014. "The first wave of passengers would be, in order, 30A, 28A, 26A, 24A, and so on, starting from the back."

Field tests bore out the results, showing that Steffen's method was almost twice as fast as boarding back-to-front or rotating blocks of rows and 20-30 percent faster than random boarding. The key is parallelism, according to Steffen: the ideal scenario is having more than one person sitting down at the same time. "The more parallel you can make the boarding process, the faster it will go," he told Ars. "It's not about structuring things as much as it is about finding the best way to facilitate multiple people sitting down at the same time." Steffen's original work used a standard agent-based model, with particles representing individual passengers. This latest study takes a different approach, modeling the boarding process using Lorentzian geometry -- the mathematical foundation of Einstein's general theory of relativity. Co-author Sveinung Erland of Western Norway University and colleagues from Latvia and Israel exploited the well-known connection between the microscopic dynamics of interacting particles and macroscopic properties and applied it to the boarding process. In this case, the microscopic interacting particles are the passengers waiting in line to board, and the macroscopic property is how long it takes all the passengers to settle into their assigned seats.
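
To make the parallelism point concrete, here is a small agent-based toy model in Python. It is a sketch of ours, not Steffen's published model or the Lorentzian-geometry treatment: a single aisle, no overtaking, and passengers who block the aisle for a fixed number of ticks while stowing luggage. Under these assumptions, orders that let many passengers stow simultaneously typically finish fastest, with Steffen-style waves beating random boarding and back-to-front coming in last.

```python
# Toy single-aisle boarding model: cell 0 is the door, cell r is row r,
# nobody can overtake, and a passenger occupies their aisle cell for STOW
# ticks while stowing luggage before sitting down. All numbers illustrative.
import random

ROWS, SEATS, STOW = 30, 6, 4

def board_time(order):
    """Ticks until every passenger in `order` (a list of target rows,
    first in line first) has taken a seat."""
    queue = list(order)
    aisle = {}                    # aisle cell -> [target_row, stow_ticks_left]
    t = 0
    while queue or aisle:
        t += 1
        for pos in sorted(aisle, reverse=True):    # move back-of-plane first
            target, left = aisle[pos]
            if pos == target:                      # at own row: stow, then sit
                if left == 1:
                    del aisle[pos]
                else:
                    aisle[pos][1] -= 1
            elif pos + 1 not in aisle:             # step toward the back
                aisle[pos + 1] = aisle.pop(pos)
        if queue and 0 not in aisle:               # next passenger enters
            aisle[0] = [queue.pop(0), STOW]
    return t

back_to_front = [r for r in range(ROWS, 0, -1) for _ in range(SEATS)]
steffen = []
for _ in range(SEATS):                 # waves of passengers two rows apart
    steffen += list(range(ROWS, 0, -2))
    steffen += list(range(ROWS - 1, 0, -2))
random_order = random.sample(back_to_front, len(back_to_front))

for name, order in [("back-to-front", back_to_front),
                    ("Steffen waves", steffen),
                    ("random", random_order)]:
    print(f"{name:>13}: {board_time(order)} ticks")
```
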
The paper has been published in the journal Physical Review E.
Math

'Why the Foundations of Physics Have Not Progressed For 40 Years' (iai.tv) 231

Sabine Hossenfelder, research fellow at the Frankfurt Institute for Advanced Studies, writes: What we have here in the foundation of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There's math for it, alright. Pretty math, even. But that doesn't mean this math describes reality. Physicists need new methods. Better methods. Methods that are appropriate to the present century. And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that's what we should focus on. I may be on the wrong track with this, of course.

Why don't physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it. You may want to put this down as a minor worry because -- the $40 billion collider aside -- who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax money for decades, alright, but in the large scheme of things it's not all that much money. I grant you that much. Theorists are not expensive. But even if you don't care what's up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It's an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community. Indeed, we see this beginning to happen in medicine and in ecology, too.

Printer

MIT Scientists Made a Shape-Shifting Material that Morphs Into a Human Face (arstechnica.com) 24

An anonymous reader quotes Ars Technica: The next big thing in 3D printing just might be so-called "4D materials" which employ the same manufacturing techniques, but are designed to deform over time in response to changes in the environment, like humidity and temperature. They're also sometimes known as active origami or shape-morphing systems. MIT scientists successfully created flat structures that can transform into much more complicated structures than had previously been achieved, including a human face. They published their results last fall in the Proceedings of the National Academy of Sciences...

MIT mechanical engineer Wim van Rees, a co-author of the PNAS paper, devised a theoretical method to turn a thin flat sheet into more complex shapes, like spheres, domes, or a human face. "My goal was to start with a complex 3-D shape that we want to achieve, like a human face, and then ask, 'How do we program a material so it gets there?'" he said. "That's a problem of inverse design..." van Rees and his colleagues decided to use a mesh-like lattice structure instead of the continuous sheet modeled in the initial simulations. They made the lattice out of a rubbery material that expands when the temperature increases. The gaps in the lattice make it easier for the material to adapt to especially large changes in its surface area. The MIT team used an image of [19th century mathematician Carl Friedrich] Gauss to create a virtual map of how much the flat surface would have to bend to reconfigure into a face. Then they devised an algorithm to translate that into the right pattern of ribs in the lattice.
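
As a rough flavor of the inverse problem (a deliberately crude first-order sketch, not the paper's method, which solves a much harder inverse metric problem): if a flat sheet must become a surface z = f(x, y), the local area has to grow by roughly sqrt(1 + fx^2 + fy^2), which hints at how strongly each region's ribs need to expand.

```python
# First-order sketch: map a target surface to the local growth each patch
# of a flat sheet would need. The Gaussian bump stands in for a "nose";
# the real design problem, solved in the PNAS paper, is far more subtle.
import math

def growth_factor(fx, fy):
    """Approximate area stretch needed where the surface has slope (fx, fy)."""
    return math.sqrt(1.0 + fx * fx + fy * fy)

def bump_slopes(x, y, height=0.5, width=0.2):
    """Analytic slopes of a Gaussian bump z = h * exp(-(x^2 + y^2) / w^2)."""
    z = height * math.exp(-(x * x + y * y) / width ** 2)
    return -2 * x / width ** 2 * z, -2 * y / width ** 2 * z

n = 5
for i in range(n):          # print a small growth map across the sheet
    row = []
    for j in range(n):
        x, y = -0.5 + i / (n - 1), -0.5 + j / (n - 1)
        row.append(f"{growth_factor(*bump_slopes(x, y)):.2f}")
    print(" ".join(row))
```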

They designed the ribs to grow at different rates across the mesh sheet, each one able to bend sufficiently to take on the shape of a nose or an eye socket. The printed lattice was cured in a hot oven, and then cooled to room temperature in a saltwater bath.

And voila! It morphed into a human face.

"The team also made a lattice containing conductive liquid metal that transformed into an active antenna, with a resonance frequency that changes as it deforms."
Math

Why Some Rope Knots Hold Better Than Others (scitechdaily.com) 45

A reader shares a report from SciTechDaily: MIT mathematicians and engineers have developed a mathematical model that predicts how stable a knot is, based on several key properties, including the number of crossings involved and the direction in which the rope segments twist as the knot is pulled tight. "These subtle differences between knots critically determine whether a knot is strong or not," says Jorn Dunkel, associate professor of mathematics at MIT. "With this model, you should be able to look at two knots that are almost identical, and be able to say which is the better one." "Empirical knowledge refined over centuries has crystallized out what the best knots are," adds Mathias Kolle, the Rockwell International Career Development Associate Professor at MIT. "And now the model shows why."
[...]
In comparing the diagrams of knots of various strengths, the researchers were able to identify general "counting rules," or characteristics that determine a knot's stability. Basically, a knot is stronger if it has more strand crossings, as well as more "twist fluctuations" -- changes in the direction of rotation from one strand segment to another. For instance, if a fiber segment is rotated to the left at one crossing and rotated to the right at a neighboring crossing as a knot is pulled tight, this creates a twist fluctuation and thus opposing friction, which adds stability to a knot. If, however, the segment is rotated in the same direction at two neighboring crossings, there is no twist fluctuation, and the strand is more likely to rotate and slip, producing a weaker knot. They also found that a knot can be made stronger if it has more "circulations," which they define as a region in a knot where two parallel strands loop against each other in opposite directions, like a circular flow.

By taking into account these simple counting rules, the team was able to explain why a reef knot, for instance, is stronger than a granny knot. While the two are almost identical, the reef knot has a higher number of twist fluctuations, making it a more stable configuration. Likewise, the zeppelin knot, because of its slightly higher circulations and twist fluctuations, is stronger, though possibly harder to untie, than the Alpine butterfly -- a knot that is commonly used in climbing.
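
The counting rules lend themselves to a toy calculation. The Python sketch below treats a knot diagram as nothing more than a list of crossing twist directions and counts the sign changes; the specific sequences for the granny and reef knots are illustrative stand-ins, not the paper's actual diagrams.

```python
# Toy "counting rule": model a knot as the sequence of twist directions
# (+1 / -1) met along the strand, and count twist fluctuations -- sign
# changes between neighboring crossings, which the model links to added
# friction and stability. The sequences below are illustrative only.

def twist_fluctuations(signs):
    """Count changes of twist direction between neighboring crossings."""
    return sum(a != b for a, b in zip(signs, signs[1:]))

# A granny knot stacks two half-knots of the same handedness; a reef
# (square) knot stacks two of opposite handedness.
granny = [+1, +1, +1, +1, +1, +1]
reef = [+1, +1, +1, -1, -1, -1]

for name, signs in [("granny", granny), ("reef", reef)]:
    print(f"{name:>6}: {twist_fluctuations(signs)} twist fluctuation(s)")
# More fluctuations means more opposing friction, matching the finding
# that the reef knot holds better than the nearly identical granny knot.
```
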
The findings have been published in the journal Science.
Math

A Computer Made From DNA Can Compute the Square Root of 900 (newscientist.com) 36

A computer made from strands of DNA in a test tube can calculate the square root of numbers up to 900. New Scientist reports: Chunlei Guo at the University of Rochester in New York state and colleagues developed a computer that uses 32 strands of DNA to store and process information. It can calculate the square root of square numbers 1, 4, 9, 16, 25 and so on up to 900. The DNA computer uses a process known as hybridization, which occurs when two strands of DNA attach together to form double-stranded DNA.

To start, the team encodes a number onto the DNA using a combination of ten building blocks. Each combination represents a different number up to 900, and is attached to a fluorescence marker. The team then controls hybridization in such a way that it changes the overall fluorescent signal so that it corresponds to the square root of the original number. The number can then be deduced from the color. The DNA computer could help to develop more complex computing circuits, says Guo. Guo believes DNA computers may one day replace traditional computers for complex computations.
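
A back-of-the-envelope way to see why ten building blocks suffice (treating the blocks as binary digits is an analogy of ours; the paper's chemical encoding works differently): 2^10 = 1024 >= 900, and every square root up to sqrt(900) = 30 fits in five output bits.

```python
# Sketch of the arithmetic behind the encoding, with the ten building
# blocks modeled as a 10-bit register. The hybridization chemistry that
# actually turns input strands into a fluorescent readout is not modeled.
import math

def encode(n, bits=10):
    """Represent n as a tuple of ten binary 'building blocks'."""
    return tuple((n >> i) & 1 for i in range(bits))

for n in [1, 4, 9, 16, 25, 100, 400, 900]:
    root = math.isqrt(n)
    print(f"n={n:>3}  blocks={encode(n)}  sqrt={root:>2} ({root:05b})")
```
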
The findings have been published in the journal Small.
Education

Microsoft Wants Schoolchildren Playing Minecraft To Learn Math (minecraft.net) 39

Long-time Slashdot reader theodp writes: A Microsoft blog post notes the company has lined up K-12 educators to sing the praises of Minecraft Education Edition at the Future of Education Technology Conference, where it'll also be pitching Microsoft Education in general. A 2019 Recap of Minecraft: Education Edition (and an accompanying video) highlight Microsoft's success in getting teachers to use Minecraft to teach subjects across the K-12 curriculum, not just Hour of Code tutorials. Microsoft's ambitions for Minecraft were tipped in a 2015 press release, which included the lofty claim that "Minecraft has the power to transform learning on a global scale...."

There are some teacher walkthrough videos available for review, like the unlisted one for Math Bed Wars!, a Common Core-aligned Minecraft-based lesson that teaches multiplication commutativity ("Students build arrays to show commutative properties of multiplication while constructing defenses as part of a Minecraft mini-game"). The lesson plan for Math Bed Wars! warns that children who fail to get enough hands-on Minecraft play time aren't likely to get much of a math education:

"While there is not much actually doing of math in the section of the lesson plan, it is by far the most important. It is in the game play where they get its meaning, and deeper thinking happens. For example, they will start thinking how to use math to build strategically. However, the most important part is what it does for the students' engagement across math. So please give them at least 30 minutes of game play, even if you have to break up the lesson into two days."

Is it okay for schools to make children play Microsoft Minecraft if the kids want to learn math and other subjects?

Education

How Classroom Technology is Holding Students Back (technologyreview.com) 87

Schools are increasingly adopting a "one-to-one" policy of giving each child a digital device -- often an iPad -- and most students in the U.S. now use digital learning tools in school. There's near-universal enthusiasm for technology on the part of educators. Unfortunately, the evidence is equivocal at best. Some studies have found positive effects, at least from moderate amounts of computer use, especially in math. But much of the data shows a negative impact. It looks like the most vulnerable students can be harmed the most by a heavy dose of technology -- or, at best, not helped. Why are these devices so unhelpful for learning? Various explanations have been offered. When students read text from a screen, they absorb less information than when they read it on paper, for example. But there are deeper reasons, too. Unless we pay attention to these, we risk embedding a deeper digital divide.
AI

Facebook Has a Neural Network That Can Do Advanced Math (technologyreview.com) 36

Guillaume Lample and Francois Charton, at Facebook AI Research in Paris, say they have developed an algorithm that can calculate integrals and solve differential equations. MIT Technology Review reports: Neural networks have become hugely accomplished at pattern-recognition tasks such as face and object recognition, certain kinds of natural language processing, and even playing games like chess, Go, and Space Invaders. But despite much effort, nobody has been able to train them to do symbolic reasoning tasks such as those involved in mathematics. The best that neural networks have achieved is the addition and multiplication of whole numbers. For neural networks and humans alike, one of the difficulties with advanced mathematical expressions is the shorthand they rely on. For example, the expression x^3 is a shorthand way of writing x multiplied by x multiplied by x. In this example, "multiplication" is shorthand for repeated addition, which is itself shorthand for the total value of two quantities combined.

Enter Lample and Charton, who have come up with an elegant way to unpack mathematical shorthand into its fundamental units. They then teach a neural network to recognize the patterns of mathematical manipulation that are equivalent to integration and differentiation. Finally, they let the neural network loose on expressions it has never seen and compare the results with the answers derived by conventional solvers like Mathematica and Matlab. The first part of this process is to break down mathematical expressions into their component parts. Lample and Charton do this by representing expressions as tree-like structures. The leaves on these trees are numbers, constants, and variables like x; the internal nodes are operators like addition, multiplication, differentiate-with-respect-to, and so on. [...] Trees are equal when they are mathematically equivalent. For example, 2 + 3 = 5 = 12 - 7 = 1 x 5 are all equivalent; therefore their trees are equivalent too. These trees can also be written as sequences, taking each node consecutively. In this form, they are ripe for processing by a neural network approach called seq2seq.
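
A minimal sketch of that encoding in Python (the token names and the example expression are ours, not Lample and Charton's): operators sit at internal nodes, numbers and variables at the leaves, and reading the tree in prefix order yields the flat token sequence a seq2seq model consumes.

```python
# Expression trees serialized to prefix token sequences for seq2seq input.

def to_prefix(node):
    """Flatten an expression tree -- (op, child, ...) tuples or leaves --
    into a prefix-order token list."""
    if isinstance(node, tuple):
        op, *children = node
        return [op] + [tok for child in children for tok in to_prefix(child)]
    return [str(node)]

# d/dx of (x^3 + 2x) as a tree: diff(add(pow(x, 3), mul(2, x)), x)
expr = ("diff", ("add", ("pow", "x", 3), ("mul", 2, "x")), "x")
print(to_prefix(expr))
# ['diff', 'add', 'pow', 'x', '3', 'mul', '2', 'x', 'x']
```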

The next stage is the training process, and this requires a huge database of examples to learn from. Lample and Charton create this database by randomly assembling mathematical expressions from a library of binary operators such as addition, multiplication, and so on; unary operators such as cos, sin, and exp; and a set of variables, integers, and constants, such as π and e. They also limit the number of internal nodes to keep the equations from becoming too big. [...] Finally, Lample and Charton put their neural network through its paces by feeding it 5,000 expressions it has never seen before and comparing the results it produces in 500 cases with those from commercially available solvers, such as Maple, Matlab, and Mathematica. The comparisons between these and the neural-network approach are revealing. "On all tasks, we observe that our model significantly outperforms Mathematica," say the researchers. "On function integration, our model obtains close to 100% accuracy, while Mathematica barely reaches 85%." And the Maple and Matlab packages perform less well than Mathematica on average.
The paper, called "Deep Learning For Symbolic Mathematics," can be found on arXiv.
Programming

Tony Brooker, Pioneer of Computer Programming, Dies At 94 (nytimes.com) 26

Cade Metz from The New York Times pays tribute to Tony Brooker, the mathematician and computer scientist who designed the programming language for the world's first commercial computer. Brooker died on Nov. 20 at the age of 94. From the report: Mr. Brooker had been immersed in early computer research at the University of Cambridge when one day, on his way home from a mountain-climbing trip in North Wales, he stopped at the University of Manchester to tour its computer lab, which was among the first of its kind. Dropping in unannounced, he introduced himself to Alan Turing, a founding father of the computer age, who at the time was the lab's deputy director. When Mr. Brooker described his own research at the University of Cambridge, he later recalled, Mr. Turing said, "Well, we can always employ someone like you." Soon they were colleagues.

Mr. Brooker joined the Manchester lab in October 1951, just after it installed a new machine called the Ferranti Mark 1. His job, he told the British Library in an interview in 2010, was to make the Mark 1 "usable." Mr. Turing had written a user's manual, but it was far from intuitive. To program the machine, engineers had to write in binary code -- patterns made up of 0s and 1s -- and they had to write them backward, from right to left, because this was the way the hardware read them. It was "extremely neat and very clever but pretty meaningless and very unfriendly," Mr. Brooker said. In the months that followed, Mr. Brooker wrote a language he called Autocode, based on ordinary numbers and letters. It allowed anyone to program the machine -- not just the limited group of trained engineers who understood the hardware. This marked the beginning of what were later called "high-level" programming languages -- languages that provide increasingly simple and intuitive ways of giving commands to computers, from the IBM mainframes of the 1960s to the PCs of the 1980s to the iPhones of today.

Math

Mathematician Proves Huge Result on 'Dangerous' Problem (quantamagazine.org) 167

Mathematicians regard the Collatz conjecture as a quagmire and warn each other to stay away. But now Terence Tao has made more progress than anyone in decades. From a report: It's a siren song, they say: Fall under its trance and you may never do meaningful work again. The Collatz conjecture is quite possibly the simplest unsolved problem in mathematics -- which is exactly what makes it so treacherously alluring. "This is a really dangerous problem. People become obsessed with it and it really is impossible," said Jeffrey Lagarias, a mathematician at the University of Michigan and an expert on the Collatz conjecture. Earlier this year one of the top mathematicians in the world dared to confront the problem -- and came away with one of the most significant results on the Collatz conjecture in decades. On September 8, Terence Tao posted a proof showing that -- at the very least -- the Collatz conjecture is "almost" true for "almost" all numbers. While Tao's result is not a full proof of the conjecture, it is a major advance on a problem that doesn't give up its secrets easily. "I wasn't expecting to solve this problem completely," said Tao, a mathematician at the University of California, Los Angeles. "But what I did was more than I expected."

Lothar Collatz likely posed the eponymous conjecture in the 1930s. The problem sounds like a party trick. Pick a number, any number. If it's odd, multiply it by 3 and add 1. If it's even, divide it by 2. Now you have a new number. Apply the same rules to the new number. The conjecture is about what happens as you keep repeating the process. Intuition might suggest that the number you start with affects the number you end up with. Maybe some numbers eventually spiral all the way down to 1. Maybe others go marching off to infinity. But Collatz predicted that's not the case. He conjectured that if you start with a positive whole number and run this process long enough, all starting values will lead to 1. And once you hit 1, the rules of the Collatz conjecture confine you to a loop: 1, 4, 2, 1, 4, 2, 1, on and on forever.
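
The rule is two lines of code. A direct sketch in Python of the process described above:

```python
# Iterate the Collatz rule (3n + 1 for odd n, n / 2 for even n) down to 1.

def collatz_path(n):
    """Return the full Collatz trajectory of a positive integer n."""
    path = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        path.append(n)
    return path

print(collatz_path(6))                  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
print(len(collatz_path(27)) - 1)        # 111 steps for the famously long n = 27
```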

Math

Mathematicians Catch a Pattern By Figuring Out How To Avoid It (quantamagazine.org) 26

We finally know how big a set of numbers can get before it has to contain a pattern known as a "polynomial progression." From a report: A new proof by Sarah Peluse of the University of Oxford establishes that one particularly important type of numerical sequence is, ultimately, unavoidable: It's guaranteed to show up in every single sufficiently large collection of numbers, regardless of how the numbers are chosen. "There's a sort of indestructibility to these patterns," said Terence Tao of the University of California, Los Angeles. Peluse's proof concerns sequences of numbers called "polynomial progressions." They are easy to generate -- you could create one yourself in short order -- and they touch on the interplay between addition and multiplication among the numbers. For several decades, mathematicians have known that when a collection, or set, of numbers is small (meaning it contains relatively few numbers), the set might not contain any polynomial progressions. They also knew that as a set grows it eventually crosses a threshold, after which it has so many numbers that one of these patterns has to be there, somewhere. It's like a bowl of alphabet soup -- the more letters you have, the more likely it is that the bowl will contain words. But prior to Peluse's work, mathematicians didn't know what that critical threshold was. Her proof provides an answer -- a precise formula for determining how big a set needs to be in order to guarantee that it contains certain polynomial progressions. Previously, mathematicians had only a vague understanding that polynomial progressions are embedded among the whole numbers (1, 2, 3 and so on). Now they know exactly how to find them.
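
For a concrete sense of the object in question: one of the simplest polynomial progressions is the pattern x, x + y, x + y^2 (Peluse's theorem covers patterns x + P1(y), ..., x + Pk(y) for polynomials of distinct degrees). The brute-force Python sketch below merely hunts for that pattern in a given set; the size threshold itself comes from her formula, which is not reproduced here.

```python
# Brute-force search for the polynomial progression x, x + y, x + y^2.

def find_progression(numbers, y_limit=100):
    """Return (x, y) with x, x + y, x + y**2 all in the set, or None."""
    s = set(numbers)
    for x in sorted(s):
        for y in range(1, y_limit):
            if x + y in s and x + y * y in s:
                return x, y
    return None

evens = range(0, 200, 2)
print(find_progression(evens))   # (0, 2): the pattern 0, 2, 4 is all even
```
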
Graphics

Ask Slashdot: How Much Faster Is an ASIC Than a Programmable GPU? 63

dryriver writes: When you run a real-time video processing algorithm on a GPU, you notice that some math functions execute very quickly on the GPU and some math functions take up a lot more processing time or cycles, slowing down the algorithm. If you were to implement that exact GPU algorithm as a dedicated ASIC hardware chip or perhaps on a beefy FPGA, what kind of speedup -- if any -- could you expect over a midrange GPU like a GTX 1070? Would hardwiring the same math operations as ASIC circuitry lead to a massive execution time speedup as some people claim -- e.g. 5x or 10x faster than a general purpose Nvidia GPU -- or are GPUs and ASICs close to each other in execution speed?

Bonus question: Is there a way to calculate the speed of an algorithm implemented as an ASIC chip without having an actual physical ASIC chip produced? Could you port the algorithm to, say, Verilog or similar languages and then use a software tool to calculate or predict how fast it would run if implemented as an ASIC with certain properties (clock speed, core count, manufacturing process... )?
Education

How Texas Instruments Monopolized Math Class (medium.com) 220

Texas Instruments' $100 calculators have been required in classrooms for more than twenty years, as students and teachers still struggle to afford them. From a report: Texas Instruments released its first graphing calculator, the TI-81, to the public in 1990. Designed for use in pre-algebra and algebra courses, it was superseded by other Texas Instruments models with varying shades of complexity but these calculators remained virtually untouched aesthetically. Today, Texas Instruments still sells a dozen or so different calculator models intended for different kinds of students, ranging from the TI-73 and TI-73 Explorer for middle school classes to the TI-Nspire CX and TI-Nspire CX CAS ($149), an almost smartphone-like calculator with more processing power. But the most popular calculators, teachers tell me, include the TI-83 Plus ($94), launched in 1999; the TI-84 Plus ($118), launched in 2004; the very similar TI-84 Plus Silver Edition, also launched in 2004; and the TI-89 Titanium ($128).

Thompson (anecdote in the story), like many teachers, works in a district where it's a financial impossibility to ask students and their parents to shell out $100 for a new calculator. (Graphing calculators of any brand are recommended at Thompson's school, and they are essential for the curriculum.) So the onus falls on him and other teachers, who rely on their teacher salaries -- Thompson makes $62,000 a year -- to fill in the gaps. At first, Thompson bought cheaper calculators: four-function, $3 calculators. This, he quickly realized, would be insufficient. "A lot of students were angry and actually left the class and went to the classroom of the more experienced teacher next to me and asked to borrow her calculators," he told me. The bulky, rectangular Texas Instruments calculators act more like mini-handheld computers than basic calculators, plotting graphs and solving complex functions. Seeing expressions, formulas, and graphs on-screen is integral for students in geometry, calculus, physics, statistics, business, and finance classes. They provide students access to more advanced features, letting them do all the calculations of a scientific calculator, as well as graph equations and make function tables. Giving a child a four-function calculator -- allowing for only addition, subtraction, multiplication, and division -- would leave them woefully underprepared for the requirements of more advanced math and science classes.
Further reading: This is the Story of the 1970s Great Calculator Race.
Software

Finland Has an App Showing Shopping's True Carbon Footprint (bloomberg.com) 57

An anonymous reader quotes a report from Bloomberg: Unlike other carbon-footprint calculators already on the market, the application developed by Enfuce Financial Services Oy, a Finnish payment services provider, does not rely on users inputting the data manually. Instead, it combines data from credit cards and banks with purchase data from retailers to provide real-time calculations of how a given product affects the climate. With an estimated 70% of carbon emissions globally attributed to end users, Enfuce chairman and co-founder Monika Liikamaa says the app will help people adapt their lifestyles and make them compatible with the goal of keeping global warming within 1.5 degrees Celsius.

After the initial setup and opt-in, the app will calculate a carbon footprint based on the user's purchases -- down to the level of individual steaks or tomatoes. It will then propose actions to reduce their carbon impact. Typical suggestions may include taking a shorter shower, hopping on the bus instead of the car, turning down the thermostat and going vegan for a week. The app is a side project for Enfuce, which already handles sensitive payments data securely. Its core business is running credit card systems in the cloud for clients, sparing them from owning expensive computer servers. Enfuce is in talks with three major banks and is already working with Mastercard Inc. and Amazon.com Inc.'s cloud-server unit. No vendor will have exclusive rights to the system, which should be available by March, the company said.
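
At its core, the calculation such an app performs is a join between a purchase feed and a table of per-item emission factors. A minimal Python sketch (the factors below are invented placeholders, not Enfuce's data):

```python
# Join purchases against per-item emission factors. Numbers are placeholders.
EMISSION_FACTORS = {      # kg CO2e per item, illustrative values only
    "steak": 7.7,
    "tomato": 0.2,
    "bus ticket": 0.5,
}

def footprint(purchases):
    """Total footprint of a list of (item, quantity) purchase records."""
    return sum(EMISSION_FACTORS.get(item, 0.0) * qty for item, qty in purchases)

print(footprint([("steak", 2), ("tomato", 6), ("bus ticket", 1)]))  # ~17.1 kg CO2e
```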

Math

US Workers Show Little Improvement In 21st Century Skills 88

A new government agency report found that U.S. workers are failing to improve the skills needed to succeed in an increasingly global economy. Bloomberg reports: The National Center for Education Statistics asked 3,300 respondents ages 16 to 65 to read simple passages and solve basic math problems. What the researchers found is that literacy, numeracy and digital problem-solving ability in the U.S. have stagnated over the past few years. Some 19% of the test-takers ranked at the lowest of three levels for literacy and 24% lacked basic digital problem-solving abilities. Meanwhile, a shocking 29% performed at the lowest level for numeracy, the same as findings from the previous study conducted in 2012-2014. Almost one in three couldn't correctly answer "how much gas is in a 24-gallon tank if the gas gauge reads three-quarters full."
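
(For the record: three-quarters of 24 gallons is 24 x 3/4 = 18 gallons.)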

There were a few bright spots in the research. Latino adults saw their overall scores improve in both literacy and digital problem solving. Some 35% ranked at the highest of three levels for the latter, up from 24% during the 2012-2014 survey period. In addition, high school graduation rates climbed 2 percentage points to 14%, while the percentage of people with more than a high school diploma jumped 3 percentage points to 48%.
AI

AI Cracks Centuries-Old 'Three Body Problem' In Under a Second (livescience.com) 146

Long-time Slashdot reader taiwanjohn shared this article from Live Science: The mind-bending calculations required to predict how three heavenly bodies orbit each other have baffled physicists since the time of Sir Isaac Newton. Now artificial intelligence (A.I.) has shown that it can solve the problem in a fraction of the time required by previous approaches.

Newton was the first to formulate the problem in the 17th century, but finding a simple way to solve it has proved incredibly difficult. The gravitational interactions between three celestial objects like planets, stars and moons result in a chaotic system -- one that is complex and highly sensitive to the starting positions of each body. Current approaches to solving these problems involve using software that can take weeks or even months to complete calculations. So researchers decided to see if a neural network -- a type of pattern recognizing A.I. that loosely mimics how the brain works -- could do better.
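
For a sense of what the brute-force side of that comparison looks like, here is a naive direct integrator in Python (symplectic Euler with G = 1 and unit masses, started from the well-known figure-eight orbit). It is a crude stand-in for Brutus, which performs the same kind of stepping to far higher precision:

```python
# Naive direct integrator for the planar three-body problem.

def accelerations(pos):
    """Pairwise Newtonian gravity on three unit-mass bodies (G = 1)."""
    acc = [[0.0, 0.0] for _ in range(3)]
    for i in range(3):
        for j in range(3):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
    return acc

# Chenciner-Montgomery figure-eight initial conditions.
pos = [[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]]
vel = [[0.466203685, 0.43236573], [0.466203685, 0.43236573],
       [-0.93240737, -0.86473146]]

dt, steps = 0.001, 6326            # roughly one period of the orbit
for _ in range(steps):
    acc = accelerations(pos)
    vel = [[vx + ax * dt, vy + ay * dt]
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [[x + vx * dt, y + vy * dt]
           for (x, y), (vx, vy) in zip(pos, vel)]
print([[round(c, 3) for c in p] for p in pos])
# For generic (chaotic) initial conditions, trustworthy answers require far
# smaller steps and arbitrary-precision arithmetic -- hence the weeks-long
# runtimes that the neural network sidesteps.
```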

The algorithm they built provided accurate solutions up to 100 million times faster than the most advanced software program, known as Brutus. That could prove invaluable to astronomers trying to understand things like the behavior of star clusters and the broader evolution of the universe, said Chris Foley, a biostatistician at the University of Cambridge and co-author of a paper posted to the arXiv database, which has yet to be peer-reviewed.

Transportation

14-Year-Old Inventor Solves Blind Spots (gizmodo.com) 125

Alaina Gassler, a 14-year-old inventor from West Grove, Pennsylvania, came up with a clever way to eliminate the blind spot created by the thick pillars on the side of a car's windshield. All it requires is some relatively inexpensive tech readily available at an electronics store. Gizmodo reports: Her solution involves installing an outward-facing webcam on the outside of a vehicle's windshield pillar, and then projecting a live feed from that camera onto the inside of that pillar. Custom 3D-printed parts allowed her to perfectly align the projected image so that it seamlessly blends with what a driver sees through the passenger window and the windshield, essentially making the pillar invisible.

Gassler's invention isn't quite ready to be installed in vehicles across the country just yet, but the technologies already exist that would allow it to be implemented in cars without serving as a distraction to the driver. Short-throw projectors could be installed at the base of the passenger side windshield pillar to create the image without having to worry about the passenger blocking the beam. And many cars have already replaced side mirrors with cameras, or include nearly invisible cameras in the rear for safely backing up, so adding one more on the side of the pillar would presumably be trivial.
Gassler won the top prize for her invention at this year's Society for Science and the Public's Broadcom MASTERS (Math, Applied Science, Technology, and Engineering for Rising Stars) science and engineering competition.
Education

The World's Top Economists Just Made the Case For Why We Still Need English Majors (washingtonpost.com) 169

An anonymous reader writes: A great migration is happening on U.S. college campuses. Ever since the fall of 2008, a lot of students have walked out of English and humanities lectures and into STEM classes, especially computer science and engineering. English majors are down more than a quarter (25.5 percent) since the Great Recession, according to data compiled by the National Center for Education Statistics. It's the biggest drop for any major tracked by the center in its annual data and is quite startling, given that college enrollment has jumped in the past decade. Ask any college student or professor why this big shift from studying Chaucer to studying coding is happening and they will probably tell you it's about jobs. As students feared for their job prospects, they -- and their parents -- wanted a degree that would lead to a steady paycheck after graduation. The perception is that STEM (science, technology, engineering and math) is the path to employment. Majors in computer science and health fields have nearly doubled from 2009 to 2017. Engineering and math have also seen big jumps. As humanities majors slump to the lowest level in decades, calls are coming from surprising places for a revival. Some prominent economists are making the case for why it still makes a lot of sense to major (or at least take classes) in humanities alongside more technical fields.
