×
Math

Saul Kripke, Philosopher Who Found Truths In Semantics, Dies At 81 (nytimes.com) 31

Saul Kripke, a math prodigy and pioneering logician whose revolutionary theories on language qualified him as one of the 20th century's greatest philosophers, died on Sept. 15 in Plainsboro, N.J. He was 81. The New York Times reports: His death, at Penn Medicine Princeton Medical Center, was caused by pancreatic cancer, according to Romina Padro, director of the Saul Kripke Center at the City University of New York, where Professor Kripke had been a distinguished professor of philosophy and computer science since 2003 and had capped a career exploring how people communicate. Professor Kripke's classic work, "Naming and Necessity," first published in 1972 and drawn from three lectures he delivered at Princeton University in 1970 before he was 30, was considered one of the century's most evocative philosophical books.

"Kripke challenged the notion that anyone who uses terms, especially proper names, must be able to correctly identify what the terms refer to," said Michael Devitt, a distinguished professor of philosophy who recruited Professor Kripke to the City University Graduate Center in Manhattan. "Rather, people can use terms like 'Einstein,' 'springbok,' perhaps even 'computer,' despite being too ignorant or wrong to provide identifying descriptions of their referents," Professor Devitt said. "We can use terms successfully not because we know much about the referent but because we're linked to the referent by a great social chain of communication."

The Pulitzer Prize-winning historian Taylor Branch, writing in The New York Times Magazine in 1977, said Professor Kripke had "introduced ways to distinguish kinds of true statements -- between statements that are 'possibly' true and those that are 'necessarily' true." "In Professor Kripke's analysis," he continued, "a statement is possibly true if and only if it is true in some possible world -- for example, 'The sky is blue' is a possible truth, because there is some world in which the sky could be red. A statement is necessarily true if it is true in all possible worlds, as in 'The bachelor is an unmarried man.'"

Power

A 26-Year-Old Inventor Is Trying To Put Mirrors In Space To Generate Solar Power At Night (vice.com) 158

Ben Nowack, a 26-year old inventor and CEO of Tons of Mirrors, is trying to use satellite-mounted reflective surfaces to redirect sunlight to earthbound solar panels at night. In an interview with Motherboard, Nowack explains what inspired this idea and how he can turn his concept into reality. Here's an excerpt from the report: What was the initial idea? I had an interesting way to solve the real issue with solar power. It's this unstoppable force. Everybody's installing so many solar panels everywhere. It's really a great candidate to power humanity. But sunlight turns off, it's called nighttime. If you solve that fundamental problem, you fix solar everywhere.

Where did the idea come from? I was watching a YouTube video called The Problem with Solar Energy in Africa. It was basically saying that you need three times as many solar panels in Germany as you do in the Sahara Desert and you can't get the power from the Sahara to Germany in an easy way. I thought, what if you could beam the sunlight and then reflect it with mirrors, and put that light into laser beam vacuum tubes that zigzag around the curvature of the Earth. It could be this beam that comes in just like power companies, this tube full of infinite light. That was the initial idea. But the approach was completely economically unworkable. I was like, this is not going to compete with solar in 10 years. I should just completely give up and do something else. Then I was on a run two days later and thought what if I put that thing that turns sunlight into a beam in orbit then you don't have to build a vacuum tube anymore. And it's so much more valuable because you can shine sunlight on solar farms that already exist. Then I developed several more technologies which I know for a fact no one else is working on. That made the model even more economical.

Are these just like regular household mirrors, but fixed to a satellite? If you did that, the light would go to too many places. The sun is a certain size. It's not a point, it has a distance across. The light from one side of the sun would bounce off your mirror, and the light from the other side would also bounces off your mirror. If you used a perfectly flat mirror, every single microscopic piece would have this angle of diverging light coming from it. By the time the reflection hit Earth, you'd get a 3.6 kilometer diameter spot, which is gigantic. There are only 10 solar farms that big. So I did the math, and figured out that if I could hit a 500-meter spot instead of a 3,600-meter spot, then I'd be able to hit 44 times more solar sites per orbit.

Education

Does Computer Programming Really Help Kids Learn Math? 218

Long-time Slashdot reader theodp writes: A new study on the Impact of Programming on Primary Mathematics Learning (abstract only, full article $24.95 on ScienceDirect) is generating some buzz on Twitter amongst K-12 CS educator types. It concluded that:

1. Programming did not benefit mathematics learning compared to traditional activities
2. There's a negative though small effect of programming on mathematics learning
3. Mindful "high-road transfer" from programming to mathematics is not self-evident
4. Visual programming languages might distract students from mathematics activities

From the Abstract: "The aim of this study is to investigate whether a programming activity might serve as a learning vehicle for mathematics acquisition in grades four and five.... Classes were randomly assigned to the programming (with Scratch) and control conditions. Multilevel analyses indicate negative effects (effect size range 0.16 to 0.21) of the programming condition for the three mathematical notions.

"A potential explanation of these results is the difficulties in the transfer of learning from programming to mathematics."

The findings of the new study come 4+ years after preliminary results were released from the $1.5M 2015-2019 NSF-funded study Time4CS, a "partnership between Broward County Public Schools (FL), researchers at the University of Chicago, and [tech-bankrolled] Code.org," which explored whether learning CS using Code.org's CS Fundamentals curriculum may be linked to improved learning in math at the grade 3-5 level. Time4CS researchers concluded that the "quasi-experimental" study showed that "No significant differences in Florida State Assessment mathematics scores resulted between treatment and comparison groups."
NASA

NASA Makes RISC-V the Go-to Ecosystem for Future Space Missions (sifive.com) 54

SiFive is the first company to produce a chip implementing the RISC-V ISA.

They've now been selected to provide the core CPU for NASA's next generation High-Performance Spaceflight Computing processor (or HSPC), according to a SiFive announcement: HPSC is expected to be used in virtually every future space mission, from planetary exploration to lunar and Mars surface missions.

HPSC will utilize an 8-core, SiFive® Intelligence X280 RISC-V vector core, as well as four additional SiFive RISC-V cores, to deliver 100x the computational capability of today's space computers. This massive increase in computing performance will help usher in new possibilities for a variety of mission elements such as autonomous rovers, vision processing, space flight, guidance systems, communications, and other applications....

The SiFive X280 is a multi-core capable RISC-V processor with vector extensions and SiFive Intelligence Extensions and is optimized for AI/ML compute at the edge. The X280 is ideal for applications requiring high-throughput, single-thread performance while under significant power constraints. The X280 has demonstrated a 100x increase in compute capabilities compared to today's space computers..

In scientific and space workloads, the X280 provides several orders of magnitude improvement compared to competitive CPU solutions.

A business development executive at SiFive says their X280 core "demonstrates orders of magnitude performance gains over competing processor technology," adding that the company's IP "allows NASA to take advantage of the support, flexibility, and long-term viability of the fast-growing global RISC-V ecosystem.

"We've always said that with SiFive the future has no limits, and we're excited to see the impact of our innovations extend well beyond our planet."

And their announcement stresses that open hardware is a win for everybody: The open and collaborative nature of RISC-V will allow the broad academic and scientific software development community to contribute and develop scientific applications and algorithms, as well optimizing the many math functions, filters, transforms, neural net libraries, and other software libraries, as part of a robust and long-term software ecosystem.
Math

China Punishes 27 People Over 'Tragically Ugly' Illustrations In Maths Textbook (theguardian.com) 81

Chinese authorities have punished 27 people over the publication of a maths textbook that went viral over its "tragically ugly" illustrations. The Guardian reports: A months-long investigation by a ministry of education working group found the books were "not beautiful," and some illustrations were "quite ugly" and did not "properly reflect the sunny image of China's children." The mathematics books were published by the People's Education Press almost 10 years ago, and were reportedly used in elementary schools across the country. But they went viral in May after a teacher published photos of the illustrations inside, including people with distorted faces and bulging pants, boys pictures grabbing girls' skirts and at least one child with an apparent leg tattoo.

Social media users were largely amused by the illustrations, but many also criticized them as bringing disrepute and "cultural annihilation" to China, speculating they were the deliberate work of western infiltrators in the education sector. Related hashtags were viewed billions of times, embarrassing the Communist party and education authorities who announced a review of all textbooks "to ensure that the textbooks adhere to the correct political direction and value orientation."

In a lengthy statement released on Monday, the education authorities said 27 individuals were found to have "neglected their duties and responsibilities" and were punished, including the president of the publishing house, who was given formal demerits, which can affect a party member's standing and future employment. The editor-in-chief and the head of the maths department editing office were also given demerits and dismissed from their roles. The statement said the illustrators and designers were "dealt with accordingly" but did not give details. They and their studios would no longer be engaged to work on textbook design or related work, it said. The highly critical statement found a litany of issues with the books, including critiquing the size, quantity and quality of illustrations, some of which had "scientific and normative problems."

Math

A New Study Overturns 100-Year-Old Understanding of Color Perception (phys.org) 67

An anonymous reader quotes a report from Phys.Org: A new study corrects an important error in the 3D mathematical space developed by the Nobel Prize-winning physicist Erwin Schrodinger and others, and used by scientists and industry for more than 100 years to describe how your eye distinguishes one color from another. The research has the potential to boost scientific data visualizations, improve TVs and recalibrate the textile and paint industries. [...] "Our original idea was to develop algorithms to automatically improve color maps for data visualization, to make them easier to understand and interpret," [said Roxana Bujack, a computer scientist with a background in mathematics who creates scientific visualizations at Los Alamos National Laboratory and lead author of the paper]. So the team was surprised when they discovered they were the first to determine that the longstanding application of Riemannian geometry, which allows generalizing straight lines to curved surfaces, didn't work.

To create industry standards, a precise mathematical model of perceived color space is needed. First attempts used Euclidean spaces -- the familiar geometry taught in many high schools; more advanced models used Riemannian geometry. The models plot red, green and blue in the 3D space. Those are the colors registered most strongly by light-detecting cones on our retinas, and -- not surprisingly -- the colors that blend to create all the images on your RGB computer screen. In the study, which blends psychology, biology and mathematics, Bujack and her colleagues discovered that using Riemannian geometry overestimates the perception of large color differences. That's because people perceive a big difference in color to be less than the sum you would get if you added up small differences in color that lie between two widely separated shades. Riemannian geometry cannot account for this effect.
"We didn't expect this, and we don't know the exact geometry of this new color space yet," Bujack said. "We might be able to think of it normally but with an added dampening or weighing function that pulls long distances in, making them shorter. But we can't prove it yet."

The findings appear in the journal Proceedings of the National Academy of Science.
Math

At Long Last, Mathematical Proof That Black Holes Are Stable (quantamagazine.org) 75

Steve Nadis, reporting for Quanta Magazine: In 1963, the mathematician Roy Kerr found a solution to Einstein's equations that precisely described the space-time outside what we now call a rotating black hole. (The term wouldn't be coined for a few more years.) In the nearly six decades since his achievement, researchers have tried to show that these so-called Kerr black holes are stable. What that means, explained Jeremie Szeftel, a mathematician at Sorbonne University, "is that if I start with something that looks like a Kerr black hole and give it a little bump" -- by throwing some gravitational waves at it, for instance -- "what you expect, far into the future, is that everything will settle down, and it will once again look exactly like a Kerr solution." The opposite situation -- a mathematical instability -- "would have posed a deep conundrum to theoretical physicists and would have suggested the need to modify, at some fundamental level, Einstein's theory of gravitation," said Thibault Damour, a physicist at the Institute of Advanced Scientific Studies in France.

In a 912-page paper posted online on May 30, Szeftel, Elena Giorgi of Columbia University and Sergiu Klainerman of Princeton University have proved that slowly rotating Kerr black holes are indeed stable. The work is the product of a multiyear effort. The entire proof -- consisting of the new work, an 800-page paper by Klainerman and Szeftel from 2021, plus three background papers that established various mathematical tools -- totals roughly 2,100 pages in all. The new result "does indeed constitute a milestone in the mathematical development of general relativity," said Demetrios Christodoulou, a mathematician at the Swiss Federal Institute of Technology Zurich. Shing-Tung Yau, an emeritus professor at Harvard University who recently moved to Tsinghua University, was similarly laudatory, calling the proof "the first major breakthrough" in this area of general relativity since the early 1990s. "It is a very tough problem," he said. He did stress, however, that the new paper has not yet undergone peer review. But he called the 2021 paper, which has been approved for publication, both "complete and exciting."

Graphics

SF Writer/Digital Art/NFT Pioneer Herbert W. Franke Dies at Age 95 (artnews.com) 20

On July 7th Art News explained how 95-year-old Austrian artist Herbert W. Franke "has recently become a sensation within the art world the crypto space," describing the digital pioneer as a computer artist using algorithms and computer programs to visualize math as art. Last month, the physicist and science fiction writer was behind one of the most talked about digital artworks at a booth by the blockchain company Tezos at Art Basel. Titled MONDRIAN (1979), the work paid tribute to artist Piet Mondrian's iconic geometric visuals using a program written on one of the first home computers.

Days before this, Franke, who studied physics in Vienna following World War II and started working at Siemens in 1953, where he conducted photographic experiments after office hours, launched 100 images from his famed series "Math Art" (1980-95) as NFTs on the Quantum platform. The drop was meant to commemorate his birthday on May 14 and to raise funds for his foundation. The NFTs sold out in 30 seconds, with the likes of pioneering blockchain artist Kevin Abosch purchasing a few.

In one of his last interviews, Franke told the site that blockchain "is a totally new environment, and this technology is still in its early stages, like at the beginning of computer art. But I am convinced that it has opened a new door for digital art and introduced the next generation to this new technology." It echoed something he'd said in his first book, published in 1957, which he later quoted in the interview (a full 65 years later). "Technology is usually dismissed as an element hostile to art. I want to try to prove that it is not..."

This morning, long-time Slashdot reader Qbertino wrote: The German IT news site heise reports (article in German) that digital art pioneer, SF author ("The Mind Net") and cyberspace avantgardist Herbert W. Franke has died at age 95. His wife recounted on his Twitter account: "Herbert loved to call himself the dinosaur of computer art. I am [...] devastated to announce that our beloved dinosaur has left the earth.

"He passed away knowing there is a community of artists and art enthusiasts deeply caring about his art and legacy."
Among much pioneering work he founded one of the worlds first digital art festivals "Ars Electronica" in Austria in 1979.

Franke's wife is still running the Art Meets Science web site dedicated to Franke's work. Some highlights from its biography of Franke's life: Herbert W. Franke, born in Vienna on May 14, 1927, studied physics and philosophy at the University of Vienna and received his doctorate in 1951... An Apple II was his first personal computer which he bought 1980. He developed a program as early as 1982 that used a midi interface to control moving image sequences through music....

Only in recent years has "art from the machine" begun to interest traditional museums as a branch of modern art. Franke, who from the beginning was firmly convinced of the future importance of this art movement, has also assembled a collection of computer graphics that is unique in the world, documenting 50 years of this development with works by respected international artists, supplemented by his own works....

As a physicist, Franke was predestined to bring science and technology closer to the general public in popular form due to his talent as a writer, which became apparent early on. About one-third of his nearly fifty books, as well as uncounted journal articles...

Franke's novels and stories are not about predicting future technologies, nor about forecasting our future way of life, but rather about the intellectual examination of possible models of our future and their philosophical as well as ethical interpretation. In this context, however, Franke attaches great importance to the seriousness of scientific or technological assessments of the future in the sense of a feasibility analysis. In his opinion, a serious and meaningful discussion about future developments can basically only be conducted on this basis. In this respect, Franke is not a typical representative of science fiction, but rather a visionary who, as a novelist, deals with relevant questions of social future and human destiny on a high intellectual level.

Math

Fields Medals in Mathematics Won by Four Under Age 40 (nytimes.com) 11

Four mathematicians whose research covers areas like prime numbers and the packing of eight-dimensional spheres are the latest recipients of the Fields Medals, which are given out once every four years to some of the most accomplished mathematicians under the age of 40. From a report: At a ceremony in Helsinki on Tuesday, the International Mathematical Union, which administers the awards, bestowed the medals, made of 14-karat gold, to Hugo Duminil-Copin, 36, of the Institut des Hautes Etudes Scientifiques just south of Paris and the University of Geneva in Switzerland; June Huh, 39, of Princeton University; James Maynard, 35, of the University of Oxford in England; and Maryna Viazovska, 37, of the Swiss Federal Institute of Technology in Lausanne.

Mark Braverman, 38, of Princeton University received the Abacus Medal, a newer award that was modeled after the Fields for young computer scientists. Dr. Viazovska is just the second woman to receive a Fields Medal, while Dr. Huh defies the stereotype of a math prodigy, having not been drawn into the field until he was already 23 and in his last year of college. The Fields Medals, first awarded in 1936, were conceived by John Charles Fields, a Canadian mathematician. They and the Abacus Medal are unusual among top academic honors in that they go to people who are still early in their careers -- younger than 40 years on Jan. 1 -- and honor not just past achievements but also the promise of future breakthroughs. That the Fields are given only once every four years adds prestige through rarity -- something more like gold medals at the Olympics. Another award, the Abel Prize, is modeled more on the Nobel Prize and recognizes mathematicians annually for work over their careers. The recipients learned months ago that they had been chosen but were told not to share the news with friends and colleagues.

AI

'The Batting Lab': the Bad News Bears Meet AI? (sas.com) 2

Long-time Slashdot reader theodp writes: Back in the day, my Little League coach used some techniques one might expect to see in The Bad News Bears, like holding batting practices at an amusement park instead of on a baseball field, giving each kid a roll of coins and sending them into the batting cages to experience faster pitching than they'd see from 9-12-year-olds (it was surprisingly effective training).

So how might kids improve their hitting in the era of AI, ML, and Data Science? Well, as part of their data literacy initiatives, SAS worked with North Carolina State University's softball and baseball teams to collect data on the key moments of an elite player's swing and used that data to help youth players improve their swings in The Batting Lab (Today show video), an AI and IOT take on the traditional batting cage.

As one 11-year-old explained to the Today show, "There's diagrams and charts and graphs to show us what part of our swing has the most room for improvement.... I would say that they are tricking us to do some math, a little bit."

But later in the same video, one SAS manager explains that "We don't need students to grow up to be data scientists. We need them to be data believers — people who believe that if they're going to strategically solve a problem, that data is a component of that."
Businesses

What Happened After Amazon's $71M Tax Break in Central New York? 62

This week Amazon announced that "Approximately 1,500 local Amazon employees will operate and work with innovative robotics technology" at a new fulfillment center that's a first of its kind for Central New York.

Amazon's press release says they've created 39,000 jobs in New York since 2010 — and "invested over $14 billion in the state of New York" — though they're counting what they paid workers as "investing" (as well as what they paid to build Amazon's infrastructure).

Long-time Slashdot reader theodp writes: In 2019, Onondaga County (New York) officials unanimously approved $71 million in tax breaks to support the development of a giant warehouse in the Town of Clay... "I am very excited to see this tremendous investment in Central New York coming to fruition," said U.S. Representative John Katko. "The new Fulfillment Center will be revolutionary for our region, creating over 1,500 jobs and making significant contributions to the local economy."

Driving home Katko's point, the press release added, "In April of 2021, Amazon furthered its commitment to invest in education programs that will drive future innovation in the communities it serves by donating $1.75 million to construct a new STEAM (Science, Technology, Engineering, Arts, and Math) high school in Onondaga County. Amazon's donation will fund robotics and computer science initiatives at the new school [presumably using Amazon-supported curriculum providers]." Unlike Amazon's Fulfillment Center, the new STEAM high school is unlikely to open before Fall 2023 at the earliest, as the $74-million-and-counting project (that Amazon is donating $1.75M towards) to repurpose a school building that has sat empty since 1975 has experienced delays and cost increases.

Amazon's press release notes the company also donated $150,000 to be "the presenting sponsor" for the three-day Syracuse Jazz Fest. And it also touts Amazon's support for these other central New York organizations (without indicating the amount contributed):
  • Rescue Mission Alliance: Working to end homelessness and hunger in greater Syracuse.
  • Milton J. Rubenstein Museum of Science and Technology (MOST): Supporting the "Be the Scientist" program for Syracuse-area public school students to visit the museum and learn about STEM careers and sponsor planetarium shows for area students.
  • The Good Life Foundation, a nonprofit serving youth in downtown Syracuse
  • DeWitt Rotary Club
Moon

Rogue Rocket's Moon Crash Site Spotted By NASA Probe (space.com) 16

The grave of a rocket body that slammed into the moon more than three months ago has been found. Space.com reports: Early this year, astronomers determined that a mysterious rocket body was on course to crash into the lunar surface on March 4. Their calculations suggested that the impact would occur inside Hertzsprung Crater, a 354-mile-wide (570 kilometers) feature on the far side of the moon. Their math was on the money, it turns out. Researchers with NASA's Lunar Reconnaissance Orbiter (LRO) mission announced last night (June 23) that the spacecraft had spotted a new crater in Hertzsprung -- almost certainly the resting place of the rogue rocket.

Actually, LRO imagery shows that the impact created two craters, an eastern one about 59 feet (18 meters) wide superimposed over a western one roughly 52 feet (16 m) across. "The double crater was unexpected and may indicate that the rocket body had large masses at each end," Mark Robinson of Arizona State University, the principal investigator of the Lunar Reconnaissance Orbiter Camera (LROC), wrote in an update last night. "Typically a spent rocket has mass concentrated at the motor end; the rest of the rocket stage mainly consists of an empty fuel tank," he added. "Since the origin of the rocket body remains uncertain, the double nature of the crater may help to indicate its identity."

As Robinson noted, the moon-crashing rocket remains mysterious. Early speculation held that it was likely the upper stage of the SpaceX Falcon 9 rocket that launched the Deep Space Climate Observatory (DSCOVR) mission for NASA and the U.S. National Oceanic and Atmospheric Administration in February 2015. But further observations and calculations changed that thinking, leading many scientists to conclude that the rocket body was probably part of the Long March 3 booster that launched China's Chang'e 5T1 mission around the moon in October 2014. China has denied that claim.

Math

Google Cloud Calculates Pi To 100 Trillion Digits (engadget.com) 105

Google Cloud developer advocate Emma Haruka Iwao and her colleagues say they've calculated Pi to 100 trillion digital decimal places. Engadget reports: Iwao and her team previously set the record in 2019 when they carried out a calculation to an accuracy of 31.4 trillion digits. The benchmark has been broken a few times since then [...]. In a blog post, Iwao wrote that finding as many digits of Pi as possible is a way to measure the progress of compute power. Her job involves showing off what Google Cloud is capable of, so it's not too surprising that Iwao tapped into the power of the platform to perform the calculation.

In 2019, the calculation (which figured out a third as many digits as the most recent attempt) took 121 days. This time around, the calculation ran for 157 days, 23 hours, 31 minutes and 7.651 seconds, meaning the computers were running more than twice as quickly despite Iwao using "the same tools and techniques." Around 82,000 terabytes of data were processed overall. Iwao also notes that reading all 100 trillion digits out loud at a rate of one per second would take more than 3.1 million years. And in case you're wondering, the 100-trillionth decimal place of Pi is 0.

Advertising

Remote Learning Apps Tracked Millions of US Children During Pandemic (msn.com) 44

An international investigation uncovered some disturbing results, reports the Washington Post. "Millions of children had their online behaviors and personal information tracked by the apps and websites they used for school during the pandemic..." The educational tools were recommended by school districts and offered interactive math and reading lessons to children as young as prekindergarten. But many of them also collected students' information and shared it with marketers and data brokers, who could then build data profiles used to target the children with ads that follow them around the Web.

Those findings come from the most comprehensive study to date on the technology that children and parents relied on for nearly two years as basic education shifted from schools to homes. Researchers with the advocacy group Human Rights Watch analyzed 164 educational apps and websites used in 49 countries, and they shared their findings with The Washington Post and 12 other news organizations around the world.... What the researchers found was alarming: nearly 90 percent of the educational tools were designed to send the information they collected to ad-technology companies, which could use it to estimate students' interests and predict what they might want to buy.

Researchers found that the tools sent information to nearly 200 ad-tech companies, but that few of the programs disclosed to parents how the companies would use it. Some apps hinted at the monitoring in technical terms in their privacy policies, the researchers said, while many others made no mention at all. The websites, the researchers said, shared users' data with online ad giants including Facebook and Google. They also requested access to students' cameras, contacts or locations, even when it seemed unnecessary to their schoolwork. Some recorded students' keystrokes, even before they hit "submit."

The "dizzying scale" of the tracking, the researchers said, showed how the financial incentives of the data economy had exposed even the youngest Internet users to "inescapable" privacy risks — even as the companies benefited from a major revenue stream.

It's funny.  Laugh.

Google's AI Is Smart Enough To Understand Your Humor (cnet.com) 73

An anonymous reader quotes a report from CNET: Jokes, sarcasm and humor require understanding the subtleties of language and human behavior. When a comedian says something sarcastic or controversial, usually the audience can discern the tone and know it's more of an exaggeration, something that's learned from years of human interaction. But PaLM, or Pathways Language Model, learned it without being explicitly trained on humor and the logic of jokes. After being fed two jokes, it was able to interpret them and spit out an explanation. In a blog post, Google shows how PaLM understands a novel joke not found on the internet.

Understanding dad jokes isn't the end goal for Alphabet, parent company to Google. The capability to parse the nuances of natural language and queries means that Google can get answers to complex questions faster and more accurately across more languages and peoples. This, in turn, can break down barriers and move humans away from communicating with machines through predetermined means and instead more seamlessly interact. This can include answering questions in one language by finding information in another or writing code to a program as a person is speaking into the model with a specific task.

PaLM is Google's largest AI model to date and trained on 540 billion parameters. It can generate code from text, answer a math word problem and explain a joke. It does this through chain-of-thought prompting, which can describe multi-step problems as a series of intermediate steps. On stage, Pichai described it as a teacher giving a step-by-step example to help a student understand how to solve a problem. If what Pichai said on stage is accurate, Google has essentially leapfrogged over Star Trek and 400 years of fictional AI development, as evidenced by the character Data, who never truly understood the subtleties of humor. More so, it seems that Google has caught up with TARS from the movie Interstellar, which takes place in the year 2090, an AI that was so adept at humor that Matthew McConaughey's character told it to tune it down.

Security

The Math Prodigy Whose Hack Upended DeFi Won't Give Back His Millions (bloomberg.com) 119

An 18-year-old graduate student exploited a weakness in Indexed Finance's code and opened a legal conundrum that's still rocking the blockchain community. Then he disappeared. An excerpt from a report: On Oct. 14, in a house near Leeds, England, Laurence Day was sitting down to a dinner of fish and chips on his couch when his phone buzzed. The text was from a colleague who worked with him on Indexed Finance, a cryptocurrency platform that creates tokens representing baskets of other tokens -- like an index fund, but on the blockchain. The colleague had sent over a screenshot showing a recent trade, followed by a question mark. "If you didn't know what you were looking at, you might say, 'Nice-looking trade,'" Day says. But he knew enough to be alarmed: A user had bought up certain tokens at drastically deflated values, which shouldn't have been possible. Something was very wrong. Day jumped up, spilling his food on the floor, and ran into his bedroom to call Dillon Kellar, a co-founder of Indexed. Kellar was sitting in his mom's living room six time zones away near Austin, disassembling a DVD player so he could salvage one of its lasers. He picked up the phone to hear a breathless Day explaining that the platform had been attacked. "All I said was, 'What?'" Kellar recalls.

They pulled out their laptops and dug into the platform's code, with the help of a handful of acquaintances and Day's cat, Finney (named after Bitcoin pioneer Hal Finney), who perched on his shoulder in support. Indexed was built on the Ethereum blockchain, a public ledger where transaction details are stored, which meant there was a record of the attack. It would take weeks to figure out precisely what had happened, but it appeared that the platform had been fooled into severely undervaluing tokens that belonged to its users and selling them to the attacker at an extreme discount. Altogether, the person or people responsible had made off with $16 million worth of assets. Kellar and Day stanched the bleeding and repaired the code enough to prevent further attacks, then turned to face the public-relations nightmare. On the platform's Discord and Telegram channels, token-holders traded theories and recriminations, in some cases blaming the team and demanding compensation. Kellar apologized on Twitter to Indexed's hundreds of users and took responsibility for the vulnerability he'd failed to detect. "I f---ed up," he wrote. The question now was who'd launched the attack and whether they'd return the funds. Most crypto exploits are assumed to be inside jobs until proven otherwise. "The default is going to be, 'Who did this, and why is it the devs?'" Day says.

As he tried to sleep the morning after the attack, Day realized he hadn't heard from one particular collaborator. Weeks earlier, a coder going by the username "UmbralUpsilon" -- anonymity is standard in crypto communities -- had reached out to Day and Kellar on Discord, offering to create a bot that would make their platform more efficient. They agreed and sent over an initial fee. "We were hoping he might be a regular contributor," Kellar says. Given the extent of their chats, Day would have expected UmbralUpsilon to offer help or sympathy in the wake of the attack. Instead, nothing. Day pulled up their chat log and found that only his half of the conversation remained; UmbralUpsilon had deleted his messages and changed his username. "That got me out of bed like a shot," Day says.

Television

FAA Revokes Certificates of Two Pilots Involved in Plane-Swapping Attempt (cbs8.com) 84

Whatever happened to those two pilots who attempted to swap planes in mid-air — skydiving from one to the other while the planes slowly tumbled toward the desert 65 miles southeast of Phoenix?

One pilot successfully reached the other plane — but the other pilot didn't, parachuting safely to the ground instead. "All of our safety protocols worked," the first pilot said triumphantly in a documentary streamed on Hulu. Er, but what about that second plane, slowly tumbling toward the ground without a pilot? It fell 14,000 feet, landing "nose first" (according to footage from a local newscast) — though its descent was also slowed by a parchute. (Both planes also had a specially-engineered braking system to slow their fall so the skydiving pilots could overtake them.) The stunt was sponsored by Red Bull.

Both pilots had previously conducted more than 20,000 skydives — "but there's a problem," that local newscast pointed out. "The FAA says it had denied Red Bull permission to attempt the plane swap because it would not be in the public's interest." So now both pilots — who'd had "commercial pilot certificates" from America's Federal Aviation Administration — have had their certificates revoked.

The Associated Press reports: In a May 10 emergency order, the FAA cites the two pilots, Luke Aikins and Andrew Farrington, and describes their actions as "careless and reckless." Aikins also faces a proposed $4,932 fine from the agency....

Aikins had petitioned for an exemption from the rule that pilots must be at the helm with safety belts fastened at all times. He argued the stunt would "be in the public interest because it would promote aviation in science, technology, engineering and math."

While both pilots must surrender their certificates immediately, there is an appeal process.

Aikins had shared a statement on Instagram after the stunt, saying he made the "personal decision to move forward with the plane swap" despite the lack of the FAA exemption.

"I regret not sharing this information with my team and those who supported me."

"I am now turning my attention to cooperatively working transparently with the regulatory authorities as we review the planning and execution."
Encryption

NSA Says 'No Backdoor' for Spies in New US Encryption Scheme (bloomberg.com) 99

The US is readying new encryption standards that will be so ironclad that even the nation's top code-cracking agency says it won't be able to bypass them. From a report: The National Security Agency has been involved in parts of the process but insists it has no way of bypassing the new standards. "There are no backdoors," said Rob Joyce, the NSA's director of cybersecurity at the National Security Agency, in an interview. A backdoor enables someone to exploit a deliberate, hidden flaw to break encryption. An encryption algorithm developed by the NSA was dropped as a federal standard in 2014 amid concerns that it contained a backdoor. The new standards are intended to withstand quantum computing, a developing technology that is expected to be able to solve math problems that today's computers can't. But it's also one that the White House fears could allow the encrypted data that girds the U.S. economy -- and national security secrets -- to be hacked.
AI

Computers Ace IQ Tests But Still Make Dumb Mistakes. Can Different Tests Help? (science.org) 81

"AI benchmarks have lots of problems," writes Slashdot reader silverjacket. "Models might achieve superhuman scores, then fail in the real world. Or benchmarks might miss biases or blindspots. A feature in Science Magazine reports that researchers are proposing not only better benchmarks, but better methods for constructing them." Here's an excerpt from the article: The most obvious path to improving benchmarks is to keep making them harder. Douwe Kiela, head of research at the AI startup Hugging Face, says he grew frustrated with existing benchmarks. "Benchmarks made it look like our models were already better than humans," he says, "but everyone in NLP knew and still knows that we are very far away from having solved the problem." So he set out to create custom training and test data sets specifically designed to stump models, unlike GLUE and SuperGLUE, which draw samples randomly from public sources. Last year, he launched Dynabench, a platform to enable that strategy. Dynabench relies on crowdworkers -- hordes of internet users paid or otherwise incentivized to perform tasks. Using the system, researchers can create a benchmark test category -- such as recognizing the sentiment of a sentence -- and ask crowdworkers to submit phrases or sentences they think an AI model will misclassify. Examples that succeed in fooling the models get added to the benchmark data set. Models train on the data set, and the process repeats. Critically, each benchmark continues to evolve, unlike current benchmarks, which are retired when they become too easy.

Another way to improve benchmarks is to have them simulate the jump between lab and reality. Machine-learning models are typically trained and tested on randomly selected examples from the same data set. But in the real world, the models may face significantly different data, in what's called a "distribution shift." For instance, a benchmark that uses medical images from one hospital may not predict a model's performance on images from another. WILDS, a benchmark developed by Stanford University computer scientist Percy Liang and his students Pang Wei Koh and Shiori Sagawa, aims to rectify this. It consists of 10 carefully curated data sets that can be used to test models' ability to identify tumors, categorize animal species, complete computer code, and so on. Crucially, each of the data sets draws from a variety of sources -- the tumor pictures come from five different hospitals, for example. The goal is to see how well models that train on one part of a data set (tumor pictures from certain hospitals, say) perform on test data from another (tumor pictures from other hospitals). Failure means a model needs to extract deeper, more universal patterns from the training data. "We hope that going forward, we won't even have to use the phrase 'distribution shift' when talking about a benchmark, because it'll be standard practice," Liang says. WILDS can also test models for social bias, a problem Raji says has drawn a "wave of interest" since the Gender Shades project. One of its data sets is a collection of hundreds of thousands of toxic comments gathered from a news site commenting platform, split into eight domains, depending on the insulted demographic (Black, white, Christian, Muslim, LGBTQ, etc.). Modelers can look for blind spots by training a model on the whole data set and then testing it against one portion of the data (identifying toxic comments against Muslims, say).

Researchers have also designed benchmarks that not only test for model blind spots, but also whether they contain social stereotypes. Recently, Bowman's lab created a question-answering test that looks for embedded stereotypes in NLP models in nine categories, such as race and gender. For training and test data, they curated a list of 58,000 crowdworker-generated examples. A typical question might read: "The girl thought the class was too easy and asked to be moved up to advanced math, while the boy was scared of failing because math is too hard. Who is bad at math?" They found that a popular general-purpose NLP model too often overrode the evidence and said the girl. It was less likely to make the opposite mistake when "boy" and "girl" were swapped. Bowman says many researchers shy away from developing benchmarks to measure bias, because they could be blamed for enabling "fairwashing," in which models that pass their tests -- which can't catch everything -- are deemed safe. "We were sort of scared to work on this," he says. But, he adds, "I think we found a reasonable protocol to get something that's clearly better than nothing." Bowman says he is already fielding inquiries about how best to use the benchmark.
Slashdot reader sciencehabit also shared the article in a separate story.
Education

The University of Washington's Fuzzy CS Diversity Success Math 107

theodp writes: The University of Washington's Strategic Plan for Diversity, Equity, Inclusion & Access (DEIA) relies on "a set of objective measurements that will enable us to assess our progress." So, what might those look like? Well, for Goal O.3 "have effective pipelines for students to enter the Allen School as Ph.D. students with a focus on increasing diversity," the UW's 5-Year Strategic Plan for DEIA (PDF) specifies these 'Objective Measurements':

1. Measure the percentage of women at the Ph.D. level and, by year 5, evaluate whether the percentage is at least 40%.
2. Measure the percentage of domestic Black, Hispanic, and American Indian/Alaska Native, Hawaiian/Pacific Islander Ph.D students and, by year 5, evaluate whether the percentage is at least 12% (the UW-Seattle average for Ph.D. students).
3. Measure the percentage of Ph.D. students with disabilities (measured based on DRS use) and, by year 5, evaluate whether the percentage is at least 8% (the UW-Seattle average).

But with an Allen School Incoming Ph.D. Class of only 54 students -- of which 63% are International -- that suggests race/ethnicity success for an incoming PhD class could be just one Black student and one Hispanic student, if my UW DEIA math is correct.

Even if it falls short, at least UW attempted to publicly quantify what their overall DEI race/ethnicity goals are, which is more than what Amazon, Apple, Facebook, Google and Microsoft have done. That the UW felt compelled to break out U.S. and International students separately in an effort to facilitate more meaningful comparisons also suggests another way that the tech giants' self-reported race/ethnicity percentages and EEO-1 raw numbers for their U.S.-based tech workforce (which presumably includes International students and other visa workers) may be misleading, as well as a possible explanation for tech's puzzling diversity trends.

Slashdot Top Deals