AI

'Modern AI is Good at a Few Things But Bad at Everything Else' (wired.com) 200

Jason Pontin, writing for Wired: Sundar Pichai, the chief executive of Google, has said that AI "is more profound than ... electricity or fire." Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." Their enthusiasm is pardonable.

[...] But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you've seen it, you can't un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. [...] Deep learning's advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren't classification problems at all.
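Pontin's "memorize classes of things and recognize them again" framing can be made concrete with the simplest possible classifier, a nearest-centroid model. This is a toy stand-in, not deep learning itself -- deep networks learn vastly richer features -- but the task shape is the same: fit a summary of each class, then label new inputs by similarity.

```python
def train_centroids(examples):
    """examples: dict mapping class label -> list of feature vectors.
    'Memorize' each class as the average of its examples."""
    centroids = {}
    for label, vectors in examples.items():
        dim = len(vectors[0])
        centroids[label] = [sum(v[i] for v in vectors) / len(vectors)
                            for i in range(dim)]
    return centroids

def classify(centroids, point):
    """Label a new point with the class whose centroid is nearest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Hypothetical 2-D features for two classes.
data = {"cat": [[1.0, 1.1], [0.9, 1.0]], "dog": [[5.0, 5.2], [5.1, 4.9]]}
model = train_centroids(data)
```

Note that nothing here resembles reasoning: the model can only say which memorized pattern a new input most resembles, which is exactly the limitation the article describes.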

AI

Nearly Three-Quarters of Adults in US Believe AI Will Eliminate More Jobs Than It Will Create -- and They Want Companies To Pay For the Retraining (gallup.com) 331

Key findings from a Gallup poll: Nearly three-quarters of adults (73%) say an increased use of AI will eliminate more jobs than it creates (PDF). Results are consistent across most demographic groups. However, those with blue-collar jobs are particularly pessimistic, with 82% saying the transition will result in a net job loss, compared with 71% of those with white-collar jobs.

Nearly half of Americans (49%) say "soft" skills, such as teamwork, communication, creativity and critical thinking, are the most important for U.S. workers to cultivate to avoid losing their jobs to AI. Alternatively, 51% say "hard" skills, including math, science, coding and the ability to work with data, are the most important to maintain a job in the face of new technology adoption.

When asked to choose among seven options concerning who should pay for retraining, a clear majority of U.S. adults overall (61%) say employers should fund these programs. The federal government comes in second at 50%.

China

This Chinese Math Problem Has No Answer. Perhaps, It Has a Lot of Them. (washingtonpost.com) 443

Fifth-graders in China's Shunqing district were recently asked to answer this question: "If a ship had 26 sheep and 10 goats on board, how old is the ship's captain?" The Washington Post: The apparently unsolvable question sparked a debate over the merits of the Chinese education system and the value it places on the memorization of information over the importance of developing critical thinking skills. "Some surveys show that primary school students in our country lack a sense of critical awareness in regard to mathematics," a statement by the Shunqing Education Department posted Jan. 26 reportedly said. One student offered a pragmatic law-abiding answer: "The captain is at least 18 because he has to be an adult to drive the ship." Meanwhile on Twitter, some have gone with 42, a reference to the science fiction novel "The Hitchhiker's Guide to the Galaxy," by Douglas Adams, in which 42 is the "Answer to the Ultimate Question of Life, The Universe, and Everything." BBC: "If a school had 26 teachers, 10 of which weren't thinking, how old is the principal?" another asked. Some, however, defended the school -- which has not been named -- saying the question promoted critical thinking. "The whole point of it is to make the students think. It's done that," one person commented. "This question forces children to explain their thinking and gives them space to be creative. We should have more questions like this," another said.
Math

Has the Decades-Old Floating Point Error Problem Been Solved? (insidehpc.com) 174

overheardinpdx quotes HPCwire: Wednesday a company called Bounded Floating Point announced a "breakthrough patent in processor design, which allows representation of real numbers accurate to the last digit for the first time in computer history. This bounded floating point system is a game changer for the computing industry, particularly for computationally intensive functions such as weather prediction, GPS, and autonomous vehicles," said the inventor, Alan Jorgensen, PhD. "By using this system, it is possible to guarantee that the display of floating point values is accurate to plus or minus one in the last digit..."

The innovative bounded floating point system computes two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate, the result is so marked, as are all further calculations made using that value. It is fail-safe and performs in real time.
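The bound-carrying idea is close to classical interval arithmetic, which can be sketched in a few lines. This is a toy, not Jorgensen's design: real interval arithmetic must also round each bound outward (directed rounding), which this sketch omits, so its intervals are illustrative rather than guaranteed to contain the true value.

```python
class Bounded:
    """Carry a lower and upper limit through each operation, so the
    final interval's width shows how much the result can be trusted."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Bounded(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # subtraction swaps which endpoint bounds which side
        return Bounded(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Bounded(min(products), max(products))

    def width(self):
        return self.hi - self.lo

# Two slightly uncertain inputs; the uncertainty widens through the chain.
a = Bounded(1.0, 1.0 + 1e-9)
b = Bounded(2.0, 2.0 + 1e-9)
c = a * b + a
```

When `c.width()` grows past a chosen tolerance, a bounded system would mark the value -- and everything computed from it -- as no longer accurate.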

Jorgensen is described as a cyber bounty hunter and part-time instructor at the University of Nevada, Las Vegas, teaching computer science to non-computer science students. In November he received US Patent number 9,817,662 -- "Apparatus for calculating and retaining a bound on error during floating point operations and methods thereof." But in a follow-up, HPCwire reports: After this article was published, a number of readers raised concerns about the originality of Jorgensen's techniques, noting the existence of prior art going back years. Specifically, there is precedent in John Gustafson's work on unums and interval arithmetic, both at Sun and in his 2015 book, The End of Error, which was published 19 months before Jorgensen's patent application was filed. We regret the omission of this information from the original article.
Math

Largest Prime Number Discovered – With More Than 23m Digits (mersenne.org) 117

chalsall writes: Persistence pays off. Jonathan Pace, a GIMPS volunteer for over 14 years, discovered the 50th known Mersenne prime, 2^77,232,917 - 1, on December 26, 2017. The prime number is calculated by multiplying together 77,232,917 twos, and then subtracting one. It weighs in at 23,249,425 digits, becoming the largest prime number known to mankind. It bests the previous record prime, also discovered by GIMPS, by 910,807 digits. You can read a little more in the press release.
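The digit count is easy to verify without computing the number itself: 2^77,232,917 has floor(77,232,917 * log10(2)) + 1 decimal digits, and subtracting 1 cannot change the count, since the number is not a power of 10. A quick check:

```python
import math

# Number of decimal digits of 2^77,232,917 (and of the prime 2^77,232,917 - 1)
exponent = 77_232_917
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)  # 23249425, matching the announced 23,249,425 digits
```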
Transportation

Math Says You're Driving Wrong and It's Slowing Us All Down (wired.com) 404

A new study in IEEE Transactions on Intelligent Transportation Systems mathematically suggests that if you and everyone else on the road kept an equal distance between the cars ahead and behind, traffic would move twice as quickly. From a report: Now sure, you're probably not going to convince everyone on the road to do that. Still, the finding could be a simple yet powerful way to optimize semi-autonomous cars long before the fully self-driving car of tomorrow arrives. Traffic is perhaps the world's most infuriating example of what's known as an emergent property. Meaning: lots of individual things combining to create something more complex. Emergent properties are usually quite astounding. You've probably seen video of starlings forming a murmuration, a great shifting blob of thousands upon thousands of birds. Bats flying en masse out of a cave is another example, swarming sometimes by the millions through a small exit. And scientists are just beginning to understand how they do so.
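The "equal distance ahead and behind" rule can be illustrated with a toy simulation (my simplification, not the paper's model): cars on a ring road drift toward the midpoint of their two neighbors each step, and the bunched-up spacing evens out on its own.

```python
import statistics

def simulate(positions, road_len=100.0, k=0.25, steps=200):
    """Toy bilateral update: each car drifts toward the midpoint of the
    cars ahead of and behind it on a circular road of length road_len."""
    pos = list(positions)
    n = len(pos)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            gap_ahead = (pos[(i + 1) % n] - pos[i]) % road_len
            gap_behind = (pos[i] - pos[(i - 1) % n]) % road_len
            nxt.append((pos[i] + k * (gap_ahead - gap_behind) / 2) % road_len)
        pos = nxt
    return pos

def gap_spread(positions, road_len=100.0):
    """Standard deviation of the gaps between successive cars."""
    n = len(positions)
    gaps = [(positions[(i + 1) % n] - positions[i]) % road_len
            for i in range(n)]
    return statistics.pstdev(gaps)

start = [0.0, 5.0, 15.0, 40.0, 70.0]   # bunched-up traffic
settled = simulate(start)               # spacing converges toward uniform
```

This only shows the spacing equalizing; the study's throughput claim rests on a fuller traffic-flow model.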
Math

How Pirates Of The Caribbean Hijacked America's Metric System (npr.org) 440

If the United States were more like the rest of the world, a McDonald's Quarter Pounder might be known as the McDonald's 113-Grammer, John Henry's 9-pound hammer would be 4.08 kilograms, and any 800-pound gorillas in the room would likely weigh 362 kilos. NPR explores: One reason this country never adopted the metric system might be pirates. Here's what happened: In 1793, the brand new United States of America needed a standard measuring system because the states were using a hodgepodge of systems. "For example, in New York, they were using Dutch systems, and in New England, they were using English systems," says Keith Martin, of the research library at the National Institute of Standards and Technology. This made interstate commerce difficult. The secretary of state at the time was Thomas Jefferson. Jefferson knew about a new French system and thought it was just what America needed. He wrote to his pals in France, and the French sent a scientist named Joseph Dombey off to Jefferson carrying a small copper cylinder with a little handle on top. It was about 3 inches tall and about as wide. This object was intended to be a standard for weighing things, part of a weights and measures system being developed in France, now known as the metric system. The object's weight was 1 kilogram. Crossing the Atlantic, Dombey ran into a giant storm. "It blew his ship quite far south into the Caribbean Sea," says Martin. And you know who was lurking in Caribbean waters in the late 1700s? Pirates.
AI

CMU Researchers Reveal How Their AI Beat The World's Top Poker Players (triblive.com) 36

2017 began with an AI named "Libratus" defeating four of the world's best poker players. Now the AI's creators reveal how exactly they did it. An anonymous reader quotes the Pittsburgh Tribune-Review: First, the AI made the game easier to understand. There are 10^161 potential outcomes in a game of poker -- that's a one followed by 161 zeros. Libratus grouped similar hands, like a King-high flush and a Queen-high flush, and similar bet sizes to cut down that number. Libratus then created a detailed strategy for how it would play the early rounds of the game and a less-refined strategy for the final rounds. As a game neared its end, Libratus refined that second strategy based on how the game had gone.
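The abstraction step described above can be sketched in a couple of functions. These bucket sizes and thresholds are illustrative assumptions, not Libratus's actual abstraction; the point is only how mapping similar bets and similar hands onto a few representatives shrinks the game tree.

```python
def abstract_bet(bet_fraction, ladder=(0.5, 1.0, 2.0)):
    """Snap a bet (as a fraction of the pot) to the nearest allowed
    size, so the solver only ever reasons about a few bet sizes."""
    return min(ladder, key=lambda b: abs(b - bet_fraction))

def bucket_hand(equity, n_buckets=10):
    """Group hands by estimated win probability: a King-high flush and a
    Queen-high flush usually land in the same high-equity bucket."""
    return min(int(equity * n_buckets), n_buckets - 1)
```

The smaller abstract game is then solved directly, and its strategy is translated back to the full game.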

A third strategy was at work as well. In real-time, Libratus created another model based on how its play stacked up against the play of the humans. If the humans did something unexpected to Libratus, the AI accounted for it and built it into the strategy. Instead of trying to exploit weaknesses in the play of the human, Libratus focused on improving its play.

The AI was created by a computer science professor at Carnegie Mellon University and his Ph.D. student, who argue in a new paper that "The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including non-recreational applications."

"Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI."
Space

Astronomers Have Come Up With a Better Way To Weigh Millions of Solitary Stars (vanderbilt.edu) 43

Science_afficionado writes: By measuring the flicker pattern of light from distant stars, astronomers have developed a new and improved method for measuring the masses of millions of solitary stars, especially those hosting exoplanets. Stevenson Professor of Physics and Astronomy Keivan Stassun says, "First, we use the total light from the star and its parallax to infer its diameter. Next, we analyze the way in which the light from the star flickers, which provides us with a measure of its surface gravity. Then we combine the two to get the star's total mass." Stassun and his colleagues describe the method and demonstrate its accuracy using 675 stars of known mass in an article titled "Empirical, accurate masses and radii of single stars with TESS and GAIA" accepted for publication in the Astronomical Journal.

David Salisbury via Vanderbilt University explains the other methods of determining the mass of distant stars, and why they aren't always the most accurate: "Traditionally, the most accurate method for determining the mass of distant stars is to measure the orbits of double star systems, called binaries. Newton's laws of motion allow astronomers to calculate the masses of both stars by measuring their orbits with considerable accuracy. However, fewer than half of the star systems in the galaxy are binaries, and binaries make up only about one-fifth of red dwarf stars that have become prized hunting grounds for exoplanets, so astronomers have come up with a variety of other methods for estimating the masses of solitary stars. The photometric method that classifies stars by color and brightness is the most general, but it isn't very accurate. Asteroseismology, which measures light fluctuations caused by sound pulses that travel through a star's interior, is highly accurate but only works on several thousand of the closest, brightest stars." Stassun says his method "can measure the mass of a large number of stars with an accuracy of 10 to 25 percent," which is "far more accurate than is possible with other available methods, and importantly it can be applied to solitary stars so we aren't limited to binaries."
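The three-step recipe Stassun describes reduces to Newtonian gravity: surface gravity is g = GM/R^2, so once the flicker yields g and brightness-plus-parallax yields the radius R, the mass follows as M = g * R^2 / G. A quick sanity check with rough solar values (my figures, not from the article):

```python
# M = g * R^2 / G, checked against the Sun
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
g_sun = 274.0    # solar surface gravity, m/s^2
r_sun = 6.957e8  # solar radius, m

mass = g_sun * r_sun ** 2 / G
print(mass)      # close to 1.99e30 kg, the Sun's mass
```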
Bitcoin

'Bitcoin Could Cost Us Our Clean-Energy Future' (grist.org) 468

An anonymous reader shares an article: Bitcoin wasn't intended to be an investment instrument. Its creators envisioned it as a replacement for money itself -- a decentralized, secure, anonymous method for transferring value between people. But what they might not have accounted for is how much of an energy suck the computer network behind bitcoin could one day become. Simply put, bitcoin is slowing the effort to achieve a rapid transition away from fossil fuels. What's more, this is just the beginning. Given its rapidly growing climate footprint, bitcoin is a malignant development, and it's getting worse. Digital financial transactions come with a real-world price: The tremendous growth of cryptocurrencies has created an exponential demand for computing power. As bitcoin grows, the math problems computers must solve to make more bitcoin (a process called "mining") get more and more difficult -- a wrinkle designed to control the currency's supply. Today, each bitcoin transaction requires the same amount of energy used to power nine homes in the U.S. for one day. And miners are constantly installing more and faster computers. Already, the aggregate computing power of the bitcoin network is nearly 100,000 times larger than the world's 500 fastest supercomputers combined. The total energy use of this web of hardware is huge -- an estimated 31 terawatt-hours per year. More than 150 individual countries in the world consume less energy annually. And that power-hungry network is currently increasing its energy use every day by about 450 gigawatt-hours, roughly the same amount of electricity the entire country of Haiti uses in a year.
Bitcoin

Tesla Owners Are Mining Bitcoins With Free Power From Charging Stations (vice.com) 141

dmoberhaus writes: Someone claimed to be using their Tesla to power a cryptocurrency mine, taking advantage of the free electricity offered to some Tesla owners. But even with free energy, does this scheme make sense? Motherboard ran the numbers.

From the report: "...If we assume that each of the GPUs in this rig draws around 150 watts, then the 16 GPUs have a total power draw of 2.4 kilowatts, or 57.6 kilowatt-hours per day if they ran for a full 24 hours. According to Green Car Reports, a Tesla Model S gets about 3 miles per kilowatt-hour, meaning that running this mining rig for a full day is the equivalent of driving nearly 173 miles in the Tesla. According to the Federal Highway Administration, the average American drives around 260 miles a week. In other words, running this cryptocurrency mine out of the trunk of a Tesla for a day and a half would use as much energy as driving that Tesla for a full week, on average. Moreover, drivers who are not a part of Tesla's unlimited free energy program are limited to 400 kilowatt-hours of free electricity per year, meaning they could only run their rig for a little over 7 days on free energy.

Okay, but how about the cost? Let's assume that this person is mining Ethereum with their GPUs. Out of the box, an average GPU can do about 20 megahashes per second on the Ethereum network (that is, performing a math problem known as hashing 20 million times per second). This Tesla rig, then, would have a total hashrate of about 320 megahashes. According to the Cryptocompare profitability calculator, if the Tesla rig was used to mine Ethereum using free electricity, it would result in about .05 Ether per day -- equivalent to nearly $23, going by current prices at the time of writing. In a month, this would result in $675 in profit, or about the monthly lease for a Tesla Model S. So the Tesla would pay for itself, assuming the owner never drove it or used it for anything other than mining Ethereum, Ethereum doesn't drop in value below $450, and the Tesla owner gets all of their energy for free."
Motherboard also notes that this conclusion "doesn't take into account the price of each of the mining rigs, which likely cost about $1,000 each, depending on the quality of the GPUs used." TL;DR: Mining cryptocurrency out of your electric car is not worth it.
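Motherboard's back-of-the-envelope numbers are easy to reproduce. The per-GPU wattage, mining rate, and prices were stated assumptions in the original report, not measurements:

```python
# Power and driving-range equivalence
gpu_watts = 150
n_gpus = 16
power_kw = gpu_watts * n_gpus / 1000          # 2.4 kW total draw
kwh_per_day = power_kw * 24                   # 57.6 kWh per day
miles_equiv = kwh_per_day * 3                 # ~173 miles at 3 mi/kWh

# Revenue at the article's assumed rates
eth_per_day = 0.05
eth_price_usd = 450                           # price at time of writing
usd_per_month = eth_per_day * eth_price_usd * 30   # ~$675/month

# Non-grandfathered owners get only 400 kWh of free charging per year
days_of_free_mining = 400 / kwh_per_day       # about 7 days
```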
Sci-Fi

Destiny 2 Misrepresented XP Gains To Its Players Until the Developers Got Caught (arstechnica.com) 112

An anonymous reader quotes a report from Ars Technica: Destiny 2, like its predecessor, depends largely on an open-ended "end game" system. Once you beat the game's primary "quest" content, you can return to previously covered ground to find remixed and upgraded battles, meant to be played ad nauseam alone or with friends. To encourage such replay, Bungie dangles a carrot of XP gain, which works more slowly than during the campaign stages. Players are awarded a "bright engram" every time they "level up" past the level cap; the engrams are essentially loot boxes that contain a random assortment of cosmetics and weapon mods. Everything you do in the game, from killing a weak bad guy to completing a major raid-related milestone, is supposed to reward you a fixed XP amount. As series fans gear up for the game's first expansion, slated to launch December 5 on PC, PlayStation 4, and Xbox One, its eagle-eyed fans at r/DestinyTheGame began questioning whether those rewards were really as fixed as claimed. Some players began to suspect that they were actually getting less XP than advertised each time they repeated certain in-game missions and tasks, such as the game's "Public Events."

With stopwatch in hand, a user named EnergiserX tracked the modes he played, keeping an eye on any shifts in XP gain over time. He put enough data together to confirm those suspicions: the XP gained in certain modes would shrink with each repetition. Worse, the game gave no indication of these diminishing returns. The XP-gain numbers that popped up above the game's XP bar didn't reflect the game's hidden scaling system. Thus, there was no way for a player to accurately calculate how their XP gain had been affected or scaled without going through EnergiserX's exhaustive process. With findings in hand, the tester posted on Reddit with calls to the developers for a response, which the community received on Saturday. Bungie confirmed its use of an "XP scaler" and added that it was "not performing the way we'd like it to," which meant the developer would remove that XP-scaling system upon the game's next patch. However, Bungie didn't clarify how it actually would have liked the XP-scaling system to work, nor what, beyond the system simply being discovered, prompted it to announce any changes.
Bungie issued a patch on Sunday that removed the XP-scaling systems, but it introduced another unannounced change to the XP system. "Bungie decided to tune the speed of XP gain by doubling the required XP needed to 'level up,' from 80,000 points to 160,000," reports Ars Technica. "Patch notes didn't mention this change; Bungie, once again, had to be questioned by its fanbase before confirming the exact amount of this XP-related change."
Math

Devs Working To Stop Go Math Error Bugging Crypto Software (theregister.co.uk) 73

Richard Chirgwin, writing for The Register: Consider this an item for the watch-list, rather than a reason to hit the panic button: a math error in the Go language could potentially affect cryptographic libraries. Security researcher Guido Vranken (who earlier this year fuzzed up some bugs in OpenVPN) found an exponentiation error in the Go math/big package. Big numbers -- particularly big primes -- are the foundation of cryptography. Vranken posted to the oss-sec mailing list that he found the potential issue during testing of a fuzzer he wrote that "compares the results of mathematical operations (addition, subtraction, multiplication, ...) across multiple bignum libraries." Vranken and Go developer Russ Cox agreed that the bug needs specific conditions to be manifest: "it only affects the case e = 1 with m != nil and a pre-allocated non-zero receiver."
Chrome

Firefox vs Chrome: Speed and Memory (laptopmag.com) 160

Mashable already reported Firefox Quantum performs better than Chrome on web applications (based on BrowserBench's JetStream tests), but that Chrome performed better on other benchmarks. Now Laptop Mag has run more tests, agreeing that Firefox performs better on JetStream tests -- and on WebXPRT's six HTML5- and JavaScript-based workload tests. Firefox Quantum was the winner here, with a score of 491 (from an average of five runs, with the highest and lowest results tossed out) to Chrome's 460 -- but that wasn't quite the whole story. Whereas Firefox performed noticeably better on the Organize Album and Explore DNA Sequencing workloads, Chrome proved more adept at Photo Enhancement and Local Notes, demonstrating that the two browsers have different strengths...

You might think that Octane 2.0, which started out as a Google Developers project, would favor Chrome -- and you'd be (slightly) right. This JavaScript benchmark runs 21 individual tests (over such functions as core language features, bit and math operations, strings and arrays, and more) and combines the results into a single score. Chrome's was 35,622 to Firefox's 35,148 -- a win, if only a minuscule one.

In a series of RAM-usage tests, Chrome's average score showed it used "marginally" less memory, though the average can be misleading. "In two of our three tests, Firefox did finish leaner, but in no case did it live up to Mozilla's claim that Quantum consumes 'roughly 30 percent less RAM than Chrome,'" reports Laptop Mag.

Both browsers launched within 0.302 seconds, and the article concludes that "no matter which browser you choose, you're getting one that's decently fast and capable when both handle all of the content you're likely to encounter during your regular surfing sessions."
United States

The Disappearing American Grad Student (nytimes.com) 268

There are two very different pictures of the students roaming the hallways and labs at New York University's Tandon School of Engineering. At the undergraduate level, 80 percent of the students are United States residents. But that number, The New York Times reports, falls below the 20 percent mark when you move to the graduate level (Editor's note: the link could be paywalled). From the report: The Tandon School -- a consolidation of N.Y.U.'s science, technology, engineering and math programs on its Brooklyn campus -- is an extreme example of how scarce Americans are in graduate programs in STEM. Overall, these programs have the highest percentage of international students of any broad academic field. In the fall of 2015, about 55 percent of all graduate students in mathematics, computer sciences and engineering were from abroad, according to a survey by the Council of Graduate Schools and the Graduate Record Examinations Board. In arts and humanities, the figure was about 16 percent; in business, a little more than 18 percent. The dearth of Americans is even more pronounced in hot STEM fields like computer science, which serve as talent pipelines for the likes of Google, Amazon, Facebook and Microsoft: About 64 percent of doctoral candidates and almost 68 percent in master's programs last year were international students, according to an annual survey of American and Canadian universities by the Computing Research Association. In comparison, only about 9 percent of undergraduates in computer science were international students (perhaps, deans posit, because families are nervous about sending offspring who are barely adults across the ocean to study).
Math

Scientists Have Mathematical Proof That It's Impossible To Stop Aging (sciencealert.com) 177

An anonymous reader quotes a report from Science Alert: Mathematically speaking, multicellular organisms like us will always have to deal with a cellular competition where only one side will win. And ultimately, that means our vitality will always come out as the loser. We have a pair of researchers from the University of Arizona to blame for this depressing conclusion: they crunched the numbers on a hypothesis involving the weeding out of unfit cells and found it amounted to a catch-22 situation. Aging -- and all of the biological changes that come with it -- is more or less the result of cells slowing down and losing their functions. But what if there was a way to encourage the more active cells to stick around at the expense of their sluggish siblings? Surely if we knocked off those old cells we could keep making pigments and collagen a little longer. Researchers have pinned hopes on reversing the inevitable decay of biochemistry by repairing DNA or extending the shrinking bits of chromosome called telomeres, for example. While it's good in theory, there is a catch. Another feature of aging is that a number of cells start to proliferate like there's no tomorrow, reproducing in uncontrolled ways that look too close to cancer for comfort. According to the researchers, this means we're damned either way.

The way we grow old poses something of a mystery. If replicating biology is good enough to continue for generations, why do our own cells wind down after just a few decades? A simple answer is evolution isn't strong enough to weed out genes that only cause us grief after we've popped out a few offspring. But this model of aging adds a new element to the existing hypothesis -- even if evolution did select for eternal youth, competition inside our own bodies would see us to an inevitable grave. In other words, since multicellular organisms are the cumulative effect of bunches of cooperating cells, we logically can't have it both ways -- if you clear the way for 'younger' cells to keep your skin baby-smooth, you're just asking for the big C.
The findings have been published in the journal Proceedings of the National Academy of Sciences.
Android

A Surge of Sites and Apps Are Exhausting Your CPU To Mine Cryptocurrency (arstechnica.com) 128

Dan Goodin, writing for ArsTechnica: The Internet is awash with covert cryptocurrency miners that bog down computers and even smartphones with computationally intensive math problems, served by hacked or ethically questionable sites. The latest examples came on Monday with the revelation from antivirus provider Trend Micro that at least two Android apps with as many as 50,000 downloads from Google Play were recently caught putting crypto miners inside a hidden browser window. The miners caused phones running the apps to run JavaScript hosted on Coinhive.com, a site that harnesses the CPUs of millions of PCs to mine the Monero cryptocurrency. In turn, Coinhive gives participating sites a tiny cut of the relatively small proceeds. Google has since removed the apps, which were known as Recitiamo Santo Rosario Free and SafetyNet Wireless App. Last week, researchers from security firm Sucuri warned that at least 500 websites running the WordPress content management system alone had been hacked to run the Coinhive mining scripts. Sucuri said other Web platforms -- including Magento, Joomla, and Drupal -- are also being hacked in large numbers to run the Coinhive programming interface.
Math

How Data Science Powered the Search for MH370 (hpe.com) 133

"In the absence of physical evidence, scientists are employing powerful computational tools to attempt to solve the greatest aviation mystery of our time: the disappearance of flight MH370." Slashdot reader Esther Schindler shared this article from HPE Insights: Satellite communications provider Inmarsat announced it had found recorded signals in its archives that MH370 had sent for another six hours after it disappeared. The plane had been aloft and flying for that whole time -- but where had it gone? As Inmarsat scientists examined the signals, they saw that what they had was not data such as text messages or location information. Rather, the signals contained metadata: information about the signal itself. This was recorded as the satellite automatically contacted the plane's communications system every hour to see if it was still logged on. Bafflingly, whoever had taken the plane hadn't used the satcom system to communicate with the outside world, but had switched it off and then on again, leaving it able to exchange hourly "pings" with the satellite. Some of the metadata related to extremely subtle variations in the frequency of the signal. "We're talking about changes as big as one part in a billion," says Inmarsat scientist Chris Ashton.

Nobody had tried to use this kind of data to try to locate an airplane before. At first, Ashton's team didn't know if the attempt would work. But painstakingly, over the course of weeks, the team figured out how the movement of the plane, the orbital wobble of the satellite, and the electronics within the satcom system all interacted to create the data values that had been received. "We had to create the model from scratch," Ashton says. Their work revealed that the plane had flown into the remote southern Indian Ocean. They didn't know where exactly. But since there are no islands in that part of the world, it was impossible that anyone could have survived. For the first time in history, hundreds of people were declared legally dead based on mathematics alone.

Then mathematician Dr. Neil Gordon led a team from the Defence Science and Technology Group "to extract a path from a subset of the Inmarsat data called the Burst Timing Offset. This measured how quickly the aircraft responded each time the satellite pinged it, and was used to determine the distance between the satellite and the plane." They ultimately generated "a probabilistic 'heat map' of the plane's most likely resting places using a technique called Bayesian analysis." These calculations allowed the DSTG team to draw a box 400 miles long and 70 miles across, which contained about 90 percent of the total probability distribution.
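A heavily simplified version of the range calculation behind the Burst Timing Offset: subtract a fixed equipment bias from the measured offset, and half the remaining round-trip delay, times the speed of light, gives the satellite-to-aircraft distance -- so each ping constrains the plane to a ring on the Earth's surface. The real analysis also had to calibrate that bias and model the satellite's orbital wobble; all numbers below are hypothetical.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_bto(bto_microseconds, bias_microseconds):
    """Toy slant range from a Burst Timing Offset measurement:
    (bto - bias) is the round-trip delay attributable to the
    satellite-aircraft leg, so half of it, times c, is the range."""
    delay_s = (bto_microseconds - bias_microseconds) * 1e-6
    return C * delay_s / 2.0

# Hypothetical ping: 251,000 us measured, 1,000 us bias
# -> 0.25 s round trip -> roughly 37,500 km slant range
slant_range_m = range_from_bto(251_000, 1_000)
```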
Math

The Geometry of Islamic Art Becomes a Treasure of a Game (arstechnica.com) 213

Sam Machkovech from Ars Technica reviews the game Engare, describing it as a "clever, deceptively simple, and beautiful rumination on geometry and Islamic art-making traditions." The game consists of relatively simple puzzles and a freeform art toy that unlocks its puzzles' tools to allow you to make whatever patterns you please. From the report: The game, made almost entirely by 23-year-old Iranian developer Mahdi Bahrami, starts with a 2D scene of a circle repeatedly traveling along a line. Above this, an instructional card shows a curved-diagonal line. Drop a dot on the moving circle, the game says, and it will generate a bold line, like ink on a page. As the ball (and thus, your dot) rolls, the inked line unfurls; if you put the dot on a different part of the circle, then your inked line may have more curve or angle to it, based on the total motion of the moving, rotating circle. Your object is to recreate this exact curved-diagonal line. If your first ink-drop doesn't do the trick, try again. Each puzzle presents an increasingly complex array of moving and rotating shapes, lines, and dots. You have to watch the repeating patterns and rotations in a particular puzzle to understand where to drop an ink dot and draw the demanded line. At first, you'll have to recreate simple turns, curves, and zig-zags. By the end, you'll be making insane curlicues and rug-like super-patterns.

But even this jaded math whiz kid couldn't help but drop his jaw, loose his tongue, and bulge his eyes the first time Engare cracked open its math-rich heart. One early puzzle (shown above) ended with its seemingly simple pattern repeating over and over and over and over. Unlike other puzzles, this pattern kept drawing itself, even after I'd fulfilled a simple line-and-turn pattern. And with each pass of the drawing pattern, driven by a spinning, central circle, Engare drew and filled a new, bright color. This is what the game's creator is trying to shout, I thought. This is his unique, cultural perspective. This looks like the Persian rugs he saw his grandmother weave as a child.
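The rolling-circle mechanic the review describes is the classical trochoid family from geometry: a pen fixed at distance d from the center of a circle of radius r rolling along a line traces x = r*t - d*sin(t), y = r - d*cos(t). A sketch of that parameterization (mine, not the game's code):

```python
import math

def trochoid(r, d, steps=100, turns=2):
    """Points traced by a pen fixed at distance d from the centre of a
    circle of radius r rolling along the x-axis. d == r gives the classic
    cusped cycloid; d < r gives gentle waves; d > r gives loops."""
    pts = []
    for i in range(steps + 1):
        t = turns * 2 * math.pi * i / steps
        pts.append((r * t - d * math.sin(t), r - d * math.cos(t)))
    return pts
```

Moving the pen (the "ink dot") to a different spot on the circle changes d, which is exactly how the same rolling motion yields curves, cusps, or curlicues.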

AI

When an AI Tries Writing Slashdot Headlines (tumblr.com) 165

For Slashdot's 20th anniversary, "What could be geekier than celebrating with the help of an open-source neural network?" Neural network hobbyist Janelle Shane has already used machine learning to generate names for paint colors, guinea pigs, heavy metal bands, and even craft beers, she explains on her blog. "Slashdot sent me a list of all the headlines they've ever run, over 162,000 in all, and asked me to train a neural network to try to generate more." Could she distill 20 years of news -- all of humanity's greatest technological advancements -- down to a few quintessential words?

She trained it separately on the first decade of Slashdot headlines -- 1997 through 2007 -- as well as the second decade from 2008 to the present, and then re-ran the entire experiment using the whole collection of every headline from the last 20 years. Among the remarkable machine-generated headlines?
  • Microsoft To Develop Programming Law
  • More Pong Users for Kernel Project
  • New Company Revises Super-Things For Problems
  • Steve Jobs To Be Good

But that was just the beginning...

