We talked with Peter Wayner about autonomous cars on June 5. He had a lot to say on this topic, to the point where we seem to be doing a whole series of interviews with him because autonomous cars might have a lot of unanticipated effects on our lives and our economy. Heck, Peter has enough to say about driverless cars to fill a book, Future Ride, which we hope he finishes editing soon because we (Tim and Robin) want to read it. While that book is brewing, watch for some thoughts on how autonomous cars (and delivery vans) might affect us in the near future.
colinneagle writes "Researchers at Chiba University in Japan have developed a robot that could frustrate teenagers worldwide with its impressive air hockey skills. What's remarkable about this air hockey-playing robot, which is not the first of its kind, is that it can sense human opponents' playing styles and adapt to defend against them. The key is how the computer controlling the robot views its opponent — at a speed of 500 frames per second. From there, the robot uses a three-layer control system: one layer handles motion control, a second decides whether it should hit the puck, defend its goal, or stay still, and a third determines how it should react to its opponent's playing style."
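The layered structure described above can be sketched in a few lines. This is a toy illustration, not the Chiba team's actual controller: the state variables, thresholds, and style classifier are all invented for the example.

```python
# Toy sketch of a layered air-hockey controller: a tactical layer chooses
# among hit / defend / stay for each observed frame, and a strategic layer
# classifies the opponent so the tactical layer can adapt. All numbers
# here are hypothetical.

def tactical_action(puck_x, puck_vx, goal_x=0.0, aggressive=False):
    """Decide the paddle's action for one frame.

    puck_x:  puck position along the table (0 = our goal line)
    puck_vx: puck velocity (negative = moving toward our goal)
    """
    incoming = puck_vx < 0
    close = puck_x - goal_x < 0.5
    if incoming and close:
        return "defend"          # puck is bearing down on our goal
    if incoming or aggressive:
        return "hit"             # intercept, or press an aggressive style
    return "stay"

def strategic_style(opponent_shot_speeds):
    """Crudely classify the opponent from the speeds of recent shots."""
    avg = sum(opponent_shot_speeds) / len(opponent_shot_speeds)
    return "aggressive" if avg > 2.0 else "defensive"
```

In a real system the tactical layer would run once per camera frame (500 times a second, per the article), with the strategic classification updated over many rallies.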
Today's interviewee, Andrew Dougherty, has a Web page that says he is "...an autodidact mathematician and computer scientist specializing in Artificial Intelligence (AI) and Algorithmic Information Theory (AIT). He is the founder of the FRDCSA (Formalized Research Database: Cluster Study & Apply) project, a practical attempt at weak AI aimed primarily at collecting and interrelating existing software with theoretical motivation from AIT. He has made over 90 open source applications, 400 (unofficial) Debian GNU/Linux packages and 800 Perl5 modules (see http://frdcsa.org/frdcsa)." Tim Lord says Andrew's project "brings together a lot of AI algorithms, collects large sets of data for those algorithms to chew on, and writes software to do things like ... guide your whole life." As you might guess, Andrew occupies a pretty far edge of the eccentric programmer world, as you'll see from this video (and transcript). He calls himself "a serious Stallmanite" (his word), and has chosen the GPL for his software in the hopes that it will therefore help the greatest number of people. (Speaking of help, he's looking for interesting data sets and various "life rules" that can be integrated with his planning software, and one of the reasons he presented at the recent YAPC::NA was to solicit help in putting his hundreds of Perl modules onto CPAN.)
aarondubrow writes "For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software, with mixed results. Enabled by supercomputers at the Texas Advanced Computing Center, University of Texas researchers are using new methods to more accurately represent language so computers can interpret it. Recently, they were awarded a grant from DARPA to combine distributional representation of word meanings with Markov logic networks to better capture the human understanding of language."
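The "distributional representation" half of that combination rests on a simple idea: words that occur in similar contexts get similar vectors, so their similarity can be measured numerically. Here is a minimal sketch of that idea with a fabricated three-sentence corpus; the Texas group's actual models are vastly larger and are combined with Markov logic networks, which this toy omits.

```python
# Toy distributional semantics: build co-occurrence vectors from a tiny
# corpus and compare them with cosine similarity. The corpus is made up.
import math
from collections import Counter

def cooccurrence_vector(word, corpus, window=2):
    """Count words appearing within `window` tokens of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity of two sparse count vectors (Counter returns 0
    for missing keys, so mismatched vocabularies are handled for free)."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat chased the mouse",
]
v_cat = cooccurrence_vector("cat", corpus)
v_dog = cooccurrence_vector("dog", corpus)
```

Because "cat" and "dog" appear in near-identical contexts here, their vectors come out similar — the core intuition behind distributional meaning.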
dcblogs writes "The Senate's immigration bill may force the large offshore outsourcing firms to reduce their use of H-1B visa-holding staff, forcing them to hire more local workers and raising their costs. But one large Indian firm, Infosys, will try to offset cost increases with software robotics. Infosys recently announced a partnership with IPsoft, a New York-based provider of autonomic IT services. With IPsoft's tools, work that is now done by human beings, mostly Level 1 support, could be done by a software machine. Infosys says that IPsoft tools can 'reduce human intervention.' More colorfully, Chandrashekar Kakal, global head of Infosys's business IT services, told the Times of India that 'what robotics did for the auto assembly line, we are now doing for the IT engineering line.' James Slaby, a research director of HFS Research who has been following the use of autonomics closely, wrote in a recent report that the IPsoft partnership may help Infosys 'reap fatter margins by augmenting and replacing expensive, human IT support engineers with cheaper, more accurate, efficient automated processes,' and by improving service delivery."
kkleiner writes "Rice University professor Moshe Vardi has been evaluating technological progress in computer science and artificial intelligence and has recently concluded that robots will replace most, if not all, human labor by 2045, putting millions out of work. The underlying issue is whether AI will enable humans to do more, or simply leave them with less to do. But perhaps the real question about technological unemployment isn't 'What will people do when there's no work left?' but 'What kind of work will they do instead?'"
An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
Google's I/O annual conference is ramping up at San Francisco's Moscone Center. Last year, in the conference keynote, the company took its biggest-yet dive into hardware when it introduced the Nexus 7 tablet, Google Glass, and the ill-fated Nexus Q. The secret is out on Glass, of course: this year, there's a pavilion inside the conference center where I'm sure they'll be showing off applications for it. (Quite a few of the people in the endless lines here are wearing their own, too.) Anticipating the announcements at I/O is practically its own industry, but it's easy to guess that there will be announcements from all the major pots in which Google has its many thousands of (tapping) fingers: Android, search, Chrome, mapping, and all the other ways in which the behemoth of Mountain View is watching what you do. You can watch the keynote talk (talks, really) streamed online from the main conference link above, but this story will be updated with highlights of the announcements, as well as with stories that readers contribute. Update: 05/15 16:22 GMT by T : Updates below. Update: 05/15 19:02 GMT by T : Notes (ongoing) added below on maps, gaming, the Play store, Google+, and more. And, notably, Larry Page is (at this writing) on stage, with an unannounced Q&A session.
A while ago you had the chance to ask mathematician and theoretical physicist Freeman Dyson about his work in quantum electrodynamics, nuclear propulsion, and his thoughts on the past, present, and future of science. Below you'll find his answers to your questions.
An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
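The layer-by-layer story in the quote — simple feature detectors feeding into neurons that combine them — can be made concrete with a tiny feedforward network. This is a hand-weighted toy, not a trained deep network: real systems learn these weights from data, and the "edge detector" interpretation of layer one is purely illustrative.

```python
# Minimal two-layer feedforward network. Layer 1 neurons respond to
# crude "edges" in a 4-pixel input; layer 2 combines their responses
# into a single "shape" score. Weights are hand-picked for illustration,
# not learned.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron applies a weighted sum
    of its inputs plus a bias, squashed through a sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(pixels):
    # Layer 1: two neurons, each firing on a contrast between a pixel pair.
    h = layer(pixels, [[6, -6, 0, 0], [0, 0, 6, -6]], [-1, -1])
    # Layer 2: one neuron that fires only when both "edges" are present.
    out = layer(h, [[4, 4]], [-4])
    return out[0]
```

An input with both contrasts present scores high; with one or none, the output drops — the same compositional idea, scaled down from millions of learned weights to six fixed ones.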
jtogel writes "This New Scientist article describes our AI system that automatically generates card games. The article contains a description of a playable card game generated by our system. But card games are just the beginning... The card game generator is part of a larger project to automate all of game development using artificial intelligence methods — we're also working on level generation for a variety of different games, and on rule generation for simple arcade-like games."
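To give a flavor of what "rule generation" means at its simplest: compose candidate rules from small building blocks, then filter or score them. This toy is far cruder than the evolutionary search such systems actually use — the condition and effect lists are invented — but it shows the generate-and-select shape of the problem.

```python
# Toy rule generator: assemble card-game rules from condition and effect
# building blocks. Real generators search much richer rule spaces and
# evaluate candidates by simulated play.
import random

CONDITIONS = ["the card is red", "the card's value is above 7",
              "the card matches the pile's suit"]
EFFECTS = ["draw two cards", "skip the next player", "play again"]

def generate_rule(rng):
    """Produce one candidate rule by random composition."""
    return f"If {rng.choice(CONDITIONS)}, then {rng.choice(EFFECTS)}."

rng = random.Random(42)          # seeded for reproducibility
rules = [generate_rule(rng) for _ in range(3)]
```

The hard part, which this sketch skips, is the fitness function: deciding automatically which generated rule sets yield games that are actually fun and balanced to play.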
kkleiner writes "Willow Garage spinoff IPI has developed a visual system for its line of robotic arms that enables the machines to perceive a specific object in the midst of random ones. On-site videos show the 'sensing' robots analyzing stacks of random boxes, selecting certain ones, and tossing them to a human handler. The software is also used in an automated box unloader that requires no human supervision."
ceview writes "Researchers at Sheffield Centre for Robotics have demonstrated that a swarm of 40 robots can carry out simple fetching and carrying tasks. This is done by grouping around an object and working together to push it across a surface. They believe this could let us mere humans harness such power for all sorts of tasks, including safety applications (catching falling workers, perhaps?). YouTube action here."
moon_unit2 writes "Tech Review has a story about a garage in Ingolstadt, Germany, where the cars park themselves. The garage is an experiment set up by Audi to explore ways that autonomous technology might practically be introduced; most of the sensor technology is built into the garage and relayed to the cars rather than carried inside the cars themselves. It seems that carmakers see the technology progressing in a slightly different way than Google, with its fleet of self-driving Priuses. From the piece: 'It's actually going to take a while before you get a really, fully autonomous car,' says Annie Lien, a senior engineer at the Electronics Research Lab, a shared facility for Audi, Volkswagen, and other Volkswagen Group brands in Belmont, California, near Silicon Valley. 'People are surprised when I tell them that you're not going to get a car that drives you from A to B, or door to door, in the next 10 years.'"
JimmyQS writes "The Harvard Business Review blog has an invited piece about Innovation Software. Tony McCaffrey at the University of Massachusetts Amherst talks about several pieces of software designed to help engineers augment their innovation process and make them more creative, including one his group has developed called Analogy Finder. The software searches patent databases using natural language processing technology to find analogous solutions in other domains. According to Dr. McCaffrey 'nearly 90% of new solutions are really just adaptations from solutions that already exist — and they're often taken from fields outside the problem solver's expertise.'"
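The cross-domain search idea can be sketched crudely: generalize the verb in a problem statement to related verbs, then look for matches in patent text from other fields. This toy is not Analogy Finder's actual method — the synonym table, the word-stem matching, and the three "patents" are all fabricated for illustration.

```python
# Toy cross-domain patent search: expand a problem verb into related
# verb stems and match them against patent titles. Stems (e.g. "collaps")
# are used so that inflected forms like "collapsing" still match.
SYNONYM_STEMS = {
    "fold":   ["fold", "bend", "creas", "collaps"],
    "attach": ["attach", "fasten", "join", "bond"],
}

patents = [  # fabricated titles
    "method for collapsing a portable antenna",
    "device to bond dissimilar metals",
    "apparatus for cooling a beverage",
]

def analogous_hits(verb, patents, synonyms=SYNONYM_STEMS):
    """Return patents matching the verb or any of its related stems."""
    stems = synonyms.get(verb, [verb])
    return [p for p in patents if any(s in p for s in stems)]
```

A query for "fold" surfaces the antenna patent via "collapsing" — an analogous solution from a different domain, which is exactly the kind of hit Dr. McCaffrey's 90% figure suggests is worth hunting for.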
UgLyPuNk writes "Blizzard has revealed its 'something new' at PAX East 2013: Hearthstone: Heroes of Warcraft — a 'charming collectible strategy game set in the Warcraft universe.'" Blizzard says this game is a departure from their normal development process: it was made with a team of just 15, will release this year, and it's free-to-play. Hearthstone is built for Mac OS, Windows, and iPads. There's a deck builder, a match-finder, and AI for those who don't want to play against other people. While it's free to play, and players will earn new packs of cards by playing, there will also be an option to purchase new packs.
sciencehabit writes "Pharmaceuticals often have side effects that go unnoticed until they're already available to the public. Doctors and even the FDA have a hard time predicting what drug combinations will lead to serious problems. But thanks to people scouring the web for the side effects of the drugs they're taking, researchers have now shown that Google and other search engines can be mined for dangerous drug combinations. In a new study, scientists tried the approach out on predicting hypoglycemia, or low blood sugar. They found that the data-mining procedure correctly predicted whether a drug combo did or did not cause hypoglycemia about 81% of the time."
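The core of the mining procedure is a comparison of conditional frequencies: among users who searched for both drugs, how often does the side-effect term also appear, versus among users who searched for only one drug? Here is a toy version with fabricated query logs; the drug names echo the pair examined in the published study, but these five "users" and their rates are invented.

```python
# Toy search-log mining for a drug-interaction signal: compare the rate
# of 'hypoglycemia' searches among users who queried both drugs against
# users who queried just one. The logs are fabricated.

def symptom_rate(logs, terms, symptom="hypoglycemia"):
    """Fraction of users whose query sets contain all `terms`
    that also searched for `symptom`."""
    matched = [q for q in logs.values() if all(t in q for t in terms)]
    if not matched:
        return 0.0
    return sum(symptom in q for q in matched) / len(matched)

logs = {  # user id -> set of search terms (fabricated)
    1: {"paroxetine", "pravastatin", "hypoglycemia"},
    2: {"paroxetine", "pravastatin", "hypoglycemia"},
    3: {"paroxetine", "headache"},
    4: {"pravastatin"},
    5: {"paroxetine", "pravastatin"},
}
pair_rate = symptom_rate(logs, ["paroxetine", "pravastatin"])
single_rate = symptom_rate(logs, ["paroxetine"])
```

A pair rate well above the single-drug rate is the disproportionality signal; the real study applied this comparison across millions of anonymized query logs, with controls this sketch omits.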
dstates writes "ProPublica, the award-winning public interest journalism group and frequently cited Slashdot source, has published an interesting guide to app technology for journalism and a set of data and style guides. Journalism presents unique challenges with potentially enormous but highly variable site traffic, the need to serve a wide variety of information, and most importantly, the need to quickly develop and vet interesting content, and ProPublica serves lots of data sets in addition to the news. They are also doing some cool stuff like using AI to generate specific narratives from tens of thousands of database entries illustrating how school districts and states often don't distribute educational opportunities to rich and poor kids equally. The ProPublica team focuses on some basic practical issues for building a team, rapidly and flexibly deploying technology, and ensuring that what they serve is correct. A great news app developer needs three key skills: the ability to do journalism, design acumen, and the ability to write code quickly — and the last is the easiest to teach. To build a team they look to their own staff rather than competing with Google for CS grads. Most news organizations use either Ruby on Rails or Python/Django, but more important than which specific technology you choose is to just pick a server-side programming language and stick to it. Cloud hosting provides news organizations with incredible flexibility (like increasing your capacity ten-fold for a few days around the election and then scaling back the day after), but cloud instances aren't as fast as real servers, and cloud costs can scale quickly relative to real servers. Maybe a news app is not the most massive 'big data' application out there, but where else can you find the challenge of millions of users checking in several times a day for the latest news, and all you need to do is sort out which of your many and conflicting sources are providing you with straight information?
Oh, and if you screw up, it will be very public."
halls-of-valhalla writes "Using advances in 3D laser mapping technology, Oxford University has developed a car that is able to drive itself along familiar routes. This new self-driving automobile uses lasers and small cameras to memorize everyday trips such as the morning commute. The car is not dependent on GPS; it can tell where it is by recognizing its surroundings. The intent is for the car to be capable of taking over the drive on routes it has traveled before. While being driven, the car develops a 3D model of its environment and learns routes. When driving a particular journey a second time, an iPad on the dashboard informs the driver that it is capable of taking over and finishing the drive. The driver can then touch the screen and the car shifts to 'auto drive' mode. The driver can reclaim control of the car at any time by simply tapping the brakes."
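"Recognizing its surroundings" instead of using GPS amounts to matching the current sensor reading against scans stored from previous drives. This toy reduces a laser scan to four range numbers and localizes by nearest match; the Oxford system matches rich 3D laser and camera data, so treat every number and place name below as invented.

```python
# Toy place recognition: compare the current laser scan (a short list of
# range readings) to scans stored from earlier drives, and report the
# closest stored place. Scans and place names are fabricated.

def nearest_place(scan, stored):
    """Return the name of the stored place whose scan best matches."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored, key=lambda name: sq_dist(scan, stored[name]))

stored = {
    "home_street":   [5.0, 5.2, 9.8, 2.1],
    "office_garage": [1.0, 1.1, 1.0, 1.2],
}
```

Matching against a memory of prior drives is also why the car only offers to take over on routes it has already traveled: an unfamiliar scan simply has no good match.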
Lucas123 writes "Applying the same technology used for voice recognition and credit card fraud detection to medical treatments could cut healthcare costs and improve patient outcomes by almost 50%, according to new research. Scientists at Indiana University found that using patient data with machine-learning algorithms can drastically improve both the cost and quality of healthcare through simulation modeling. The artificial intelligence models used for diagnosing and treating patients obtained a 30% to 35% increase in positive patient outcomes, the research found. This is not the first time AI has been used to diagnose and suggest treatments. Last year, IBM announced that its Watson supercomputer would be used in evaluating evidence-based cancer treatment options for physicians, driving the decision-making process down to a matter of seconds."
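Simulation modeling of treatment decisions can be sketched as comparing policies over a simple patient-state model. This toy is only in the spirit of the Indiana work (which used sequential decision-making models over real patient data): every probability, cost, and drug label here is invented.

```python
# Toy treatment-policy simulation: run many simulated patients through a
# two-step treatment episode and compare policies on average cost and
# recovery rate. All probabilities and costs are fabricated.
import random

def simulate(policy, trials=2000, seed=0):
    """Return (avg_cost, recovery_rate) for a two-step treatment policy."""
    rng = random.Random(seed)   # fixed seed: same random draws per policy
    total_cost, recovered = 0.0, 0
    for _ in range(trials):
        cost, state = 0.0, "sick"
        for step in range(2):
            drug = policy(state, step)
            cost += {"cheap": 100, "strong": 400}[drug]
            p_recover = {"cheap": 0.4, "strong": 0.7}[drug]
            if rng.random() < p_recover:
                state = "well"
                break
        total_cost += cost
        recovered += state == "well"
    return total_cost / trials, recovered / trials

always_strong = lambda state, step: "strong"
escalate = lambda state, step: "cheap" if step == 0 else "strong"
```

Running both policies exposes the trade-off the article alludes to: the escalating policy is much cheaper per patient but recovers slightly fewer of them, and an optimizer searches this space for policies that improve both at once.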