Android

Google Rebrands 'Apps for Work' To 'G Suite,' Adds New Features (thenextweb.com) 45

Google has renamed "Apps for Work" to "G Suite" to "help people everywhere work and innovate together, so businesses can move faster and go bigger." They have also added a bunch of new features, such as a "Quick Access" section for Google Drive for Android that uses machine learning to predict what files you're going to need when you open up the app, based off your previous behavior. Calendar will automatically pick times to set up meetings through the use of machine intelligence. Sheets is also using AI "to turn your layman English requests into formulas through its 'Explore' feature," reports The Next Web. "In Slides, Explore uses machine learning to dynamically suggest and apply design ideas, while in Docs, it will suggest backup research and images you can use in your musings, as well as help you insert files from your Drive account. Throughout Docs, Sheets, and Slides, you can now recover deleted files on Android from a new 'Trash' option in the side/hamburger menu." Google's cloud services will now fall under a new "Google Cloud" brand, which includes G Suite, Google Cloud Platform, new machine learning tools and APIs, and Google's various devices that access the cloud. Slashdot reader wjcofkc adds: I just received the following email from Google. When I saw the title, my first thought was that there was malware lying at the end -- further inspection proved it to be real. Is this the dumbest name change in the history of name changes? Google of all companies does not have to try so hard. "Hello Google Apps Customer, We created Google Apps to help people everywhere work and innovate together, so that your organization can move faster and achieve more. Today, we're introducing a new name that better reflects this mission: G Suite. Over the coming weeks, you'll see our new name and logo appear in familiar places, including the Admin console, Help Center, and on your invoice. G Suite is still the same all-in-one solution that you use every day, with the same powerful tools -- Gmail, Docs, Drive, and Calendar. Thanks for being part of the journey that led us to G Suite. We're always improving our technology so it learns and grows with your team. Visit our official blog post to learn more."
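As a rough illustration of the kind of prediction "Quick Access" performs -- ranking the files you are most likely to want next from your past behavior -- here is a minimal sketch that weights each file by recency and frequency of access. The access_log data is hypothetical and this is an illustration only, not Google's actual model:

```python
from collections import Counter
from datetime import datetime

# Hypothetical per-user access log: (file name, time it was opened).
access_log = [
    ("Q3-budget.xlsx", datetime(2016, 9, 28, 9, 5)),
    ("team-notes.docx", datetime(2016, 9, 28, 14, 30)),
    ("Q3-budget.xlsx", datetime(2016, 9, 29, 9, 2)),
    ("slides-review.pptx", datetime(2016, 9, 26, 16, 0)),
]

def quick_access_ranking(log, now, half_life_hours=48.0):
    """Rank files by frequency of use, decayed by how long ago each open happened."""
    scores = Counter()
    for name, opened_at in log:
        age_hours = (now - opened_at).total_seconds() / 3600.0
        scores[name] += 0.5 ** (age_hours / half_life_hours)  # exponential decay
    return [name for name, _ in scores.most_common()]

print(quick_access_ranking(access_log, now=datetime(2016, 9, 29, 9, 10)))
```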
AI

Microsoft Forms New AI Research Group Led By Harry Shum (techcrunch.com) 31

An anonymous reader quotes a report from TechCrunch: A day after announcing a new artificial intelligence partnership with IBM, Google, Facebook and Amazon, Microsoft is upping the ante within its own walls. The tech giant announced that it is creating a new AI business unit, the Microsoft AI and Research Group, which will be led by Microsoft Research EVP Harry Shum. Shum will oversee 5,000 computer scientists, engineers and others who will all be "focused on the company's AI product efforts," the company said in an announcement. The unit will be working on all aspects of AI and how it will be applied at the company, covering agents, apps, services and infrastructure. Shum has been involved in some of Microsoft's biggest product efforts at the ground level of research, including the development of its Bing search engine, as well as in its efforts in computer vision and graphics: that is a mark of where Microsoft is placing its own priority for AI in the years to come. It's important to note that the Microsoft Research unit will no longer be its own discrete unit -- it will be combined with this new AI effort. Research had 1,000 people working on areas like quantum computing, and that will now be rolled into the bigger research and development efforts being announced today. Products that will fall under the new unit will include Information Platform, Cortana and Bing, and Ambient Computing and Robotics teams led by David Ku, Derrick Connell and Vijay Mital, respectively. The Microsoft AI and Research Group will encompass AI product engineering, basic and applied research labs, and New Experiences and Technologies (NExT), Microsoft said.
AI

Facebook, Amazon, Google, IBM, and Microsoft Come Together To Create Historic Partnership On AI (techcrunch.com) 84

An anonymous reader quotes a report from TechCrunch: In an act of self-governance, Facebook, Amazon, Alphabet, IBM, and Microsoft came together today to announce the launch of the new Partnership on AI. The group is tasked with conducting research and promoting best practices. Practically, this means that the group of tech companies will come together frequently to discuss advancements in artificial intelligence. The group also opens up a formal structure for communication across company lines. It's important to remember that on a day to day basis, these teams are in constant competition with each other to develop the best products and services powered by machine intelligence. Financial support will be coming from the initial tech companies who are members of the group, but in the future membership and involvement are expected to increase. User activists, non-profits, ethicists, and other stakeholders will be joining the discussion in the coming weeks. The organizational structure has been designed to allow non-corporate groups to have equal leadership side-by-side with large tech companies. As of today's launch, companies like Apple, Twitter, Intel and Baidu are missing from the group. Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals -- many of whom are part of this new group. The new organization really seems to be about promoting change by example. Rather than preach to the tech world, it wants to use a standard open license to publish research on topics including ethics, inclusivity, and privacy.
AI

Why Data Is the New Coal (theguardian.com) 75

An anonymous reader shares a report on The Guardian: "Is data the new oil?" asked proponents of big data back in 2012 in Forbes magazine. By 2016, and the rise of big data's turbo-powered cousin deep learning, we had become more certain: "Data is the new oil," stated Fortune. Amazon's Neil Lawrence has a slightly different analogy: Data, he says, is coal. Not coal today, though, but coal in the early days of the 18th century, when Thomas Newcomen invented the steam engine. A Devonian ironmonger, Newcomen built his device to pump water out of the south west's prolific tin mines. The problem, as Lawrence told the Re-Work conference on Deep Learning in London, was that the pump was rather more useful to those who had a lot of coal than those who didn't: it was good, but not good enough to buy coal in to run it. That was so true that the first of Newcomen's steam engines wasn't built in a tin mine, but in coal works near Dudley. So why is data coal? The problem is similar: there are a lot of Newcomens in the world of deep learning. Startups like London's Magic Pony and SwiftKey are coming up with revolutionary new ways to train machines to do impressive feats of cognition, from reconstructing facial data from grainy images to learning the writing style of an individual user to better predict which word they are going to type in a sentence.
Google

Google Open Sources Its Image-Captioning AI (zdnet.com) 40

An anonymous Slashdot reader quotes ZDNet: Google has open-sourced a model for its machine-learning system, called Show and Tell, which can view an image and generate accurate and original captions... The image-captioning system is available for use with TensorFlow, Google's open machine-learning framework, and boasts a 93.9 percent accuracy rate on the ImageNet classification task, inching up from previous iterations.

The code includes an improved vision model, allowing the image-captioning system to recognize different objects in images and hence generate better descriptions. An improved image model meanwhile aids the captioning system's powers of description, so that it not only identifies a dog, grass and frisbee in an image, but describes the color of grass and more contextual detail.
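The released model pairs a vision network with a language-generating recurrent network: the image is encoded into a feature vector, which then conditions a decoder that emits the caption one word at a time. A schematic PyTorch sketch of that encoder-decoder idea (an illustration of the architecture only, not Google's released im2txt/TensorFlow code; the ResNet backbone stands in for the pretrained vision model):

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    """Schematic CNN-encoder / LSTM-decoder captioner (not Google's im2txt code)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(pretrained=True)                     # stand-in vision model
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])   # drop the classifier head
        self.img_proj = nn.Linear(512, embed_dim)                  # image feature -> embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)        # (batch, 512) image features
        img_tok = self.img_proj(feats).unsqueeze(1)    # image fed as the first "word"
        seq = torch.cat([img_tok, self.embed(captions)], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                        # per-step vocabulary logits

model = CaptionModel(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)   # torch.Size([2, 13, 10000])
```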

Microsoft

Microsoft Patents A User-Monitoring AI That Improves Search Results (hothardware.com) 68

Slashdot reader MojoKid quotes a HotHardware article about Microsoft's new patent filing for an OS "mediation component": This is Microsoft's all-seeing-eye that monitors all textual input within apps to intelligently decipher what the user is trying to accomplish. All of this information could be gathered from apps like Word, Skype, or even Notepad by the Mediator and processed. So when the user goes to, for example, the Edge web browser to further research a topic, those contextual concepts are automatically fed into a search query.

The search engine (e.g., Bing and Cortana) uses contextual rankers to adjust the ranking of the default suggested queries to produce more relevant [results]. The operating system...tracks all textual data displayed to the user by any application, and then performs clustering to determine the user intent (contextually).

The article argues this feels "creepy and big brother-esque," and while Microsoft talks of defining a "task continuum," suggests the patent's process "would in essence keep track of everything you type and interact with in the OS and stockpile it in real-time to data-dump into Bing."
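As a rough illustration of the clustering step the patent describes -- grouping recently typed text to infer the user's intent and expand a search query with contextual terms -- here is a minimal sketch using off-the-shelf tools. The captured snippets are hypothetical and this is not Microsoft's patented mechanism, just the general idea:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical snippets a "mediator" might have captured from open apps.
recent_text = [
    "drafting trip itinerary for Kyoto in November",
    "Skype call notes: confirm Kyoto hotel near the station",
    "notepad: yen exchange rate and rail pass prices",
    "word doc: quarterly sales figures for the finance team",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(recent_text)

# Cluster the snippets to find the dominant context (here, two clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

# Take the biggest cluster and use its top terms to expand a search query.
biggest = np.bincount(km.labels_).argmax()
top_terms = [terms[i] for i in km.cluster_centers_[biggest].argsort()[::-1][:3]]
print("query expansion terms:", top_terms)
```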
Open Source

Ask Slashdot: Who's Building The Open Source Version of Siri? (upon2020.com) 190

We're moving to a world of voice interactions processed by AI. Now long-time Slashdot reader jernst asks, "Will we ever be able to do that without going through somebody's proprietary silo like Amazon's or Apple's?" A decade ago, we in the free and open-source community could build our own versions of pretty much any proprietary software system out there, and we did... But is this still true...? Where are the free and/or open-source versions of Siri, Alexa and so forth?

The trouble, of course, is not so much the code, but in the training. The best speech recognition code isn't going to be competitive unless it has been trained with about as many millions of hours of example speech as the closed engines from Apple, Google and so forth have been. How can we do that? The same problem exists with AI. There's plenty of open-source AI code, but how good is it unless it gets training and retraining with gigantic data sets?

And even with that data, Siri gets trained with a massive farm of GPUs running 24/7 -- but how can the open source community replicate that? "Who has a plan, and where can I sign up to it?" asks jernst. So leave your best answers in the comments. Who's building the open source version of Siri?
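For what it's worth, open-source pieces do exist today -- CMU Sphinx, for example, runs entirely offline -- and the gap jernst describes is the training data and compute, not the code. A minimal sketch using the SpeechRecognition Python package with the pocketsphinx backend (the command.wav recording is a hypothetical input):

```python
import speech_recognition as sr  # pip install SpeechRecognition pocketsphinx

recognizer = sr.Recognizer()
with sr.AudioFile("command.wav") as source:       # hypothetical recording
    audio = recognizer.record(source)

# CMU Sphinx runs entirely offline -- no proprietary silo involved.
try:
    print("Sphinx heard:", recognizer.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("Sphinx could not understand the audio")
```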
AI

Apple Is Getting Ready To Take On Google and Amazon In a Battle For The Living Room (qz.com) 114

An anonymous reader writes: Siri may soon be making the jump from your pocket to your end table. Apple has been working on a standalone product to control internet-of-things devices for a while, but a new report from Bloomberg suggests that the company has moved the project from a research phase to prototyping. It would theoretically be pitted against other smart-home devices, including Amazon's sleeper hit, the Echo, and Google's forthcoming Home Hub. According to the report, Apple's device would be controlled using its Siri voice assistant technology. It would be able to perform the same functions that it can complete now on iPhones, Macs, and other Apple products, such as being able to tell you when the San Francisco Giants are next playing, or possibly send a poorly transcribed text message. The device would also be able to control other internet-connected devices in the home, such as lights, door locks, and web-enabled appliances, as Google and Amazon's products can. It would also have the same ability to play music through built-in speakers.
Robotics

Robot Snatches Rifle From Barricaded Suspect, Ends Standoff (latimes.com) 129

Slashdot reader schwit1 quotes the L.A. Times: An hours-long standoff in the darkness of the high desert came to a novel end when Los Angeles County sheriff's deputies used a robot to stealthily snatch a rifle from an attempted murder suspect, authorities said Thursday. Officials said the use of the robot to disarm a violent suspect was unprecedented for the Sheriff's Department, and comes as law enforcement agencies increasingly rely on military-grade technology to reduce the risk of injury during confrontations with civilians.

"The robot was a game changer here," said Capt. Jack Ewell, a tactical expert with the Sheriff's Department -- the largest sheriff's department in the nation. "We didn't have to risk a deputy's life to disarm a very violent man."

It was only later when the robot came back to also pull down a wire barricade that the 51-year-old suspect realized his gun was gone.
AI

Hacker George Hotz Unveils $999 Self-Driving Add-On (pcmag.com) 80

An anonymous reader quotes a report from PC Magazine: Hacker George Hotz is gearing up to launch his automotive AI start-up's first official product. In December, the 26-year-old -- known for infiltrating Apple's iPhone and Sony's PlayStation 3 -- moved on to bigger things: turning a 2016 Acura ILX into an autonomous vehicle. According to Bloomberg, Hotz outfitted the car with a laser-based radar (lidar) system, a camera, a 21.5-inch screen, a "tangle of electronics," and a joystick attached to a wooden board. Nine months later, the famed hacker this week unveiled the Comma One. As described by TechCrunch, the $999 add-on comes with a $24 monthly subscription fee for software that can pilot a car for miles without a driver touching the wheel, brake, or gas. But unlike systems currently under development by Google, Tesla, and nearly every major vehicle manufacturer, Comma.ai's "shippable" Comma One does not require users to buy a new car. "It's fully functional. It's about on par with Tesla Autopilot," Hotz said during this week's TechCrunch Disrupt in San Francisco.
AI

Robots Will Eliminate 6% of All US Jobs By 2021, Says Report (theguardian.com) 400

An anonymous reader quotes a report from The Guardian: By 2021, robots will have eliminated 6% of all jobs in the U.S., starting with customer service representatives and eventually truck and taxi drivers. That's just one cheery takeaway from a report released by market research company Forrester this week. These robots, or intelligent agents, represent a set of AI-powered systems that can understand human behavior and make decisions on our behalf. Current technologies in this field include virtual assistants like Alexa, Cortana, Siri and Google Now as well as chatbots and automated robotic systems. For now, they are quite simple, but over the next five years they will become much better at making decisions on our behalf in more complex scenarios, which will enable mass adoption of breakthroughs like self-driving cars. The Inevitable Robot Uprising has already started, with at least 45% of U.S. online adults saying they use at least one of the aforementioned digital concierges. Intelligent agents can access calendars, email accounts, browsing history, playlists, purchases and media viewing history to create a detailed view of any given individual. With this knowledge, virtual agents can provide highly customized assistance, which is valuable to shops or banks trying to deliver better customer service. The report predicts there will be a net loss of 7% of U.S. jobs by 2025 -- 16% of U.S. jobs will be replaced, while the equivalent of 9% of jobs will be created. The report forecasts 8.9 million new jobs in the U.S. by 2025, some of which include robot monitoring professionals, data scientists, automation specialists, and content curators.
AI

Video Games Are So Realistic That They Can Teach AI What the World Looks Like (vice.com) 87

Jordan Pearson, reporting for Motherboard: Thanks to the modern gaming industry, we can now spend our evenings wandering around photorealistic game worlds, like the post-apocalyptic Boston of Fallout 4 or Grand Theft Auto V's Los Santos, instead of doing things like "seeing people" and "engaging in human interaction of any kind." Games these days are so realistic, in fact, that artificial intelligence researchers are using them to teach computers how to recognize objects in real life. Not only that, but commercial video games could kick artificial intelligence research into high gear by dramatically lessening the time and money required to train AI. "If you go back to the original Doom, the walls all look exactly the same and it's very easy to predict what a wall looks like, given that data," said Mark Schmidt, a computer science professor at the University of British Columbia (UBC). "But if you go into the real world, where every wall looks different, it might not work anymore." Schmidt works with machine learning, a technique that allows computers to "train" on a large set of labelled data -- photographs of streets, for example -- so that when let loose in the real world, they can recognize, or "predict," what they're looking at. Schmidt and Alireza Shafaei, a PhD student at UBC, recently studied Grand Theft Auto V and found that self-learning software trained on images from the game performed just as well, and in some cases even better, than software trained on real photos from publicly available datasets.
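The training setup Schmidt describes is ordinary supervised learning, just with game frames standing in for photographs. A minimal sketch of fine-tuning an off-the-shelf classifier on a hypothetical folder of labelled game screenshots (illustrative only, not the UBC researchers' code; the gta_frames directory layout is an assumption):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder of labelled game screenshots, e.g.
#   gta_frames/car/0001.png, gta_frames/pedestrian/0002.png, ...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("gta_frames", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                 # one pass over the synthetic data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
# The trained model would then be evaluated on *real* photos to test transfer.
```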
AI

Should We Seed Life On Alien Worlds? (sciencemag.org) 231

Slashdot reader sciencehabit quotes an article from Science magazine: Astronomers have detected more than 3000 planets beyond our solar system, and just a couple of weeks ago they discovered an Earth-like planet in the solar system next door. Most -- if not all -- of these worlds are unlikely to harbor life, but what if we put it there?

Science chatted with theoretical physicist Claudius Gros about his proposed Genesis Project, which would send artificially intelligent probes to lifeless worlds to seed them with microbes. Over millions of years, they might evolve into multicellular organisms, and, perhaps eventually, plants and animals. In the interview, Gros talks artificial intelligence, searching for habitable planets, and what kind of organisms he'd like to see evolve.

"The robots will have to decide if a certain planet should receive microbes and the chance to evolve life," the physicist explains -- adding that it's very important to avoid introducing new microbes on planets where life already exists.
AI

Google's DeepMind Develops New Speech Synthesis AI Algorithm Called WaveNet (qz.com) 46

Artem Tashkinov writes: Researchers behind Google's DeepMind company have been creating AI algorithms which could hardly be applied in real life aside from pure entertainment purposes -- the Go game being the most recent example. However, their most recent development, a speech synthesis AI algorithm called WaveNet, beats the two existing methods of generating human speech by a long shot -- at least 50% by Google's own estimates. The only problem with this new approach is that it's very computationally expensive. The results are even more impressive considering the fact that WaveNet can easily learn different voices and generate artificial breaths, mouth movements, intonation and other features of human speech. It can also be easily trained to generate any voice using a very small sample database. Quartz has a voice demo of Google's current method in its report, which uses recurrent neural networks, and WaveNet's method, which "uses convolutional neural networks, where previously generated data is considered when producing the next bit of information." The report adds, "Researchers also found that if they fed the algorithm classical music instead of speech, the algorithm would compose its own songs."
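The description of "previously generated data is considered when producing the next bit of information" refers to causal, dilated convolutions: each output sample may depend only on past samples, and increasing dilation widens the receptive field cheaply. A toy PyTorch sketch of that idea (not DeepMind's actual WaveNet architecture, which also uses gated activations and skip connections):

```python
import torch
import torch.nn as nn

class CausalConvStack(nn.Module):
    """Toy stack of dilated causal 1-D convolutions: each output sample sees
    only past samples, and dilation widens the receptive field cheaply."""
    def __init__(self, channels=32, layers=6):
        super().__init__()
        self.input = nn.Conv1d(1, channels, kernel_size=1)
        self.convs = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(layers)              # dilations 1, 2, 4, 8, ...
        ])
        self.output = nn.Conv1d(channels, 256, kernel_size=1)  # 8-bit sample logits

    def forward(self, x):                       # x: (batch, 1, time)
        h = self.input(x)
        for conv in self.convs:
            pad = conv.dilation[0]              # left-pad so the convolution stays causal
            h = h + torch.relu(conv(nn.functional.pad(h, (pad, 0))))
        return self.output(h)                   # (batch, 256, time) next-sample logits

net = CausalConvStack()
print(net(torch.randn(1, 1, 16000)).shape)      # torch.Size([1, 256, 16000])
```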
Robotics

An Algorithm May Soon Cover Your Local Sports Team (vice.com) 53

Sam Edwards, writing for Motherboard: A Spanish startup is promising to revolutionize readers' access to often unreported news. The unreported news in question, however, is not overlooked disasters or under-reported tragedies in far-flung countries, but minor league sporting events. David Llorente, co-founder of Narrativa, said he was inspired to develop an AI-powered content generation system after he tried fruitlessly to find coverage of minor league soccer games from other countries in his native Spanish. "There are people interested in these things, in these leagues, in these kind of sports," he told Motherboard. "The idea was to focus on regional sports. I wanted to write about football, but about Japanese football in Spanish, to cover this niche." Sevilla won with a resounding 2-0 against Athletic in Nervion, where the sum up eight straight wins at home. Gameiro scored the first one for the locals and closed the scoreboard by converting a penalty kick after Kychowiak was fouled. Athletic was unlucky despite controlling ball possession and wasn't able to finish any of the numerous chances that they had. -- Narrativa game summary.
Narrativa is part of the booming automatic content generation industry which uses algorithms to convert data sets into narratives.
Related: How a robot wrote for Engadget.
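At its simplest, data-to-text generation of this kind maps structured match records onto sentence templates. A toy sketch of the general idea (the match record is hypothetical and this is not Narrativa's system, which presumably handles far richer data and phrasing):

```python
# Hypothetical match record of the kind a data-to-text system might receive.
match = {
    "home": "Sevilla", "away": "Athletic",
    "home_goals": 2, "away_goals": 0,
    "scorers": ["Gameiro", "Gameiro (pen.)"],
    "venue": "Nervion",
}

def summarize(m):
    """Very small template-based generator -- the general idea, not Narrativa's system."""
    result = f"{m['home']} beat {m['away']} {m['home_goals']}-{m['away_goals']} at {m['venue']}."
    if m["scorers"]:
        result += f" Goals came from {', '.join(m['scorers'])}."
    return result

print(summarize(match))
```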
IBM

IBM Watson Created The First-Ever AI-Made Movie Trailer For 'Morgan' (popsci.com) 58

An anonymous reader shares a Popular Science article: For a film about the risks of pushing the limits of technology too far, it only makes sense to advertise for it using artificial intelligence. Morgan, starring Kate Mara and Paul Giamatti, is a sci-fi thriller about scientists who've created a synthetic humanoid whose potential has grown dangerously beyond their control. Fitting, then, that they'd employ the help of America's AI sweetheart IBM Watson to build the film's trailer. IBM used machine learning and experimental Watson APIs, parsing out the trailers of 100 horror movies. It did visual, audio, and composition analysis of individual scenes, finding what makes each moment eerie, how the score and actors' tone of voice changed the mood, and how framing and lighting came together to make a complete trailer. Watson was then fed the full film, and it chose scenes for the trailer. A human -- in this case, the "resident IBM filmmaker" -- still needed to step in to edit for creativity. Even so, a process that would normally take weeks was reduced to hours.
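The scene-selection step can be pictured as scoring candidate moments and greedily keeping the best until a time budget is filled. A toy sketch of that idea (the scores and scene timings are made up, and this is not Watson's actual pipeline):

```python
# Hypothetical per-scene "eeriness" scores of the kind an analysis pass might produce.
scenes = [
    {"start": 12.0, "end": 18.5, "eeriness": 0.91},
    {"start": 40.2, "end": 44.0, "eeriness": 0.35},
    {"start": 63.7, "end": 71.1, "eeriness": 0.88},
    {"start": 90.0, "end": 93.2, "eeriness": 0.52},
]

def pick_trailer_scenes(scored_scenes, max_seconds=15.0):
    """Greedily keep the highest-scoring scenes until the trailer budget is used up."""
    chosen, used = [], 0.0
    for s in sorted(scored_scenes, key=lambda s: s["eeriness"], reverse=True):
        length = s["end"] - s["start"]
        if used + length <= max_seconds:
            chosen.append(s)
            used += length
    return sorted(chosen, key=lambda s: s["start"])   # keep chronological order

print(pick_trailer_scenes(scenes))
```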
AI

Baidu Open-Sources Its Deep Learning Tools (theverge.com) 27

An anonymous reader quotes a report from The Verge: Microsoft, Google, Facebook, and Amazon have all done it -- and now Baidu's doing it, too. The Chinese tech giant has open sourced one of its key machine learning tools, PaddlePaddle, offering the software up to the global community of AI researchers. Baidu's big claim for PaddlePaddle is that it's easier to use than rival programs. Like Amazon's DSSTNE and Microsoft's CNTK, PaddlePaddle offers a toolkit for deep learning, but Baidu says comparable software is designed to work in too many different situations, making it less approachable to newcomers. Xu Wei, the leader of Baidu's PaddlePaddle development, tells The Verge that a machine translation program written with Baidu's software needs only a quarter of the amount of code demanded by other deep learning tools. Baidu is hoping this ease of use will make PaddlePaddle more attractive to computer scientists, and draw attention away from machine learning tools released by Google and Facebook. Baidu says PaddlePaddle is already being used by more than 30 of its offline and online products and services, covering sectors from search to finance to health. Xu said that if one of its machine learning tools became too monopolistic, it would be like "trying to use one programming language to code all applications." Xu doesn't believe that any one company will dominate this area. "Different tools have different strengths," he said. "The deep learning ecosystem will end up having different tools optimized for different uses. Just like no programming language truly dominates software development."
AI

Google's DeepMind To Apply AI In Head and Neck Cancer Treatments (thestack.com) 17

An anonymous reader quotes a report from The Stack: Google's DeepMind team has partnered with British hospital doctors on an oral cancer program hoping to cut planning times for radiotherapy treatments. After recently announcing a partnership with London's Moorfields Eye Hospital to use its machine learning technologies to speed up the diagnoses of eye conditions, DeepMind has confirmed a new initiative at the University College London Hospitals (UCLH) NHS Foundation Trust. According to Google's artificial intelligence unit, cancer treatments including radiotherapy involve complicated design and planning, especially when they involve the head and neck. Treatments need to obliterate cancerous cells while avoiding any healthy surrounding cells, nerves, and organs. UCLH plans to work with DeepMind to explore whether machine learning can reduce planning time for these treatments, particularly for the image segmentation process which involves clinicians taking CT and MRI scans to build a detailed map of the areas to be treated. The report adds: "DeepMind algorithms will be set to work on an anonymized collection of 700 radiology scans from former oral cancer patients, learning from the historical data in order to draw its own conclusions without human support."
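Image segmentation here means assigning every pixel (or voxel) of a scan to "treat" or "spare." A minimal per-pixel segmentation sketch in PyTorch, purely to illustrate the task -- not DeepMind's model, and the toy data below stands in for the 700 anonymized scans:

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Minimal per-pixel segmentation network -- a stand-in to illustrate the task."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),             # 2 classes: treat / spare
        )

    def forward(self, scan):                 # scan: (batch, 1, H, W) CT slice
        return self.net(scan)                # per-pixel class logits

model = TinySegmenter()
slice_batch = torch.randn(4, 1, 128, 128)    # hypothetical anonymized CT slices
mask_batch = torch.randint(0, 2, (4, 128, 128))  # hypothetical clinician-drawn masks
loss = nn.CrossEntropyLoss()(model(slice_batch), mask_batch)
loss.backward()
print(loss.item())
```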
AI

Amazon, NVIDIA and The CIA Want To Teach AI To Watch Us From Space (technologyreview.com) 60

An anonymous reader quotes a report from MIT Technology Review: Satellite operator DigitalGlobe is teaming up with Amazon, the venture arm of the CIA, and NVIDIA to make computers watch the Earth from above and automatically map our roads, buildings, and piles of trash. MIT Technology Review reports: "In a joint project, DigitalGlobe today released satellite imagery depicting the whole of Rio de Janeiro to a resolution of 50 centimeters. The outlines of 200,000 buildings inside the city's roughly 1,900 square kilometers have been manually marked on the photos. The SpaceNet data set, as it is called, is intended to spark efforts to train machine-learning algorithms to interpret high-resolution satellite photos by themselves. DigitalGlobe says the SpaceNet data set should eventually include high-resolution images of half a million square kilometers of Earth, and that it will add annotations beyond just buildings. DigitalGlobe's data is much more detailed than publicly available satellite data such as NASA's, which typically has a resolution of tens of meters. Amazon will make the SpaceNet data available via its cloud computing service. Nvidia will provide tools to help machine-learning researchers train and test algorithms on the data, and CosmiQ Works, a division of the CIA's venture arm In-Q-Tel focused on space, is also supporting the project." "We need to develop new algorithms for this data," says senior vice president at DigitalGlobe, Tony Frazier. He goes on to say that health and aid programs are to benefit from software that is able to map roads, bridges and various other infrastructure. The CEO of Descartes Labs, Mark Johnson, a "startup that predicts crop yields from public satellite images," says the data that is collected "should be welcome to startups and researchers," according to MIT Technology Review. "Potential applications could include estimated economic output from activity in urban areas, or guiding city governments on how to improve services such as trash collections, he says."
AI

Microsoft Buys AI-Powered Scheduling App Genee (thestack.com) 28

An anonymous reader quotes a report from The Stack: Microsoft has announced that it has completed its acquisition of artificial intelligence-based scheduling app Genee for an undisclosed amount. The app, which was launched in beta last year, uses natural language processing tools and decision-making algorithms to allow users to schedule appointments without having to consult a calendar. Prior to the acquisition, Genee supported scheduling across Facebook, Twitter, Skype, email, and via SMS. From September 1, Genee will close its own service and will officially join Microsoft, reportedly as part of the Office 365 team. Microsoft believes the addition will help it "further [its] ambition to bring intelligence into every digital experience."
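Scheduling "without having to consult a calendar" boils down to extracting a date and time from free text and proposing a slot. A toy sketch of that parsing step (illustrative only; Genee's natural language processing and decision-making are far more capable than this weekday-and-hour matcher):

```python
import re
from datetime import datetime, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def parse_request(text, now=None):
    """Toy natural-language scheduler: finds a weekday and an hour in a request."""
    now = now or datetime.now()
    text = text.lower()
    day_match = next((d for d in WEEKDAYS if d in text), None)
    hour_match = re.search(r"(\d{1,2})\s*(am|pm)", text)
    if not (day_match and hour_match):
        return None
    days_ahead = (WEEKDAYS.index(day_match) - now.weekday()) % 7 or 7  # always a future day
    hour = int(hour_match.group(1)) % 12 + (12 if hour_match.group(2) == "pm" else 0)
    return (now + timedelta(days=days_ahead)).replace(hour=hour, minute=0,
                                                      second=0, microsecond=0)

print(parse_request("Can we meet next Tuesday at 2pm to review the deck?"))
```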
