Hallie Siegel writes: Telemedicine can let doctors and nurses check in on patients who might be recovering at home, or monitor people in remote locations where it's hard to access physician services. This article gives an overview of the different systems that are out there, some of the legal obstacles, and how various countries are investing in the technology. From the article: "The Japanese government has allocated about $23M USD to the core technology market in an effort to develop products for its aging population. Toyota, for example, is focusing on home living assistance robots that will allow those with limited mobility the opportunity to live at home. While Japan might have the largest market in the world of 65+ citizens (over 30 million as of 2014), South Korea is estimated to be allocating nearly $6B USD to their own robotics research. The Koreans are taking a different approach, using robots for the mundane tasks of delivering food, allowing humans to provide care."
An anonymous reader writes: Scientists at Georgia Tech are developing silent speech systems that enable fast, hands-free communication with wearable devices, controlled by the user's tongue and ears. As with the open source project Eyedrivomatic, the researchers want to apply the technology as a device-control solution for people with disabilities. They suggest it could also be used by those working in loud environments who need a quiet way to communicate with their wearable devices. The prototype combines tongue control with earphone-like pieces, each fitted with proximity sensors that map the changing shape of the ear canal. Every word deforms the canal differently, allowing for accurate recognition.
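The recognition step described above can be sketched as template matching: compare a new trace of proximity readings against stored traces for known words and pick the closest. This is a toy illustration, not Georgia Tech's actual algorithm; the word list, sensor counts, and sample values are all invented.

```python
# Hypothetical sketch: recognizing a silent word from ear-canal proximity
# sensor readings by nearest-neighbor matching against stored templates.

def distance(a, b):
    """Sum of squared differences between two sensor traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(trace, templates):
    """Return the template word whose recorded trace is closest."""
    return min(templates, key=lambda word: distance(trace, templates[word]))

# Each template is a flattened time series of proximity readings (invented).
templates = {
    "yes":  [0.1, 0.4, 0.9, 0.4, 0.1],
    "no":   [0.8, 0.5, 0.2, 0.5, 0.8],
    "stop": [0.3, 0.3, 0.9, 0.9, 0.3],
}

print(recognize([0.2, 0.4, 0.8, 0.5, 0.1], templates))  # closest to "yes"
```

A real system would need time alignment and per-user calibration, but the core idea, that each word produces a repeatable sensor signature, is the same.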
szczys writes: The inventor of the Eyedrivomatic has ALS. The disease prevents him from controlling his electric wheelchair, but it didn't stop him from teaming up with two other people (one also quadriplegic) to design a way around the limitation. Eyegaze hardware lets people speak through a computer using only their eyes. Eyedrivomatic is an open source project that uses common materials to connect the Eyegaze to the wheelchair's joystick without altering the chair (which in most cases is rented equipment). A 3D-printed gimbal is strapped over the existing joystick but doesn't prevent caregivers from using it normally. The gimbal's servo motors actuate the joystick in response to commands from the Eyegaze.
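The gaze-to-joystick mapping can be pictured as a small lookup from discrete commands to pan/tilt servo angles. This is a hedged sketch of the idea only; the command names, angle values, and two-servo layout are invented for illustration and are not taken from the Eyedrivomatic firmware.

```python
# Hypothetical sketch of the Eyedrivomatic idea: translating a discrete
# eye-gaze command into pan/tilt servo angles that push the wheelchair
# joystick. All constants here are invented for illustration.

NEUTRAL = 90  # servo centered: joystick untouched, caregivers can use it
DEFLECT = 25  # degrees of servo travel for a full joystick push

COMMANDS = {
    "forward": (0, +1),
    "back":    (0, -1),
    "left":    (-1, 0),
    "right":   (+1, 0),
    "stop":    (0, 0),
}

def servo_angles(command):
    """Return (pan, tilt) servo angles for a gaze command."""
    dx, dy = COMMANDS[command]
    return (NEUTRAL + dx * DEFLECT, NEUTRAL + dy * DEFLECT)

print(servo_angles("forward"))  # (90, 115)
print(servo_angles("stop"))     # (90, 90)
```

Keeping "stop" identical to the neutral position is what lets the gimbal sit over the joystick without interfering with normal manual use.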
An anonymous reader writes: In August of last year, a Boeing 737 operated by Qantas experienced a tailstrike while taking off: the thrust wasn't great enough for the tail to clear the runway, so it clipped the ground. The investigation into the incident (PDF) has finally been completed, and it traced the accident to two data-entry errors that fed the wrong aircraft weight into the iPad app used to calculate takeoff thrust. "First, when working out the plane's takeoff weight on a notepad, the captain forgot to carry the '1,' resulting in an erroneous weight of 66,400kg rather than 76,400kg. Second, the co-pilot made a 'transposition error' when carrying out the same calculation on the Qantas on-board performance tool (OPT), an iPad app for calculating takeoff speed, amongst other things. 'Transposition error' is an investigatory euphemism for 'he accidentally hit 6 on the keyboard rather than 7.'" This caused the problem: "For a weight of 76,400kg and temperature of 35C, the engine thrust should've been set at 93.1 percent with a takeoff speed of 157 knots; instead, due to the errors, the thrust was set to 88.4 percent and takeoff speed was 146 knots."
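The obvious mitigation is a cross-check: both pilots compute the weight independently and the tool refuses to proceed on a mismatch. The sketch below illustrates that idea (the tolerance value is invented, not Qantas procedure) and also shows the unlucky twist in this incident: the captain's dropped carry and the co-pilot's transposed digit both produced 66,400kg, so a naive comparison of the two entries would have passed anyway.

```python
# Illustrative cross-check: accept the takeoff weight only if two
# independently computed values agree. The tolerance is invented.

def weights_agree(captain_kg, first_officer_kg, tolerance_kg=100):
    """True if the two independently entered weights match closely enough."""
    return abs(captain_kg - first_officer_kg) <= tolerance_kg

# The two errors from the report coincidentally produced the same number:
print(weights_agree(66400, 66400))  # True: both wrong, check passes anyway
print(weights_agree(76400, 66400))  # False: one correct entry catches it
```

This is why investigators tend to recommend checks against an independent reference (e.g. the load sheet) rather than only against a second manual entry.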
An anonymous reader writes: Security researchers from iPower Technologies found the old Conficker worm on brand-new police body cameras, straight out of the box. The worm is detected by almost all security vendors, but it persists because modern IoT devices can't yet run security products. This lets the worm spread, propagating to computers when a camera is connected to an unprotected workstation; one infected police computer is enough to let attackers steal government data. The source of the infection is not yet known. It is highly unlikely that the manufacturer planted it deliberately; a middleman involved in shipping is the more probable cause.
msm1267 writes: Researchers at this week's PacSec 2015 conference in Tokyo demonstrated how they were able to inject special control characters into a barcode so that a barcode reader will 'press' host system hotkeys and activate a particular function. The attacks, called BadBarcode, can be used against any keyboard wedge barcode scanner that supports ASCII control characters, and many do. An attacker can then use control commands to open or save files, launch a browser, or execute commands. Here are the presentation slides.
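The mechanism is simple: a keyboard-wedge scanner "types" every byte it decodes, and symbologies like Code 128 can encode the full ASCII range, control characters included. The sketch below simulates what the host sees; the hotkey table is a simplified, invented stand-in for a real host's shortcut mapping (ASCII 0x0F is the byte Ctrl+O produces, 0x13 is Ctrl+S).

```python
# Illustrative simulation of a keyboard-wedge scanner: every decoded byte
# becomes a keystroke, so control bytes in the barcode become hotkeys.

HOTKEYS = {
    "\x0f": "Ctrl+O (open-file dialog)",  # ASCII SI = Ctrl+O
    "\x13": "Ctrl+S (save)",             # ASCII DC3 = Ctrl+S
    "\x1b": "Esc",
}

def wedge_scanner_output(barcode_payload):
    """Describe the keystrokes the host receives for a scanned payload."""
    events = []
    for ch in barcode_payload:
        if ch in HOTKEYS:
            events.append(HOTKEYS[ch])
        else:
            events.append(f"key '{ch}'")
    return events

# A payload mixing normal data with an embedded Ctrl+O:
print(wedge_scanner_output("AB\x0f"))
```

What a given control byte actually triggers depends entirely on the application that has focus, which is why the researchers' advice is to disable "advanced ASCII" features on scanners that don't need them.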
An anonymous reader writes: The Bluetooth Special Interest Group (SIG) has announced its roadmap for Bluetooth Smart in 2016, promising a fourfold range increase in the low-energy, IoT-oriented version of the protocol, dedicated mesh networking, and a 100% increase in speed, all with no increase in energy consumption. The last set of upgrades to the protocol offered direct internet access and security enhancements. Since Bluetooth must currently contend with attacks on everything from cars to toilets, the increased range means that developers may not be able to rely on 'fleeting contact' as a security feature quite as much.
New submitter charliehotel writes: The Irish Aviation Authority announced that it will have its drone registry up and running by December 21st this year. This registry will be the first of its kind in Europe, and the Irish Aviation Authority will require all RPA / drones that weigh over 1kg to be registered; this includes model aircraft. I hope that the U.S.'s gathering storm of regulation doesn't start quite that small.
New submitter David Rothman writes: Scan a 300-page book in just five minutes or so? For a mere $199 and shipping — the current price on Indiegogo — a Chinese company says you can buy a device to do just that. And a related video is most convincing. The Czur scanner from CzurTek uses a speedy 32-bit MIPS CPU and fast software for scanning and correction. It comes with a foot pedal and even offers WiFi support. Create a book cloud for your DIY digital library? Imagine the possibilities for Project Gutenberg-style efforts, schools, libraries and the print-challenged as well as for booklovers eager to digitize their paper libraries for convenient reading on cellphones, e-readers and tablets. Even at the $400 expected retail price, this could be quite a bargain if the claims are true. I myself have ordered one at the $199 price.
An anonymous reader writes: VR is easy for video games but hard for live action: you don't know where the viewer will be in the virtual world, so you can't put the camera in the right place in the real world. Light field cameras are a natural fit for VR, though, because they're essentially holographic, capturing many viewing positions at once. Now Lytro has announced the first light field camera system designed specifically for VR capture, which could change everything. Wired seems similarly excited.
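The "many positions at once" point is the whole trick: because the capture stores a grid of viewpoints, playback can serve whichever view best matches where the viewer's head actually is, rather than being locked to one camera location. A minimal toy sketch (the grid spacing and positions are invented, and real renderers interpolate between views rather than snapping to the nearest one):

```python
# Toy sketch of light-field playback for VR: pick the captured viewpoint
# closest to the viewer's current head position.

def nearest_view(head_x, head_y, view_positions):
    """Return the captured camera position closest to the viewer's head."""
    return min(view_positions,
               key=lambda p: (p[0] - head_x) ** 2 + (p[1] - head_y) ** 2)

# A 3x3 grid of captured viewpoints (metres, in the capture plane):
grid = [(x * 0.1, y * 0.1) for x in range(3) for y in range(3)]
print(nearest_view(0.12, 0.18, grid))  # (0.1, 0.2)
```

A conventional 360-degree rig captures only one viewpoint, so head translation (as opposed to rotation) produces no parallax; the dense grid is what a light field adds.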
theodp writes: With its sweeping vistas and narration by the late Robin Williams, Apple's 2014 'Your Verse' ad dramatically showcased the many ways iPads might help people create, from making movies to calibrating wind turbines. So it's interesting that Microsoft's first ad for its new Surface Book (YouTube) bears a striking resemblance to the earlier Apple ad (YouTubeDoubler comparison). Which is probably only fair, since Apple's soon-to-be-released iPad Pro bears more than a passing resemblance to the Microsoft Surface. Hey, good artists copy, great artists steal, right? By the way, between the release of Microsoft's Surface Pro 4, Apple's iPad Pro, and Google's Pixel C, is the keyboard+touch interface poised to be a four-decade "overnight success"?
PC World reports that even as Microsoft pushes voice input on the desktop (in the form of an expanded role for its Cortana digital assistant), Google is responding to users' lack of interest in desktop voice search by dropping "OK Google" voice commands from the latest iteration of Chrome. This seems too bad to me, and I wish they'd at least leave voice input as an option. I've only lately been getting comfortable with search by voice on my phone, and though the results are hit or miss (my phone responds a bit too often to "OK," and stumbles even on some common words spoken clearly), when it works I really like it.
the_newsbeagle writes: With electrodes implanted in their neural tissue and a new brain-computer interface, two paralyzed people with ALS used their thoughts to control a computer cursor with unprecedented accuracy and speed. They showed off their skills by using a predictive text-entering program to type sentences, achieving a rate of 6 words per minute. While paralyzed people can type faster using other assistive technologies that are already on the market, like eye-gaze trackers and air-puff controllers, a brain implant could be the only option for paralyzed people who can't reliably control their eyes or mouth muscles.
hypnosec writes: A YouTuber named Tom Scott has built a 1,000-key keyboard with each key representing an emoji! Scott made the emoji keyboard from 14 keyboards and over 1,000 individually placed stickers. While he himself admits it is one of the crazier things he has built, the work he put in warrants appreciation. The keys carry individually placed emoji for food items, animals, plants, transport, national flags, and time, among others.
An anonymous reader writes: Pokemon Go marks Nintendo's biggest move into mobile yet: the augmented reality mobile game makes use of your location as well as your phone's camera to let you interact with pocket monsters in the real world. It's an audacious idea — with an accompanying trailer — but as one writer points out, it will have to nail a lot of different systems to build up an active community in the same way that developer Niantic has done for its previous game, Ingress. The author looks at Ingress to see where Nintendo and Niantic may draw inspiration, pointing out that the game's portal modding system could prove a great mechanism for allowing Pokemon evolutions. Expect plenty more Pokemon amiibo to interact with the upcoming wristband, too.
An anonymous reader writes: Microsoft CEO Satya Nadella was left a little embarrassed at a Salesforce conference today when he tested Cortana, Microsoft's personal virtual assistant, during a presentation. Slightly fluffing the question 'Show me my most at-risk opportunities,' Nadella was dismayed to find Cortana offering him a Bing page for the search term 'Show me to buy milk at this opportunity.' Two further attempts to surface his at-risk opportunities also failed, and eventually the CEO of Microsoft gave up. His stumble over the first attempt at the question seemed to floor Cortana, which uses the 'Einstein' AI engine and has been praised more for its accurate speech recognition than for its ability to understand what an array of interpreted words actually means.
New submitter mutherhacker writes: A group from Osaka University in Japan and McMaster University in Canada has presented a method to control a virtual 3D object using a smartphone [video]. The method was designed primarily for presentations, but it also applies to virtual reality with a head-mounted display, gaming, and even quadrocopter control. There is an open paper online, as well as a git repository for both the client and the server. The client smartphone communicates with the main computer over the network, using TUIO for touch and Google protocol buffers for orientation sensor data.
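The client/server split above amounts to the phone streaming small orientation packets to the presentation machine. The group's code uses protocol buffers; the sketch below substitutes plain fixed-size struct packing to stay dependency-free, so the wire format shown is an invention for illustration, not theirs.

```python
# Minimal sketch of streaming phone orientation to a host: a quaternion
# packed as a fixed 16-byte binary message. (The actual project uses
# Google protocol buffers; plain struct packing is substituted here.)

import struct

ORIENTATION_FMT = "<4f"  # quaternion w, x, y, z as little-endian floats

def pack_orientation(w, x, y, z):
    """Encode one orientation sample for sending over the network."""
    return struct.pack(ORIENTATION_FMT, w, x, y, z)

def unpack_orientation(packet):
    """Decode a received orientation sample back into a quaternion."""
    return struct.unpack(ORIENTATION_FMT, packet)

pkt = pack_orientation(1.0, 0.0, 0.0, 0.0)  # identity rotation
print(len(pkt))                   # 16 bytes per update
print(unpack_orientation(pkt))    # (1.0, 0.0, 0.0, 0.0)
```

At 16 bytes per sample, even a 100 Hz sensor stream is well under 2 KB/s, which is why a phone over Wi-Fi is a perfectly adequate 3D controller.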
An anonymous reader writes: I have a decent video camera, but it lacks a terminal for an external mic. However, I have a comparatively good audio recorder. What I'd like to do is "automagically" synchronize sound recorded on the audio recorder with video taken on the video camera, using Free / Open Source software on Linux, so I can dump in the files from each, hit "Go," and in the end get my video, synched with the separately recorded audio, in some sane file format. This seems simple, but maybe it isn't: the 800-pound gorilla in the room is PluralEyes, which evidently lots of people pay $200 for, and which doesn't have a Linux version. Partly it's that I'm cheap, partly it's that I like open source software for being open source, and partly it's that I already use Linux as my everyday desktop and resent needing to switch OSes to do what seems intuitively to be a simple task. (It seems like something VLC would do, considering its Swiss-Army-knife approach, but after pulling down all the menus I could find, I don't think that's the case.) I don't see this feature in any of the open source video editing programs, so as a fallback question for anyone using LiVES, Kdenlive, or another free/Free option: do you have a useful workflow for synching up externally recorded sound? I'd be happy even to find a simple solution that's merely gratis rather than Free, as long as it runs on Ubuntu.
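For what it's worth, the technique PluralEyes-style tools use is not magic: cross-correlate the camera's scratch audio against the recorder's track and shift by the lag that maximizes the correlation. A pure-Python toy on synthetic signals (real tools do this on downsampled audio envelopes, and only over a bounded search window):

```python
# Toy audio-sync by cross-correlation: find the sample offset at which
# the recorder's track best lines up with the camera's scratch audio.

def best_lag(reference, delayed, max_lag):
    """Return the shift of `delayed` that best matches `reference`."""
    def corr(lag):
        return sum(r * d for r, d in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=corr)

signal = [0, 0, 1, 3, 1, 0, 0, 2, 5, 2, 0, 0]
camera_track = signal
recorder_track = [0, 0, 0] + signal  # recorder started 3 samples early

print(best_lag(camera_track, recorder_track, max_lag=5))  # 3
```

Once the lag is known, muxing the shifted audio onto the video is a one-liner for ffmpeg, so a small script around this idea covers the whole "hit Go" workflow.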
An anonymous reader writes: The Department of Energy has approved the construction of the Large Synoptic Survey Telescope's 3.2-gigapixel digital camera, which will be the most advanced in the world. When complete, the camera will weigh more than three tons and take such high-resolution pictures that it would take 1,500 high-definition televisions to display one of them. According to SLAC: "Starting in 2022, LSST will take digital images of the entire visible southern sky every few nights from atop a mountain called Cerro Pachón in Chile. It will produce a wide, deep and fast survey of the night sky, cataloging by far the largest number of stars and galaxies ever observed. During a 10-year time frame, LSST will detect tens of billions of objects, the first time a telescope will observe more galaxies than there are people on Earth, and will create movies of the sky with unprecedented detail. Funding for the camera comes from the DOE, while financial support for the telescope and site facilities, the data management system, and the education and public outreach infrastructure of LSST comes primarily from the National Science Foundation (NSF)."
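The "1,500 high-definition televisions" figure checks out with back-of-envelope arithmetic, assuming a 1080p panel:

```python
# Sanity check of the HDTV comparison: 3.2 gigapixels vs. a 1080p screen.

camera_pixels = 3.2e9
hdtv_pixels = 1920 * 1080  # 2,073,600 pixels per 1080p panel

print(round(camera_pixels / hdtv_pixels))  # ~1543 screens
```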
An anonymous reader writes: Wired reports on the 'Sensel Morph' input device, which launched on Kickstarter yesterday and blew past its funding goal almost immediately. It's a tablet-sized touchpad, but the key feature is the ability to place custom overlays on it. For example, you can snap on a flexible keyboard and the device starts behaving like a normal keyboard. Other overlays can imitate a game controller or a musical instrument. It's sensitive enough to detect paintbrushes, or you can put a simple overlay on it and write with a pencil or pen. The magnetic connectors in these overlays tell the device how to process the input, and the company is making an open source API so developers can create their own. The touchpad has 20,000 individual sensors, with pressure sensitivity ranging from 5g to 5kg.