AI

ChatGPT Just Got 'Absolutely Wrecked' at Chess, Losing to a 1970s-Era Atari 2600 (cnet.com) 139

An anonymous reader shared this report from CNET: By using a software emulator to run Atari's 1979 game Video Chess, Citrix engineer Robert Caruso said he was able to set up a match between ChatGPT and the 46-year-old game. The matchup did not go well for ChatGPT. "ChatGPT confused rooks for bishops, missed pawn forks and repeatedly lost track of where pieces were — first blaming the Atari icons as too abstract, then faring no better even after switching to standard chess notations," Caruso wrote in a LinkedIn post.

"It made enough blunders to get laughed out of a 3rd-grade chess club," Caruso said. "ChatGPT got absolutely wrecked at the beginner level."

"Caruso wrote that the 90-minute match continued badly and that the AI chatbot repeatedly requested that the match start over..." CNET reports.

"A representative for OpenAI did not immediately return a request for comment."
AI

Anthropic's CEO is Wrong, AI Won't Eliminate Half of White-Collar Jobs, Says NVIDIA's CEO (fortune.com) 32

Last week Anthropic CEO Dario Amodei said AI could eliminate half the entry-level white-collar jobs within five years. CNN called the remarks "part of the AI hype machine."

Asked about the prediction this week at a Paris tech conference, NVIDIA CEO Jensen Huang acknowledged AI may impact some employees, but "dismissed" Amodei's claim, according to Fortune. "Everybody's jobs will be changed. Some jobs will be obsolete, but many jobs are going to be created ... Whenever companies are more productive, they hire more people."

And he also said he "pretty much" disagreed "with almost everything" Anthropic's CEO says. "One, he believes that AI is so scary that only they should do it," Huang said of Amodei at a press briefing at Viva Technology in Paris. "Two, [he believes] that AI is so expensive, nobody else should do it ... And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it. I think AI is a very important technology; we should build it and advance it safely and responsibly," Huang continued. "If you want things to be done safely and responsibly, you do it in the open ... Don't do it in a dark room and tell me it's safe."

An Anthropic spokesperson told Fortune in a statement: "Dario has never claimed that 'only Anthropic' can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models' capabilities and risks and can prepare accordingly."

NVIDIA's CEO also touted the company's hybrid quantum-classical platform, CUDA-Q, and claimed quantum computing is hitting an "inflection point" and could start solving real-world problems within a few years.
China

Chinese AI Companies Dodge US Chip Curbs Flying Suitcases of Hard Drives Abroad (wsj.com) 20

An anonymous reader quotes a report from the Wall Street Journal: Since 2022, the U.S. has tightened the noose around the sale of high-end AI chips and other technology to China over national-security concerns. Yet Chinese companies have made advances using workarounds. In some cases, Chinese AI developers have been able to substitute domestic chips for the American ones. Another workaround is to smuggle AI hardware into China through third countries. But people in the industry say that has become more difficult in recent months, in part because of U.S. pressure. That is pushing Chinese companies to try a further option: bringing their data outside China so they can use American AI chips in places such as Southeast Asia and the Middle East (source paywalled; alternative source). The maneuvers are testing the limits of U.S. restrictions. "This was something we were consistently concerned about," said Thea Kendler, who was in charge of export controls at the Commerce Department in the Biden administration, referring to Chinese companies remotely accessing advanced American AI chips. Layers of intermediaries typically separate the Chinese users of American AI chips from the U.S. companies -- led by Nvidia -- that make them. That leaves it unclear whether anyone is violating U.S. rules or guidance. [...]

At the Chinese AI developer, the Malaysia game plans take months of preparation, say people involved in them. Engineers decided it would be fastest to fly physical hard drives with data into the country, since transferring huge volumes of data over the internet could take months. Before traveling, the company's engineers in China spent more than eight weeks optimizing the data sets and adjusting the AI training program, knowing it would be hard to make major tweaks once the data was out of the country. The Chinese engineers had turned to the same Malaysian data center last July, working through a Singaporean subsidiary. As Nvidia and its vendors began to conduct stricter audits on the end users of AI chips, the Chinese company was asked by the Malaysian data center late last year to work through a Malaysian entity, which the companies thought might trigger less scrutiny.

The Chinese company registered an entity in Kuala Lumpur, Malaysia's capital, listing three Malaysian citizens as directors and an offshore holding company as its parent, according to a corporate registry document. To avoid raising suspicions at Malaysian customs, the Chinese engineers packed their hard drives into four different suitcases. Last year, they traveled with the hard drives bundled into one piece of luggage. They returned to China recently with the results -- several hundred gigabytes of data, including model parameters that guide the AI system's output. The procedure, while cumbersome, avoided having to bring hardware such as chips or servers into China. That is getting more difficult because authorities in Southeast Asia are cracking down on transshipments through the region into China.

AI

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers 18

According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports: [Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure.

Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern. "Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per-token or per-API-call basis. This makes cost forecasting hard as use of the services scales up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized." [...] According to Canalys, cloud providers are aiming to improve inferencing efficiency via modernized infrastructure built for AI, and to reduce the cost of AI services.
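The forecasting problem Zhang describes follows directly from the pricing model: spend is a linear function of traffic and token counts, both of which fluctuate. A minimal sketch of the arithmetic (all prices and volumes below are hypothetical illustrations, not any provider's actual rates):

```python
# Rough monthly-cost model for per-token inference pricing.
# All numbers are hypothetical, not real provider rates.

def monthly_inference_cost(requests_per_day, avg_input_tokens, avg_output_tokens,
                           price_in_per_1k, price_out_per_1k, days=30):
    """Estimate monthly spend under per-token pricing."""
    tokens_in = requests_per_day * avg_input_tokens * days
    tokens_out = requests_per_day * avg_output_tokens * days
    return (tokens_in / 1000) * price_in_per_1k + (tokens_out / 1000) * price_out_per_1k

# Same application, two plausible traffic scenarios: because cost scales
# linearly with usage, a 5x traffic spike is a 5x bill.
base = monthly_inference_cost(10_000, 500, 300, 0.0005, 0.0015)    # $210/mo
spike = monthly_inference_cost(50_000, 500, 300, 0.0005, 0.0015)   # $1,050/mo
print(f"base: ${base:,.0f}/mo, spike: ${spike:,.0f}/mo")
```

Nothing in the formula bounds the bill, which is why enterprises facing volatile traffic end up restricting usage or capping deployment, as Zhang notes.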
The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide."

"However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Apple

The Vaporware That Apple Insists Isn't Vaporware 28

At WWDC 2024, Apple showed off a dramatically improved Siri that could handle complex contextual queries like "when is my mom's flight landing?" The demo was heavily edited due to latency issues and couldn't be shown in a single take. Multiple Apple engineers reportedly learned about the feature by watching the keynote alongside everyone else. Those features never shipped.

Now, nearly a year later, Apple executives Craig Federighi and Greg Joswiak are conducting press interviews claiming the 2024 demonstration wasn't "vaporware" because working code existed internally at the time. The company says the features will arrive "in the coming year" -- which Apple confirmed means sometime in 2026.

Apple is essentially arguing that internal development milestones matter more than actual product delivery. The executives have also been setting up strawman arguments, claiming critics expected Apple to build a ChatGPT competitor rather than addressing the core issue: announcing features to sell phones that then don't materialize. The company's timeline communication has been equally problematic, using euphemistic language like "in the coming year" instead of simply saying "2026" for features that won't arrive for nearly two years after announcement.

Developer Russell Ivanovic, in a Mastodon post: My guy. You announced something that never shipped. You made ads for it. You tried to sell iPhones based on it. What's the difference if you had it running internally or not. Still vaporware. Zero difference. MG Siegler: The underlying message that they're trying to convey in all these interviews is clear: calm down, this isn't a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple -- for years at this point -- that has led to the situation with Siri. Sorry, the situation which they're implying is not a situation. Though, I don't know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple -- of all companies -- to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air.

The smell of bullshit.
Further reading: Apple's Spin on the Personalized Siri Apple Intelligence Reset.
AI

Google's Test Turns Search Results Into an AI-Generated Podcast (theverge.com) 9

Google is rolling out a test that puts its AI-powered Audio Overviews on the first page of search results on mobile. From a report: The experiment, which you can enable in Labs, will let you generate an AI podcast-style discussion for certain queries. If you search for something like, "How do noise cancellation headphones work?", Google will display a button beneath the "People also ask" module that says, "Generate Audio Overview." Once you click the button, it will take up to 40 seconds to generate an Audio Overview, according to Google. The completed Audio Overview will appear in a small player embedded within your search results, where you can play, pause, mute, and adjust the playback speed of the clip.
Power

The Audacious Reboot of America's Nuclear Energy Program (msn.com) 122

The United States is mounting an ambitious effort to reclaim nuclear energy leadership after falling dangerously behind China, which now has 31 reactors under construction and plans 40 more within a decade. America produces less nuclear power than it did a decade ago and abandoned uranium mining and enrichment capabilities, leaving Russia controlling roughly half the world's enriched uranium market.

This strategic vulnerability has triggered an unprecedented response: venture capitalists invested $2.5 billion in US next-generation nuclear technology since 2021, compared to near-zero in previous years, while the Trump administration issued executive orders to accelerate reactor deployment. The urgency stems from AI's city-sized power requirements and recognition that America cannot afford to lose what Interior Secretary Doug Burgum calls "the power race" with China.

Companies like Standard Nuclear in Oak Ridge, Tennessee, exemplify this push, developing advanced reactor fuel despite employees working months without pay.
AI

Google's Gemini AI Will Summarize PDFs For You When You Open Them (theverge.com) 24

Google is rolling out new Gemini AI features for Workspace users that make it easier to find information in PDFs and form responses. From a report: The Gemini-powered file summarization capabilities in Google Drive have now expanded to PDFs and Google Forms, allowing key details and insights to be condensed into a more convenient format that saves users from manually digging through the files.

Gemini will proactively create summary cards when users open a PDF in their drive and present clickable actions based on its contents, such as "draft a sample proposal" or "list interview questions based on this resume." Users can select any of these options to make Gemini perform the desired task in the Drive side panel. The feature is available in more than 20 languages and started rolling out to Google Workspace users on June 12th, though it may take a couple of weeks to appear.

AI

Salesforce Blocks AI Rivals From Using Slack Data (theinformation.com) 9

An anonymous reader shares a report: Slack, an instant-messaging service popular with businesses, recently blocked other software firms from searching or storing Slack messages even if their customers permit them to do so, according to a public disclosure from Slack's owner, Salesforce.

The move, which hasn't previously been reported, could hamper fast-growing artificial intelligence startups that have used such access to power their services, such as Glean. Since the Salesforce change, Glean and other applications can no longer index, copy or store the data they access via the Slack application programming interface on a long-term basis, according to the disclosure. Salesforce will continue allowing such firms to temporarily use and store their customers' Slack data, but they must delete the data, the company said.

Power

Meta Inks a New Geothermal Energy Deal To Support AI (theverge.com) 27

Meta has struck a new deal with geothermal startup XGS Energy to supply 150 megawatts of carbon-free electricity for its New Mexico data center. "Advances in AI require continued energy to support infrastructure development," Urvi Parekh, global head of energy at Meta, said in a press release. "With next-generation geothermal technologies like XGS ready for scale, geothermal can be a major player in supporting the advancement of technologies like AI as well as domestic data center development." The Verge reports: Geothermal plants generate electricity using Earth's heat, typically by drawing up hot fluids or steam from natural reservoirs to turn turbines. That tactic is limited by natural geography, however, and the US gets around half a percent of its electricity from geothermal sources. Startups including XGS are trying to change that by making geothermal energy more accessible. Last year, Meta made a separate 150MW deal with Sage Geosystems to develop new geothermal power plants. Sage is developing technologies to harness energy from hot, dry rock formations by drilling and pumping water underground, essentially creating artificial reservoirs. Google has its own partnership with another startup called Fervo developing similar technology.

XGS Energy is also seeking to exploit geothermal energy from dry rock resources. It tries to set itself apart by reusing water in a closed-loop process designed to prevent water from escaping into cracks in the rock. The water it uses to take advantage of underground heat circulates inside a steel casing. Conserving water is especially crucial in a drought-prone state like New Mexico, where Meta is expanding its Los Lunas data center. Meta declined to say how much it's spending on this deal with XGS Energy. The initiative will roll out in two phases with a goal of being operational by 2030.

Facebook

The Meta AI App Is a Privacy Disaster (techcrunch.com) 20

Meta's standalone AI app is broadcasting users' supposedly private conversations with the chatbot to the public, creating what could amount to a widespread privacy breach. Users appear largely unaware that hitting the app's share button publishes their text exchanges, audio recordings, and images for anyone to see.

The exposed conversations reveal sensitive information: people asking for help with tax evasion, whether family members might face arrest for proximity to white-collar crimes, and requests to write character reference letters that include real names of individuals facing legal troubles. Meta provides no clear indication of privacy settings during posting, and if users log in through Instagram accounts set to public, their AI searches become equally visible.
Facebook

Meta Invests $14.3 Billion in Scale AI 13

Meta has invested $14.3 billion in Scale AI while recruiting the startup's CEO to join its AI team, marking an aggressive move by the social media giant to accelerate its AI development efforts. The unusual deal gives Meta a 49% non-voting stake in Scale, valuing the company at more than $29 billion. Scale co-founder Alexandr Wang will join Meta's "superintelligence" unit, which focuses on building AI systems that perform as well as humans -- a theoretical milestone known as artificial general intelligence.

Wang will remain on Scale's board while Jason Droege takes over as interim CEO. The investment represents Meta's intensified push to compete in AI development after CEO Mark Zuckerberg grew frustrated with the lukewarm reception of the company's Llama 4 language model, which launched in April. Since then, Zuckerberg has taken a hands-on approach to recruiting AI talent, hosting job candidates at his personal homes and reorganizing Meta's offices to position the superintelligence team closer to his workspace.
AI

Barbie Goes AI As Mattel Teams With OpenAI To Reinvent Playtime (nerds.xyz) 62

BrianFagioli writes: Barbie is getting a brain upgrade. Mattel has officially partnered with OpenAI in a move that brings artificial intelligence to the toy aisle. Yes, you read that right, folks. Barbie might soon be chatting with your kids in full sentences, powered by ChatGPT.

This collaboration brings OpenAI's advanced tools into Mattel's ecosystem of toys and entertainment brands. The goal? To launch AI-powered experiences that are fun, safe, and age-appropriate. Mattel says it wants to keep things magical while also respecting privacy and security. Basically, Barbie won't be data-mining your kids... yet.

Businesses

Canva Now Requires Use of LLMs During Coding Interviews 85

An anonymous reader quotes a report from The Register: Australian SaaS-y graphic design service Canva now requires candidates for developer jobs to use AI coding assistants during the interview process. [...] Canva's hiring process previously included an interview focused on computer science fundamentals, during which it required candidates to write code using only their actual human brains. The company now expects candidates for frontend, backend, and machine learning engineering roles to demonstrate skill with tools like Copilot, Cursor, and Claude during technical interviews, Canva head of platforms Simon Newton wrote in a Tuesday blog post.

His rationale for the change is that nearly half of Canva's frontend and backend engineers use AI coding assistants daily, that it's now expected behavior, and that the tools are "essential for staying productive and competitive in modern software development." Yet Canva's old interview process "asked candidates to solve coding problems without the very tools they'd use on the job," Newton admitted. "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," he added. Candidates were already starting to use AI assistants during interview tasks -- and sometimes used subterfuge to hide it. "Rather than fighting this reality and trying to police AI usage, we made the decision to embrace transparency and work with this new reality," Newton wrote. "This approach gives us a clearer signal about how they'll actually perform when they join our team."
The initial reaction among engineers "was worry that we were simply replacing rigorous computer science fundamentals with what one engineer called 'vibe-coding sessions,'" Newton said.

The company addressed these concerns with a recruitment process that sees candidates expected to use their preferred AI tools, to solve what Newton described as "the kind of challenges that require genuine engineering judgment even with AI assistance." Newton added: "These problems can't be solved with a single prompt; they require iterative thinking, requirement clarification, and good decision-making."
The Internet

Abandoned Subdomains from Major Institutions Hijacked for AI-Generated Spam (404media.co) 17

A coordinated spam operation has infiltrated abandoned subdomains belonging to major institutions including Nvidia, Stanford University, NPR, and the U.S. government's vaccines.gov site, flooding them with AI-generated content that subsequently appears in search results and Google's AI Overview feature.

404 Media reports that the operation posted over 62,000 articles on Nvidia's events.nsv.nvidia.com subdomain before the company took it offline within two hours of being contacted by reporters. The spam articles, which included explicit gaming content and local business recommendations, used identical layouts and a fake byline called "Ashley" across all compromised sites. Each targeted domain operates under different names -- "AceNet Hub" on Stanford's site, "Form Generation Hub" on NPR, and "Seymore Insights" on vaccines.gov -- but all redirect traffic to a marketing spam page. The operation exploits search engines' trust in institutional domains, with Google's AI Overview already serving the fabricated content as factual information to users searching for local businesses.
AI

Large Language Models, Small Labor Market Effects (nber.org) 18

The abstract of a study featured on NBER: We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark.

AI chatbots are now widespread -- most employers encourage their use, many deploy in-house models, and training initiatives are common. These firm-led investments boost adoption, narrow demographic gaps in take-up, enhance workplace utility, and create new job tasks. Yet, despite substantial investments, economic impacts remain minimal. Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 3%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI.
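The difference-in-differences logic behind the "precise zeros" can be illustrated with synthetic numbers (invented here for illustration; the study itself uses matched employer-employee data from Denmark): compare the before/after change in earnings at adopting workplaces against the change at non-adopting ones, netting out trends common to both.

```python
# Minimal difference-in-differences estimate on synthetic earnings data.
# All numbers are invented for illustration only.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD: (treated group's change) minus (control group's change)
    nets out time trends shared by both groups."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Mean monthly earnings (hypothetical): both groups drift up by the same
# amount over time, so the adopters show no differential gain -- a zero.
effect = did_estimate(treat_pre=40_000, treat_post=40_800,
                      ctrl_pre=39_500, ctrl_post=40_300)
print(effect)  # prints 0
```

The study's version of this runs as a regression with controls and quasi-experimental variation from employer policies, but the identifying comparison is the same subtraction.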

Power

Talen Energy and Amazon Sign Nuclear Power Deal To Fuel Data Centers 16

Amazon Web Services has signed a long-term deal with Talen Energy to receive up to 1,920 megawatts of carbon-free electricity from the Susquehanna nuclear plant through 2042 to support AWS's AI and cloud operations. The partnership also includes plans to explore new Small Modular Reactors and expand nuclear capacity amid rising U.S. energy demand. Utility Dive reports: Under the PPA, Talen's existing 300-MW co-location arrangement with AWS will shift to a "front of the meter" framework that doesn't require Federal Energy Regulatory Commission approval, according to Houston-based Talen. The company expects the transition will occur next spring after transmission upgrades are finished. FERC in November rejected an amended interconnection service agreement that would have facilitated expanded power sales to a co-located AWS data center at the Susquehanna plant. The agency is considering potential rules for co-located loads in PJM.

Talen expects to earn about $18 billion in revenue over the life of the contract at its full quantity, according to an investor presentation. The contract, which runs through 2042, calls for delivering 840 MW to 1,200 MW in 2029 and 1,680 MW to 1,920 MW in 2032. Talen will act as the retail power supplier to AWS, and PPL Electric Utilities will be responsible for transmission and delivery, the company said.
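A back-of-envelope check of the reported figures (assuming, purely for illustration, flat delivery at the full 1,920 MW over roughly the 2029-2042 span; actual deliveries ramp up from 840 MW, so the true implied price per MWh is somewhat higher than this floor):

```python
# Rough implied power price from the reported contract figures.
# Assumes flat full-quantity delivery, which overstates total volume
# (deliveries actually ramp from 840 MW in 2029), so this is a floor
# on the implied $/MWh, not a precise contract price.

revenue = 18e9        # reported contract revenue, dollars
capacity_mw = 1920    # full contracted quantity
years = 13.5          # ~2029 through 2042
hours = years * 8760  # hours in the delivery window

mwh = capacity_mw * hours
price_per_mwh = revenue / mwh
print(f"~${price_per_mwh:.0f}/MWh implied at full flat delivery")
```

Under these assumptions the figures imply a price on the order of $80/MWh, in the ballpark of recent long-term nuclear PPAs.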
Amazon on Monday said it plans to spend about $20 billion building data centers in Pennsylvania.

"We are making the largest private sector investment in state history -- $20 billion-- to bring 1,250 high-skilled jobs and economic benefits to the state, while also collaborating with Talen Energy to help power our infrastructure with carbon-free energy," Kevin Miller, AWS vice president of global data centers, said.
Advertising

Amazon Is About To Be Flooded With AI-Generated Video Ads 30

Amazon has launched its AI-powered Video Generator tool in the U.S., allowing sellers to quickly create photorealistic, motion-enhanced video ads often with a single click. "We'll likely see Amazon retailers utilizing AI-generated video ads in the wild now that the tool is generally available in the U.S. and costs nothing to use -- unless the ads are so convincing that we don't notice anything at all," says The Verge. From the report: New capabilities include motion improvements to show items in action, which Amazon says is best for showcasing products like toys, tools, and worn accessories. For example, Video Generator can now create clips that show someone wearing a watch on their wrist and checking the time, instead of simply displaying the watch on a table. The tool generates six different videos to choose from, and allows brands to add their logos to the finished results.

The Video Generator can now also make ads with multiple connected scenes that include humans, pets, text overlays, and background music. The editing timeline shown in Amazon's announcement video suggests the ads max out at 21 seconds. The resulting ads edge closer to the traditional commercials we're used to seeing while watching TV or online content, compared to raw clips generated by video AI tools like OpenAI's Sora or Adobe Firefly.

A new video summarization feature can create condensed video ads from existing footage, such as demos, tutorials, and social media content. Amazon says Video Generator will automatically identify and extract key clips to generate new videos formatted for ad campaigns. A one-click image-to-video feature is also available that creates shorter GIF-style clips to show products in action.
Robotics

Scientists Built a Badminton-Playing Robot With AI-Powered Skills (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said.

Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and to work around them.
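One ingredient of each training unit is predicting where the shuttlecock will land. A toy version of that prediction, integrating a point mass with quadratic air drag (the mass and drag coefficients here are rough guesses for illustration, not the values or model used by the ETH team):

```python
import math

# Toy shuttlecock trajectory prediction: explicit Euler integration of a
# point mass with quadratic air drag. Coefficients are rough guesses for
# illustration, not the ETH Zurich team's model.

def predict_trajectory(pos, vel, mass=0.005, drag_k=0.002, g=9.81, dt=0.002):
    """Integrate (x, z) until the shuttlecock returns to ground height z=0."""
    x, z = pos
    vx, vz = vel
    points = [(x, z)]
    while z >= 0.0:
        speed = math.hypot(vx, vz)
        # Quadratic drag opposes the velocity vector; gravity pulls down.
        ax = -(drag_k / mass) * speed * vx
        az = -g - (drag_k / mass) * speed * vz
        vx += ax * dt
        vz += az * dt
        x += vx * dt
        z += vz * dt
        points.append((x, z))
    return points

# Shuttle struck 1.5 m above the court at 8 m/s forward, 4 m/s upward.
path = predict_trajectory(pos=(0.0, 1.5), vel=(8.0, 4.0))
print(f"predicted landing ~{path[-1][0]:.2f} m downrange")
```

The high drag-to-mass ratio is what makes shuttlecock flight so unlike a ballistic arc, and it is one reason trajectory prediction from noisy camera data is a learned skill rather than a closed-form calculation in the robot's pipeline.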

The idea behind training the control algorithms was to develop visuo-motor skills similar to human badminton players. The robot was supposed to move around the court, anticipating where the shuttlecock might go next and position its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors.

Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came with a trick where it stood on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.
The findings have been published in the journal Science Robotics.

You can watch a video of the four-legged robot playing badminton on YouTube.
