The Courts

Judge Blocks Pentagon's Effort To 'Punish' Anthropic With Supply Chain Risk Label 82

An anonymous reader quotes a report from CNN: A federal judge in California has indefinitely blocked the Pentagon's effort to "punish" Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," US District Judge Rita Lin wrote in a stinging 43-page ruling.

Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. But in her ruling, she made it clear she disapproved of the government's actions, which she said violated the company's First Amendment and due process rights. [...] "These broad measures do not appear to be directed at the government's stated national security interests," she wrote. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'" "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she added.
"We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," an Anthropic spokesperson said after the ruling. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
AI

OpenAI Abandons ChatGPT's Erotic Mode (techcrunch.com) 80

OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports: The proposed "adult mode," which CEO Sam Altman first floated in October, had sparked considerable controversy among tech watchdog groups as well as OpenAI's own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a "sexy suicide coach," The Wall Street Journal previously reported.

Amid the criticism, the feature's release was delayed multiple times. The Financial Times notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had "nothing further to add."

Government

Senators Demand to Know How Much Energy Data Centers Use (wired.com) 51

Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators press the agency to publicly collect "comprehensive, annual energy-use disclosures" on data centers, saying it's "essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families." Wired reports: In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA "is going to be an essential player in providing objective data and analysis to policymakers" with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover "energy sources, electricity consumption, site characteristics, server metrics, and cooling systems."

While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with more data collection, such as whether or not the energy surveys will be mandatory and whether or not the EIA will collect information on behind-the-meter power. This information will be especially crucial, the senators say, to make sure that big tech companies that signed the agreement at the White House earlier this month pledging that consumers won't bear the costs of data center electricity use will stick to their promises. "Without this data, policymakers, utility companies, and local communities are operating in the dark," the senators write.

The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. That provision is broad enough, says Ari Peskoe, an energy-law expert quoted in the report, that it could absolutely be interpreted to encompass data centers.
Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would "enact a reasonable pause to the development of AI to ensure the safety of humanity." It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.
Mozilla

Mozilla and Mila Team Up On Open Source AI Push 31

BrianFagioli writes: Mozilla just teamed up with Mila, the Quebec Artificial Intelligence Institute, to push open source AI -- and it feels like a direct response to Big Tech tightening its grip on the space. Instead of relying on closed models, the goal here is to build "sovereign AI" that's more transparent, privacy-focused, and actually under the control of developers and even governments. They're starting with things like private memory for AI agents, which sounds niche but matters if you care about where your data goes. Big question is whether open source can realistically keep up with the billions being poured into proprietary AI, but at least someone's trying to give folks an alternative. "Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn't just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn't a compromise on ambition. It's the higher bar," said Valerie Pisano, president and CEO of Mila.
Wikipedia

Wikipedia Bans Use of Generative AI 32

Wikipedia has banned the use of generative AI to write or rewrite articles, saying it "often violates several of Wikipedia's core content policies." That said, editors may still use it for translation or light refinements as long as a human carefully checks the copy for accuracy. Engadget reports: Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited." Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.

"My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."
Businesses

China Reviews $2 Billion Manus Sale To Meta As Founders Barred From Leaving Country (ft.com) 33

Chinese authorities have barred two Manus executives from leaving the country while investigating whether Meta's reported $2 billion acquisition of the Singapore-based AI startup violated foreign investment reporting rules. "Manus was founded in China but last year relocated its headquarters and core team to Singapore," notes the Financial Times. "Meta acquired it for $2 billion at the end of last year." The Financial Times reports: Manus's chief executive Xiao Hong and chief scientist Ji Yichao were summoned to a meeting in Beijing with the National Development and Reform Commission this month, according to three people with knowledge of the matter. They said Xiao and Ji were questioned on potential violations of foreign direct investment rules related to its onshore Chinese entities.

After the meeting, the Singapore-based executives were told they were not allowed to leave China because of a regulatory review, while they remain free to travel within the country, two of the people said. No formal investigation has been opened and no charges have been brought. Manus is actively seeking law firms and consultancies to help resolve the matter, said a person with knowledge of the move.

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
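The kind of account-level velocity signal the report describes can be sketched as a simple heuristic. This is a hypothetical illustration, not Reddit's actual tooling; the signal names and thresholds (`max_actions_per_hour`, `min_age_days`) are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical sketch of an account-level velocity check, one kind of
# signal the article describes. Reddit's real detection pipeline and
# thresholds are not public; everything below is illustrative.

@dataclass
class AccountActivity:
    account_age_days: float
    posts_last_hour: int
    comments_last_hour: int

def looks_fishy(a: AccountActivity,
                max_actions_per_hour: int = 30,
                min_age_days: float = 1.0) -> bool:
    """Flag accounts whose posting velocity is implausibly high,
    with a stricter limit for brand-new accounts."""
    actions = a.posts_last_hour + a.comments_last_hour
    # New accounts get a much lower tolerance before triggering review.
    if a.account_age_days < min_age_days and actions > max_actions_per_hour // 3:
        return True
    return actions > max_actions_per_hour

print(looks_fishy(AccountActivity(0.1, 15, 0)))   # new account posting fast -> True
print(looks_fishy(AccountActivity(400, 2, 5)))    # established, normal pace -> False
```

In practice such a flag would only route the account to a human-verification challenge, matching Reddit's statement that checks fire when something appears "fishy" rather than sitewide.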

To verify an account is human, Reddit will leverage third-party tools such as passkeys from Apple, Google, and YubiKey; biometric services like Face ID; or even Sam Altman's World ID. In some countries, government IDs may be used: Reddit notes this last category may be required in countries like the U.K. and Australia and in some U.S. states because of local age-verification regulations, but it's not the company's preferred method.
"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."
Robotics

Melania Trump Welcomes Humanoid Robot At White House Summit 94

Longtime Slashdot reader theodp writes: In Melania and the Robot, the New York Times reports on First Lady Melania Trump's inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:

"On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump's involvement of a humanoid robot in education policy was a first."

"Figure 3 delivered brief remarks and salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady's branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name "MELANIA" was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. 'The future of A.I. is personified,' she told her audience. 'It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.' She invited her guests to envision a future in which a robot philosopher educated children."
AI

Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties (thestar.com) 73

New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."

The applicant's lawyer said he could not understand "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."

The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.
AI

Apple Can Create Smaller On-Device AI Models From Google's Gemini 10

Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.
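The distillation process described here, where a small student model is trained to match a large teacher's soft outputs, can be illustrated with a toy sketch. This is a generic knowledge-distillation example in plain NumPy using linear stand-ins for both models; it is not Apple's or Google's actual pipeline, and the temperature value is an assumption for illustration:

```python
import numpy as np

# Toy knowledge-distillation sketch: a fixed "teacher" produces
# temperature-softened probabilities, and a smaller "student" is
# trained to match them, absorbing the teacher's decision boundary
# rather than just hard labels.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Teacher": a fixed linear scorer standing in for the big model.
W_teacher = np.array([[2.0, -1.0], [-1.5, 1.8]])
X = rng.normal(size=(200, 2))
teacher_probs = softmax(X @ W_teacher / 2.0)  # temperature T=2 softens targets

# "Student": a smaller linear model trained by gradient descent to
# minimize cross-entropy against the teacher's soft targets.
W_student = np.zeros((2, 2))
for _ in range(500):
    p = softmax(X @ W_student)
    grad = X.T @ (p - teacher_probs) / len(X)
    W_student -= 0.5 * grad

# After training, the distilled student reproduces the teacher's decisions.
agree = np.mean((X @ W_student).argmax(1) == (X @ W_teacher).argmax(1))
print(f"agreement: {agree:.2f}")
```

The reasoning-trace variant the report describes works on the same principle, except the "soft targets" are the teacher's full generated answers and step-by-step rationales, which the student is fine-tuned to reproduce.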

Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.
Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."
Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
AI

AI Economy Is a 'Ponzi Scheme,' Says AI Doc Director 58

An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...]

[Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...]

How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...]

Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."
AI

OpenAI Discontinues Sora Video Platform App 46

OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. "The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year," reports the Wall Street Journal.

CEO Sam Altman announced the changes to staff on Tuesday. "We're saying goodbye to Sora," the Sora Team said in a post on X. "To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work."

Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a single desktop "superapp." "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts," said CEO of Applications, Fidji Simo. "That fragmentation has been slowing us down and making it harder to hit the quality bar we want." This could be behind the decision to kill Sora as the company redirects its resources and top talent toward productivity tools that benefit both enterprises and individual users.
AI

Arm Unveils New AGI CPU With Meta As Debut Customer 29

Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan's SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold.

"It's a very pivotal moment for the company," CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company's cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology; the chip is made from two distinct pieces of silicon that operate as a single unit. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.
AI

Anthropic's Claude Can Now Use Your Computer To Finish Tasks 42

Anthropic is testing a new Claude feature that lets users send a request from their phone and have the AI carry it out directly on their computer, such as opening apps, using a browser, or editing files. The move follows the viral spread of OpenClaw earlier this year, which has gained cult popularity among devs for the ability to run local, 24/7 personal workflows. CNBC reports: Users can now message Claude a task from a phone, and the AI agent will then complete that task, Anthropic announced Monday. After being prompted, Claude can open apps on your computer, navigate a web browser and fill in spreadsheets, Anthropic said. One prompt Anthropic demonstrated in a video posted Monday is a user running late for a meeting. The user asks Claude to export a pitch deck as a PDF file and attach it to a meeting invite. The video shows Claude carrying out the task. [...]

Anthropic cautioned that computer use "is still early compared to Claude's ability to code or interact with text." "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving," Anthropic warned. The company added that it has built the computer use capability "with safeguards that minimize risk," and that Claude will always request permission before accessing new apps. Users can also use Dispatch, a feature Anthropic released last week in Claude Cowork, which lets users hold a continuous conversation with Claude from a phone or desktop and assign the agent tasks.
Businesses

Epic Games To Cut More Than 1,000 Jobs As Fortnite Usage Falls (reuters.com) 42

Epic Games is cutting more than 1,000 jobs as usage of its flagship title, Fortnite, falls. "The layoffs aren't related to AI," CEO Tim Sweeney noted. Reuters reports: The cuts, along with more than $500 million in savings from lower contracting and marketing spending and unfilled roles, would put the company in "a more stable place," Sweeney said in a note to employees. [...]

"We've had challenges delivering consistent Fortnite magic," Sweeney said, adding "market conditions today are the most extreme" since the early days of the company founded in 1991.

The move marks Epic's second major round of layoffs in three years. In September 2023, the company cut about 830 jobs, or roughly 16% of its workforce. It was not immediately clear what percentage of staff would be impacted by Tuesday's announcement.

Graphics

Nvidia CEO Says He's 'Empathetic' To DLSS 5 Concerns 107

Nvidia CEO Jensen Huang says he understands the concerns about "AI slop" with DLSS 5 but insists the feature preserves a game's underlying geometry and artistic intent. "I think their perspective makes sense," said Huang during a recent appearance on the Lex Fridman podcast. "And I could see where they're coming from because I don't love AI slop myself. You know, all of the AI-generated content increasingly looks similar, and they're all beautiful... so I'm empathic toward what they're thinking. That's just not what DLSS 5 is trying to do." Tom's Hardware reports: Although Huang is striking a more conciliatory tone, much of his response is similar to what we heard at GTC [where Huang said gamers were "completely wrong"]. "The artist determines the geometry, we are completely truthful to the geometry... so every single frame, it enhances, but it doesn't change anything." There was some confusion about how DLSS 5 worked when it was first announced, and although its inner workings still aren't clear on a technical level, Huang has said that it isn't a general-purpose generative AI model. He describes it as "content-controlled generative AI." On the other end of the spectrum, Huang also said that it isn't a post-processing filter. The technical details of DLSS 5 live somewhere between those two points, and we likely won't know them until later this year when the feature is set to release.

"The question about enhancing, DLSS 5... in the future, you could even prompt it. You know, I want it to be a toon shader. I want it to look like this, kind of. You could even give it an example and it would generate in the style of that, all consistent with the artistry, the style, the intent of the artist," Huang continued. "All of that is done for the artist so they can create something that is more beautiful but still in the style that they want." Although the talking points about DLSS 5 remain unchanged, it seems that Huang has at least heard the criticism. "I think that they got the impression that the games are going to come out the way the games are... and then we're going to post-process it. That's not what DLSS is intended to do."

Huang also made assertions that DLSS is "integrated" with the artist, and suggested that it would put the power of generative AI in the hands of artists working in game development [...]. Although DLSS 5 looks like it's doing a lot, Huang said that it's just another tool, not an essential feature. "The gamers might also appreciate that, in the last couple of years, we introduced skin shaders to game developers, and many of those games have skin shaders that include sub-surface scattering that makes skin look more skin-like... [DLSS 5] is just one more tool. They can decide what to use," Huang ended the conversation about DLSS 5. Immediately after, without missing a beat, he said 1993's Doom was the most influential video game ever made.
Transportation

Wing Expands Its Drone Delivery Service To the Bay Area (engadget.com) 26

Wing is expanding its drone delivery service to the San Francisco Bay Area. "The drone delivery startup has been rapidly expanding to metro areas across the US, but is now targeting the tech-friendly Silicon Valley region," reports Engadget. From the report: Going back to its inaugural deliveries, Wing ferried office supplies across Google's Mountain View campus in the Bay Area with its automated drones. It was still a startup out of Google's X, The Moonshot Factory incubator at the time, but early users were already asking for home delivery services, according to Wing. Now, Wing's latest delivery drones can deliver groceries, food, or whatever else fits in a small package weighing up to five pounds in 30 minutes or less to Bay Area residents. Earlier this year, Wing expanded its service to an additional 150 Walmart stores across the U.S. Service began recently in Atlanta and Charlotte, and it's coming soon to Los Angeles, Houston, Cincinnati, St. Louis, Miami and other major U.S. cities to be announced later. "By 2027, Walmart and Wing say they'll have a network of more than 270 drone delivery locations nationwide."
Facebook

Mark Zuckerberg Is Building an AI Agent To Help Him Be CEO (the-independent.com) 48

An anonymous reader quotes a report from the Wall Street Journal: Mark Zuckerberg wants everyone inside and outside his company to eventually have his or her own personal artificial-intelligence agent. He is starting with himself. Zuckerberg, the chief executive of Meta Platforms, is building a CEO agent to help him do his job (source paywalled; alternative source), according to a person familiar with the project. The agent, which is still in development, is currently helping Zuckerberg get information faster -- for instance, by retrieving answers for him that he would typically have to go through layers of people to get, the person familiar with the project said.

[...] Use of AI tools has spread quickly through the ranks at Meta -- in part because it is now a factor in employees' performance reviews. Meta's internal message board is filled with posts from employees sharing new AI use cases they have found and new tools they have built using AI, according to people familiar with the matter. [...] Employees have started using personal agent tools such as My Claw that have access to their chat logs and work files and can go talk to colleagues -- or their colleagues' own personal agents -- on their behalf, the people said. Another AI tool called Second Brain that is somewhere between a chatbot and an agent is also gaining momentum internally, according to people familiar with the matter. Second Brain was built by a Meta employee on top of Claude and can index and query documents for projects, among other uses. On the internal post announcing it to staff, the employee said it is "meant to be like an AI chief of staff."

There is even a group on the internal messaging board where employees' personal agents talk to each other, some of the people said. (Separately, Meta acquired Moltbook, the social-media site for AI agents, and hired its founders in a deal earlier this month.) Meta also recently acquired Manus, a Singapore-based startup that makes personal agents that can execute tasks for its users, and is using the tool internally, some of the people said. Meta recently established a new applied AI engineering organization that is tasked with using AI to help speed up development of the company's large language models. Those teams will have an ultraflat structure of as many as 50 individual contributors reporting to one manager, The Wall Street Journal previously reported. [...] Employees across the company said they have been encouraged to attend AI tutorial meetings several times a week and frequent AI hackathons, and to create their own AI tools to speed up their work.

Transportation

Uber's Deal Blitz To Stop a Robotaxi Monopoly (businessinsider.com) 17

Uber is aggressively partnering with multiple robotaxi companies to avoid a future dominated by Waymo or Tesla. The ride-hailing giant has struck deals with at least a dozen autonomous vehicle players in recent years. Just last week, it announced a $1.25 billion partnership with Rivian, with plans to deploy up to 50,000 driverless vehicles over the next decade. Business Insider reports: Uber announced three new robotaxi partnerships in the past few weeks with Zoox, Wayve-Nissan, and Rivian. In less than half a decade, the company has secured at least a dozen deals, including with WeRide, AVride, May Mobility, Momenta, Pony.ai, Wayve, Baidu's Apollo Go, Motional, and Lucid-Nuro. Still, fewer than half a dozen of Uber's partners have deployed fully driverless, paid robotaxi operations, and only one, Waymo, operates in the US. Uber has a joint deployment with Waymo in Atlanta, Austin, and Phoenix, but in other cities, Waymo is a competitor.

Uber's partnership spree is less about backing a single dominant player in autonomous driving. Instead, analysts told Business Insider that Uber is ensuring multiple vendors can participate in the expensive business of robotaxis -- fending off the real risk of a Waymo or Tesla scaling on its own -- and giving itself a stake in the robotaxi economy by being the aggregator of choice. "The more diversified the supplier base, the better for the network in the middle, which is Uber," Mark Mahaney, an Uber analyst at Evercore ISI, told Business Insider.
