Software

LibreOffice Says Its UI Is Way Better Than Microsoft Office's (neowin.net) 235

darwinmac writes: While many users choose Microsoft Office over LibreOffice because of its support for the proprietary formats (.docx, .xlsx, and .pptx), others prefer Office for its "better" ribbon interface. These users often criticize LibreOffice for having a "clunky" UI instead of the "standard" ribbon interface you would find in Word, Excel, and other Office apps.

Now, Neowin reports that LibreOffice is fighting back, arguing that its UI is actually superior because it is customizable, with several modes such as the classic toolbar interface, an Office-inspired ribbon layout, a sidebar-focused design, and more. Furthermore, it argues that there is no evidence that the ribbon offers "superior usability" over other interface modes.
LibreOffice says in a blog post: Incidentally, the characterization of ribbon-style interfaces as "modern" or "standard," used by several users, is not based on any objective usability parameter or design principle, but is the result of Microsoft's dominance in the market and the huge investments made when the ribbon was introduced in Office 2007 as a new paradigm for productivity software. The idea that "modern" equals "similar to a ribbon" is a normalization effect: the Microsoft interface has become a benchmark because of its ubiquity, not because of its proven advantages in terms of usability. Added to this is the fact that many users evaluate office software through the lens of familiarity with Microsoft Office and consider deviation from it as a problem rather than a design choice.

Before this, LibreOffice had also criticized its competitor OnlyOffice, accusing it of being "fake open source" because it believes OnlyOffice is working with Microsoft to lock users into the Office ecosystem by prioritizing the formats mentioned earlier instead of LibreOffice's own OpenDocument Format (ODF).
Displays

Apple Launches New M5 Chips, MacBook Pro, and First New Monitors In Years (apple.com) 47

Today, Apple updated the MacBook Pro and MacBook Air with support for its new M5 chips. It also unveiled a pair of all-new Studio Display monitors. Longtime Slashdot reader jizmonkey shares details about the M5 Pro and M5 Max chips, which look to be fairly major updates from the previous generation: Apple announced its newest CPUs today, which it claims have the fastest single-threaded performance in the world. Both the M5 Pro and M5 Max have eighteen-core designs, versus twelve or fourteen in the M4 Pro and fourteen or sixteen in the M4 Max. However, the number of higher-performing cores has been reduced significantly. In the older M4 designs, the chips had eight, ten, or twelve "performance" cores and four "efficiency" cores. In the M5 design, there are now only six higher-performing cores (now called "super" cores) and twelve lower-performing cores (now called "performance" cores). [Apple positions this "reduction" as a redesigned architecture with new core types.] The maximum amount of RAM remains the same at 128GB for the M5 Max (64GB for the M5 Pro), and GPU performance has increased. [The M5 Pro features up to a 20-core GPU, while the M5 Max scales up to 40 cores, each equipped with a Neural Accelerator. Apple also says the new architecture delivers over 4x peak GPU compute for AI compared to the previous generation, along with up to 35 percent faster performance in ray-traced graphics workloads.] Laptops with the new chips are available to order starting tomorrow and will be delivered starting March 11. As for the new XDR monitors, MacRumors highlights some of the key features in its reporting: Apple today introduced an all-new Studio Display XDR monitor with a 27-inch screen, mini-LED backlighting, 5K resolution, peak brightness of 2,000 nits for HDR content, up to a 120Hz refresh rate, Thunderbolt 5, and more. The new Studio Display XDR replaces Apple's former Pro Display XDR, which has been discontinued.
Going forward, there are now two Studio Display models.

Both new Studio Display models have the same overall design as the original model. Both models have a 12-megapixel Center Stage camera, but it now supports Desk View on the new models. Both models also feature an upgraded six-speaker system, with Apple advertising "30 percent deeper bass" compared to the previous model. Only the higher-end Studio Display XDR received a 120Hz refresh rate, mini-LED backlighting, increased brightness, and faster 140W pass-through charging. The regular Studio Display still has a 60Hz refresh rate and up to 600 nits of brightness. Both models have 27-inch displays with a 5K resolution.

The new Studio Displays can be pre-ordered starting Wednesday, March 4, ahead of a Wednesday, March 11 launch. In the U.S., the regular Studio Display continues to start at $1,599, while the Studio Display XDR starts at $3,299.

The Military

Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei (calcalistech.com) 197

An anonymous reader shares a CTech article with the caption: "A brilliantly executed operation." From the report: Years before the air strike that killed Ayatollah Ali Khamenei, Israeli intelligence had been quietly mapping the daily rhythms of Tehran. According to reporting by the Financial Times (paywalled), nearly all of the Iranian capital's traffic cameras had been hacked years earlier, their footage encrypted and transmitted to Israeli servers. One camera angle near Pasteur Street, close to Khamenei's compound, allowed analysts to observe the routines of bodyguards and drivers: where they parked, when they arrived and whom they escorted. That data was fed into complex algorithms that built what intelligence officials call a "pattern of life," detailed profiles including addresses, work schedules and, crucially, which senior officials were being protected and transported. The surveillance stream was one of hundreds feeding Israel's intelligence system, which combines signals interception from Unit 8200, human assets recruited by the Mossad and large-scale data analysis by military intelligence.

When US and Israeli intelligence determined that Khamenei would attend a Saturday morning meeting at his compound, the opportunity was judged unusually favorable. Two people familiar with the operation told the FT that US intelligence provided confirmation from a human source that the meeting was proceeding as planned, a level of certainty required for a target of such magnitude. Israeli aircraft, reportedly airborne for hours, fired as many as 30 precision munitions. The strike was carried out in daylight, which the Israeli military said created tactical surprise despite heightened Iranian alertness. The Financial Times reports that the assassination was a political decision as much as a technological feat. Even during last year's 12-day war, when Israeli strikes killed more than a dozen Iranian nuclear scientists and senior military officials and disabled air defences through cyber operations and drones, Israel did not attempt to kill Khamenei.

The capability to do so, however, had been built over decades. Former Mossad official Sima Shine told the FT that Israel's strategic focus on Iran dates back to a 2001 directive from then-prime minister Ariel Sharon instructing intelligence chief Meir Dagan to make the Islamic Republic the priority target. What distinguishes the latest operation, according to the FT, is the scale of automation. Target tracking that once required painstaking visual confirmation has increasingly been handled by algorithm-driven systems parsing billions of data points. One person familiar with the process described it as an "assembly line with a single product: targets."
Further reading: America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It
Cloud

Amazon Cloud Unit's Data Centers In UAE, Bahrain Damaged In Drone Strikes (reuters.com) 55

sizzlinkitty shares a Reuters report detailing how drone strikes in the Middle East conflict with Iran damaged AWS data centers in the UAE and Bahrain, disrupting core cloud services and causing "prolonged" outages. Following the initial report, where Reuters said "objects" had triggered a fire at the data centers, the article was updated with additional information: A strike on the UAE facility marks the first time a major U.S. tech company's data center has been disrupted by military action. It raises questions around Big Tech's pace of expansion in the region. "In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impact to our infrastructure," Amazon's cloud unit Amazon Web Services (AWS) said in an update on its status page. "These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage," AWS said. "We are working to restore full service availability as quickly as possible, though we expect recovery to be prolonged given the nature of the physical damage involved," it added.

Financial institutions that use AWS services have been affected by the outage, one person with direct knowledge of the situation told Reuters, requesting anonymity because of the sensitivity of the matter. "Even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable," AWS said. The AWS outage disrupted a dozen core cloud services and the company advised customers to back up critical data and shift operations to servers in unaffected AWS regions. Abu Dhabi Commercial Bank said its platforms and mobile app were unavailable due to a region-wide IT disruption, although it did not directly link the outage to the AWS incident.
"In previous conflicts, regional adversaries such as Iran and its proxies targeted pipelines, refineries, and oil fields in Gulf partner states. In the compute era, these actors could also target data centers, energy infrastructure supporting compute, and fiber chokepoints," Washington-based think tank Center for Strategic and International Studies said last week.
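AWS's advice to shift workloads to unaffected regions is, on the client side, essentially a failover policy that applications can encode. A minimal sketch (the region preference list is an illustrative assumption, not AWS guidance; `me-central-1` and `me-south-1` are the UAE and Bahrain regions):

```python
# Ordered failover list: earlier regions are preferred when healthy.
PREFERRED_REGIONS = ["me-central-1", "me-south-1", "eu-central-1", "eu-west-1"]

def pick_region(unavailable: set) -> str:
    """Return the most-preferred region not currently marked unavailable."""
    for region in PREFERRED_REGIONS:
        if region not in unavailable:
            return region
    raise RuntimeError("no healthy region available")

# With both Gulf regions impaired, requests fail over to Europe.
print(pick_region({"me-central-1", "me-south-1"}))  # eu-central-1
```

Real failover also involves data replication and health checks, but the routing decision itself reduces to a preference list like this.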
AI

Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri 21

Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI.

The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud.
Apple's Private Cloud Compute is not only underpowered but it's also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average, leading to some already-manufactured Apple servers sitting dormant on warehouse shelves."
Businesses

Charter Gets FCC Permission To Buy Cox, Become Largest ISP In the US (arstechnica.com) 59

An anonymous reader quotes a report from Ars Technica: Charter Communications, operator of the Spectrum cable brand, has obtained Federal Communications Commission permission to buy Cox and surpass Comcast as the country's largest home Internet service provider. Charter has 29.7 million residential and business Internet customers compared to Comcast's 31.26 million. Buying Cox will give Charter another 5.9 million Internet customers. The FCC approved the deal on Friday, but the companies still need Justice Department approval and sign-offs from states including California and New York.

Opponents of Charter's $34.5 billion acquisition told the FCC that eliminating Cox as an independent entity will make it easier for Charter and Comcast to raise prices. But the FCC dismissed those concerns on the grounds that Charter and Cox don't compete directly against each other in the vast majority of their territories.

FCC Chairman Brendan Carr's primary demand from companies seeking to merge has been to eliminate diversity, equity, and inclusion (DEI) programs and policies. In a press release (PDF), the Carr-led FCC said that "Charter has committed to new safeguards to protect against DEI discrimination," and that Charter's network-expansion plans will bring "faster broadband and lower prices" to rural areas. The merger was approved one day after Charter sent a letter to Carr outlining its actions to end DEI. Charter offers broadband and cable service in 41 states, while Cox does so in 18 states.

Windows

Microsoft Bans 'Microslop' On Its Discord, Then Locks the Server (windowslatest.com) 82

Over the weekend, Windows Latest noticed that Microsoft's official Copilot Discord server began automatically blocking the term "Microslop." As shown in a screenshot, any message containing the word is automatically prevented from posting, and users receive a moderation notice explaining that the message includes language deemed inappropriate under the server's rules. From the report: Windows Latest found that sending a message with the word "Microslop" inside the official Copilot Discord server immediately triggers an automated moderation response. The message does not appear publicly in the channel, and instead, only the sender sees the notice stating that the content is blocked by the server because it contains a phrase deemed inappropriate.

Of course, the internet rarely leaves things there. Shortly after Windows Latest posted about the Copilot Discord server blocking "Microslop" on X, users began experimenting in the server with variations such as "Microsl0p," using a zero instead of the letter "o." Predictably, those versions slipped past the filter. Keyword moderation has always been something of a cat-and-mouse game, and this isn't any different.
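The evasion dynamic is easy to reproduce. A minimal sketch (the blocklist and substitution map are illustrative; Discord's actual AutoMod rules are not public at this level of detail):

```python
# Naive exact-match filter, in the spirit of a simple AutoMod keyword rule.
BLOCKED = {"microslop"}

# Map common leetspeak substitutions back to letters before matching.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s", "@": "a"})

def blocked_naive(message: str) -> bool:
    """Exact substring match: trivially defeated by character swaps."""
    return any(word in message.lower() for word in BLOCKED)

def blocked_normalized(message: str) -> bool:
    """Normalize lookalike characters first, catching variants like 'Microsl0p'."""
    normalized = message.lower().translate(LEET)
    return any(word in normalized for word in BLOCKED)

print(blocked_naive("Microsl0p"))       # False: the zero slips past
print(blocked_normalized("Microsl0p"))  # True: normalization catches it
```

Even normalization only raises the cost of evasion; spacing tricks, homoglyphs from other scripts, and deliberate misspellings keep the game going.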

What started as a simple keyword filter quickly snowballed into users deliberately testing the restriction and posting variations of the blocked term. Accounts that included "Microslop" in their messages were the first to be barred from posting. Not long after, access to parts of the server was restricted, with message history hidden and posting permissions disabled for many users.

Android

Motorola Partners With GrapheneOS 72

At MWC 2026, Motorola announced a partnership with the GrapheneOS Foundation to bring the hardened, Google-free Android variant to future devices. Until now, the OS had been designed exclusively for Google Pixel phones. "We are thrilled to be partnering with Motorola to bring GrapheneOS's industry-leading privacy and security-focused mobile operating system to their next-generation smartphone," a GrapheneOS statement reads. "This collaboration marks a significant milestone in expanding the reach of GrapheneOS, and we applaud Motorola for taking this meaningful step towards advancing mobile security."

GrapheneOS is a privacy- and security-focused mobile OS with Android app compatibility, developed as a non-profit open-source project. It's often referred to as the "de-Googled OS" because Google apps are not available by default. However, users can install them via a sandboxed version of Google Play Services.
Software

What's Driving the SaaSpocalypse (techcrunch.com) 69

An anonymous reader quotes a report from TechCrunch: One day not long ago, a founder texted his investor with an update: he was replacing his entire customer service team with Claude Code, an AI tool that can write and deploy software on its own. To Lex Zhao, an investor at One Way Ventures, the message indicated something bigger -- the moment when companies like Salesforce stopped being the automatic default. "The barriers to entry for creating software are so low now thanks to coding agents, that the build versus buy decision is shifting toward build in so many cases," Zhao told TechCrunch.

The build versus buy shift is only part of the problem. The whole idea of using AI agents instead of people to perform work throws into question the SaaS business model itself. SaaS companies currently price their software per seat -- meaning by how many employees log in to use it. "SaaS has long been regarded as one of the most attractive business models due to its highly predictable recurring revenue, immense scalability, and 70-90% gross margins," Abdul Abdirahman, an investor at the venture firm F-Prime, told TechCrunch. When one, or a handful, of AI agents can do that work -- when employees simply ask their AI of choice to pull the data from the system -- that per-seat model starts to break down.
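The per-seat arithmetic behind that worry is easy to make concrete. A toy model (all prices and headcounts are hypothetical, not figures from the article):

```python
SEAT_PRICE = 100  # hypothetical monthly price per seat, in dollars

def annual_revenue(seats: int) -> int:
    """Vendor's yearly revenue under simple per-seat pricing."""
    return seats * SEAT_PRICE * 12

# A 50-person team licensed per seat...
before = annual_revenue(50)
# ...replaced by a handful of AI-agent logins doing the same work.
after = annual_revenue(3)
print(before, after)  # 60000 3600: a 94% revenue drop for the vendor
```

Real contracts are messier (tiering, minimums, usage-based add-ons), but the direction of the pricing pressure is the same.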

The rapid pace of AI development also means that new tools, like Claude Code or OpenAI's Codex, can replicate not just the core functions of SaaS products but also the add-on tools a SaaS vendor would sell to grow revenue from existing customers. On top of that, customers now have the ultimate contract negotiation tool in their pockets: If they don't like a SaaS vendor's prices, they can, more easily than ever before, build their own alternative. "Even if they do not take the build route, this creates downward pressure on contracts that SaaS vendors can secure during renewals," Abdirahman continued.

We saw this as early as late 2024, when Klarna announced that it had ditched Salesforce's flagship CRM product in favor of its own homegrown AI system. The realization that a growing number of other companies can do the same is spooking public markets, where the stock prices of SaaS giants like Salesforce and Workday have been sliding. In early February, an investor sell-off wiped nearly $1 trillion in market value from software and services stocks, followed by another billion later in the month. Experts are calling it the SaaSpocalypse, with one analyst dubbing it FOBO investing -- or fear of becoming obsolete. Yet the venture investors TechCrunch spoke with believe such fears are only temporary. "This isn't the death of SaaS," Aaron Holiday, a managing partner at 645 Ventures, told TechCrunch. Rather, it's the beginning of an old snake shedding its skin, he said.

Programming

Stack Overflow Adds New Features (Including AI Assist), Rethinks 'Look and Feel' (stackoverflow.blog) 32

"At its peak in early 2014, Stack Overflow received more than 200,000 questions per month," notes the site DevClass.com. But in December, just 3,862 questions were asked — a 78 percent drop from the previous year.

But Stack Overflow's blog announced a beta of "a redesigned Stack Overflow" this week, noting that at July's WeAreDevelopers conference they'd "committed to pushing ourselves to experiment and evolve..." Over the past year, on the public platform, we introduced new features, including AI Assist, support for open-ended questions, enhancements to Chat, launched Coding Challenges, created an MCP server [granted limited access to AI agents and tools], expanded access to voting and comments, and more.

However, these launches are not standalone features. We have also been rethinking our look and feel, how people engage with Stack Overflow, and how content is created and shared. These new features, along with the redesign, represent how we are bringing Stack Overflow's new vision to life and delivering value that developers cannot find elsewhere.

Our goal is to build the space for every technical conversation, centered on real human-to-human connection and powered by AI when it helps most. To support this, we are introducing a redesigned Stack Overflow to best reflect this direction... During the beta period, users can visit the beta site at beta.stackoverflow.com and share feedback as we build towards a new experience on Stack Overflow.

They've updated their library of reusable UI components (buttons, forms, etc.), and are promising "More ways to share knowledge and ask any technical question." ("Alongside looking for the single right answer to your question, you can now find and share experience-based insights and peer recommendations...")

They're launching all the planned features and functionality in April, when "More users will automatically redirect to the new site." (Starting in April users "can continue to toggle back to the classic site for a limited time.")
Transportation

Does a Gas-Guzzler Revival Risk Dead-End Futures for US Automakers? (thedailynewsonline.com) 384

If U.S. automakers turn their backs on electric vehicles, "their sales outside the U.S. will shrivel," warns Bloomberg. [Alternate URL.] They're already falling behind on the technology, relying on a 100% U.S. tariff on Chinese EVs to keep surging rivals like BYD Co. at bay.... While the American automakers "mostly understand the challenge in front of them, they don't have full plans" to confront it [said Mark Wakefield, head of the global automotive practice at consultant AlixPartners]...

"Now is a great time for the V-8 engine," said Ryan Shaughnessy, the Mustang's brand manager. "We've done extensive customer research in multiple cities, looking at a variety of powertrains, and the V-8 is always the number-one choice." It isn't just customers. U.S. automakers have long been run by "car guys": enthusiasts who live for the bone-shaking rumble of a big engine. For them, quiet and smooth EVs — even the absurdly fast ones — can't satisfy that craving. They're convinced many American car buyers share the same enthusiasm for what Shaughnessy described as "the sound and roar of the V-8."

Wall Street couldn't be happier with the new direction... Ford's fortunes are also on the rise, as it's predicting operating profits could grow by as much as 47% this year to $10 billion. Ford's stock has risen nearly 50% over the last 12 months. Under the previous environmental rules, automakers effectively had to sell zero-emission vehicles in growing numbers to offset their gas-guzzlers. When they fell short, they had to buy regulatory credits from EV companies such as Tesla Inc. or face penalties. GM spent $3.5 billion on credits from 2022 to the middle of 2025. Now, according to JPMorgan Chase & Co. analyst Ryan Brinkman, GM and Ford each have "billion dollar tailwinds"...

[T]he hangover from all that new horsepower could leave US automakers lagging their Chinese rivals who already build the world's most advanced — and lowest priced — electric cars. Indeed, there is much talk in Detroit about the competitive tsunami that will be unleashed on American automakers once Chinese car companies find a way to break through trade barriers now protecting the US market. [Ford Chief Executive Officer Jim] Farley even calls it an "existential threat"... "They're going to build as many V-8 engines and big trucks as they can get out the factory doors," said Sam Fiorani, vice president of vehicle forecasting for consultant Auto Forecast Solutions. "And as the rest of the world develops modern drivetrains, newer batteries and better electric vehicles, GM and Ford in particular are going to find themselves falling even further behind."

The article notes GM "continues to develop battery-powered vehicles, and CEO Mary Barra said the automaker would begin offering a 'handful' of hybrids soon," while Ford and Stellantis "have plans to launch extended-range electric vehicles, or EREVs, a new kind of plug-in hybrid with an internal combustion engine that recharges the battery as the vehicle drives down the road." But while automakers may be investing in future EVs, they're also "leaning into the lucre that comes from selling millions of fossil-fuel vehicles in a rare moment of loosened regulation."
The Military

America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It (engadget.com) 64

Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.
Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.")

Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela — reportedly making Anthropic the first AI developer known to be used in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through Anthropic's contract with Palantir.
Communications

Americans Listen to Podcasts More Than Talk Radio Now, Study Shows (techcrunch.com) 36

"Podcasts have officially overtaken AM/FM talk radio as the more popular medium for spoken-word audio in the United States," reports TechCrunch, citing Edison Research's Share of Ear survey: The researchers have tracked these statistics over the last decade, and almost always, the percentage of time people spent listening to podcasts increased, while their time with spoken radio broadcasts decreased. For the first time this year, podcasts eclipsed spoken-word radio with 40% of listening time, as opposed to 39% for radio...

We checked with Edison to see if these statistics include video podcasts, and they do. But the need to clarify that question points to the undeniable growing prevalence of video podcasts, hosted on platforms like Spotify and YouTube, which marks another key trend in podcasting... YouTube said that viewers watched 700 million hours of podcasts each month in 2025 on living room devices, like TVs, up from 400 million the previous year.

The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com) 51

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
The Internet

After US-Israel Attacks, 90 Million Iranians Lose Internet Connectivity (cnn.com) 240

CNN reports that images from Iran's capital "have shown cars jammed along Tehran's streets, with heavy traffic on major roads after today's wave of attacks by the US and Israel." And though Iran has a population of 93 million, the attacks suddenly plunged Iran into "a near-total internet blackout with national connectivity at 4% of ordinary levels," according to internet monitoring experts at NetBlocks.

CNN reports: Since Iran's brutal crackdown earlier this year, the regime has moved to allow only a subset of people with security clearance to access the international web, experts said. After previous internet shutdowns, some platforms never returned. The Iranian government blocked Instagram after the internet shutdown and protests in 2022, and the popular messaging app Telegram following protests in 2018.
The International Atomic Energy Agency announced an hour ago that they're "closely monitoring developments" — keeping in contact with countries in the region and so far seeing "no evidence of any radiological impact." They're also urging "restraint to avoid any nuclear safety risks to people in the region."

UPDATE (1 PM PST): Qatar, Bahrain and Kuwait "are shifting to remote learning starting Sunday until further notice following Iran's retaliatory strikes on Saturday," reports CNN.
The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. Today's X.509 certificates are about 64 bytes in size, and comprise six elliptic curve signatures and two EC public keys. This material can be cracked through the quantum-enabled Shor's algorithm. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of material used in more traditional verification processes in public key infrastructure. Merkle Tree Certificates, "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
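The inclusion-proof idea can be illustrated with a minimal sketch. This is not Google's actual Merkle Tree Certificate wire format (the function names and padding rule below are illustrative assumptions); it only shows why a proof needs just log2(N) sibling hashes to tie one certificate to a signed tree head covering millions of entries:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaf hashes first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd-sized levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]    # same padding rule as build_tree
        sibling = index ^ 1                # flip last bit: left <-> right sibling
        proof.append((level[sibling], index % 2 == 0))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from the leaf up; accept if it lands on the root."""
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root
```

For N certificates the proof carries only log2(N) hashes — about 20 SHA-256 digests (640 bytes) for a million leaves — instead of a chain of full signatures, which is the size reduction the Chrome team is counting on.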

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The [Merkle Tree Certificates] MTCs use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.
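The "break both" property of hybrid signing can be sketched as follows. The stand-in verifiers below are illustrative placeholders (HMAC tags, not real signatures); an actual deployment would pair an elliptic-curve scheme with ML-DSA. The point is only the acceptance logic: both checks must pass, so defeating the classical scheme alone accomplishes nothing:

```python
import hashlib
import hmac

def make_verifier(key: bytes):
    """Stand-in for a signature scheme: produce and check keyed tags."""
    def sign(msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()
    def verify(msg: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(msg), tag)
    return sign, verify

# Hypothetical key material, standing in for an EC key and an ML-DSA key.
sign_classical, verify_classical = make_verifier(b"ec-key")
sign_pq, verify_pq = make_verifier(b"mldsa-key")

def hybrid_verify(msg: bytes, sig_classical: bytes, sig_pq: bytes) -> bool:
    # A forgery must defeat BOTH schemes: a quantum attacker who breaks
    # the classical signature still fails the post-quantum check.
    return verify_classical(msg, sig_classical) and verify_pq(msg, sig_pq)
```

This is why Google describes the hybrid root store as allowing forgeries "only if an attacker were to break both classical and post-quantum encryption."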

Google

South Korea Set To Get a Fully Functioning Google Maps (reuters.com) 14

South Korea has reversed a two-decade policy and approved the export of high-precision map data, paving the way for a fully functional Google Maps in the country. Reuters reports: The approval was made "on the condition that strict security requirements are met," the Ministry of Land, Infrastructure and Transport said in a statement. Those conditions include blurring military and other sensitive security-related facilities, as well as restricting longitude and latitude coordinates for South Korean territory on products such as Google Maps and Google Earth, it said.

The decision is expected to hurt Naver and Kakao -- local internet giants which currently dominate the country's market for digital map services. But it will appease Washington, which has urged Seoul to tackle what it says is discrimination against U.S. tech companies. South Korea, still technically at war with North Korea, had shot down Google's previous bids in 2007 and 2016 to be allowed to export the data, citing the risks that information about sensitive military and security facilities could be exposed.
"Google can now come in, slash usage fees, and take the market," said Choi Jin-mu, a geography professor at Kyung Hee University. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services -- logistics firms, for example -- become dependent, and in the long run, even government GIS (geographic information) systems could end up dependent on Google or Apple. That's the biggest concern."
AI

Trump Orders Federal Agencies To Stop Using Anthropic AI Tech 'Immediately' 135

President Donald Trump has ordered all U.S. federal agencies to "immediately cease" using Anthropic's AI technology, escalating a standoff after the company sought limits on Pentagon use of its models. CNBC reports: The company, which in July signed a $200 million contract with the Pentagon, wants assurances that its AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon had set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands to allow the Pentagon to use the technology for all lawful purposes. If Anthropic did not meet that deadline, Pete Hegseth threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act.

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."

"Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," Trump said.
On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.
The Military

US Military Accidentally Shoots Down Border Protection Drone With Laser (apnews.com) 39

An anonymous reader quotes a report from the Associated Press: The U.S. military used a laser Thursday to shoot down a "seemingly threatening" drone flying near the U.S.-Mexico border. It turned out the drone belonged to Customs and Border Protection, lawmakers said. The case of mistaken identity prompted the Federal Aviation Administration to close additional airspace around Fort Hancock, about 50 miles (80 kilometers) southeast of El Paso. The military is required to formally notify the FAA when it takes any counter-drone action inside U.S. airspace.

It was the second time in two weeks that a laser was fired in the area. The last time it was CBP that used the weapon and nothing was hit. That incident occurred near Fort Bliss and prompted the FAA to shut down air traffic at El Paso airport and the surrounding area. This time, the closure was smaller and commercial flights were not affected.
The FAA, CBP and the Pentagon confirmed the incident in a joint statement, saying the military "employed counter-unmanned aircraft system authorities to mitigate a seemingly threatening unmanned aerial system operating within military airspace."

"At President Trump's direction, the Department of War, FAA, and Customs and Border Patrol are working together in an unprecedented fashion to mitigate drone threats by Mexican cartels and foreign terrorist organizations at the U.S.-Mexico Border," the statement said. The report notes that 27,000 drones were detected within 1,600 feet of the southern border in the last six months of 2024.

Illinois Democratic U.S. Sen. Tammy Duckworth, the ranking member on the Senate's Aviation Subcommittee, is calling for an independent investigation to look into the matter. "The Trump administration's incompetence continues to cause chaos in our skies," Duckworth said.
