Businesses

SpaceX Strikes Deal With Coding Startup Cursor For $60 Billion (nytimes.com) 69

An anonymous reader quotes a report from the New York Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. In a social media post, SpaceX said the combination with Cursor, which makes code-writing software, would "allow us to build the world's most useful" A.I. models.

SpaceX added that the agreement gave it the option "to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." It is unclear if the companies plan to consummate the deal before or after SpaceX's I.P.O., which could happen as early as June. [...] Cursor, which has raised more than $3 billion in funding, was founded in 2022 and made waves as a fast-growing A.I. start-up. It was under pressure in recent months after OpenAI and Anthropic announced competing code-writing products that were embraced by tech companies. Cursor had been in talks to raise funding in recent weeks.

Google

Google's Internal Politics Leave It Playing Catch-Up On AI Coding (bloomberg.com) 24

An anonymous reader quotes a report from Bloomberg: At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest. In some corners of Alphabet's Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly.

Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all. [...] Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. [...]

In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. [...] Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said.

Programming

Fewer US College Students Major in CS. More Choose Data Science, Engineering (yahoo.com) 26

"From 2008 to 2024, the number of four-year computer science degrees granted rose about fivefold..." reports the Washington Post. Then in 2025 CS suddenly dropped from the fourth-largest undergraduate major to sixth, they report (citing data from the nonprofit National Student Clearinghouse, which compiles numbers from 97% of U.S. universities.

The 54,000-student drop was "the biggest one-year drop of any major discipline going back to at least 2020." But what major are they choosing instead? Sarah Karamarkovich, a research associate with the National Student Clearinghouse, pointed to an explanation from the data that we had overlooked. Enrollments in two interdisciplinary majors, data analytics and data science, topped a combined 35,000 in the fall of 2025. That was up from a few hundred when those disciplines were broken out into their own majors in 2020. Those relatively new categories reflect colleges' zeal to create specialized majors, including in AI, data science, robotics and cybersecurity. Some of those disciplines may be counted in the national enrollment data as computer science. Others are not.

The numbers suggest that some of the disappearing computer science majors didn't flee so much as they splintered into related disciplines.... The 8 percent decline in computer science majors last fall was nearly mirrored by a 7.3 percent increase in engineering majors, according to the National Student Clearinghouse data. Within engineering, mechanical and electrical engineering major enrollments increased by the largest absolute amounts — a jump of 11 percent and 14 percent, respectively.

Television

Amazon's New Fire TV Sticks No Longer Support Sideloading (cordcuttersnews.com) 47

Amazon's newest Fire TV Sticks are dropping support for normal sideloading, blocking apps from outside the Amazon Appstore unless the device is registered with developers. Cord Cutters News reports: This week, Amazon announced the upcoming launch of a new Fire TV Stick HD. The new model will run on Amazon's Vega OS, rather than Android, so most streaming apps will be supported, but users won't be able to add third-party apps. Now, on the product page to preorder the new Fire Stick, some Amazon customers are getting a message warning them that the new model won't allow sideloading. Interestingly, not all customers are getting the message, whether signed in to an Amazon account or not.

The message, shown in a screenshot below, says: "For enhanced security, this device prevents sideloading or installing apps from unknown sources. Only apps from the Amazon Appstore are available for download." [...] The Fire TV Stick Select, announced in September 2025, also runs on Vega and some customers will see the same message about sideloading on that product page. [...] While Amazon continues to be a "multi-OS company," we should expect that future Fire TV models will also be built with Vega OS, limiting the apps users can access with their streaming devices to those from the Amazon Appstore.

AI

OpenAI's Big Codex Update Is a Direct Shot At Claude Code (theverge.com) 5

OpenAI is updating Codex with more agent-like capabilities, positioning it as a more direct rival to Anthropic's Claude Code. Some of the new features include the ability to operate macOS desktop apps, browse the web inside the app, generate images, use new workplace plug-ins, and remember useful context from past tasks. The Verge reports: Codex will now be able to operate desktop apps on your computer, OpenAI says in a blog post announcing the update. It can work in the background, meaning it won't interfere with your own work in other apps, and multiple agents can work in parallel. For developers, OpenAI says "this is helpful for testing and iterating on frontend changes, testing apps, or working in apps that don't expose an API." The feature will start rolling out to Codex desktop app users signed in with ChatGPT today and will initially be limited to macOS. OpenAI did not indicate a timeline for when availability will expand to other operating systems. EU users will also have to wait, it said, adding that the update will roll out to users there "soon."

Codex is also getting the ability to generate and iterate on images with gpt-image-1.5, new plug-ins for tools like GitLab, Atlassian Rovo, and Microsoft Suite, and native web browsing through an in-app browser, "where you can comment directly on pages to provide precise instructions to the agent." OpenAI said it will also be easier to automate tasks, with users able to re-use existing conversation threads and Codex now able to schedule future work for itself and wake up automatically to continue on a long-term task. Codex will also be getting a memory feature allowing it to remember useful context from past experience, such as personal preferences, corrections, and information that took time to gather. OpenAI said it hopes the opt-in feature, which will be released as a preview, will help future tasks complete faster and to a quality that previously required detailed custom instructions. The personalization features will roll out to Enterprise, Edu, and EU users "soon."

Operating Systems

Is Linux Mint In Trouble? (nerds.xyz) 50

BrianFagioli writes: The developers behind Linux Mint say the project is rethinking its release strategy and moving toward a longer development cycle, with the next version now expected around Christmas 2026. In a monthly update, project lead Clement Lefebvre said the team reached a "crossroads" and needs more flexibility to fix bugs, improve the desktop, and adapt to rapid changes across the Linux ecosystem. The upcoming development build, temporarily called Mint 23 "Alfa," is currently based on Ubuntu 26.04 LTS and includes Linux kernel 7.0, an unstable build of Cinnamon 6.7, and early Wayland-related work.

Mint is also replacing the long-used Ubiquity installer with "live-installer," the same tool used by Linux Mint Debian Edition, allowing the project to unify installation infrastructure across its Ubuntu-based and Debian-based variants. While the team frames the changes as an opportunity to improve quality and reduce maintenance overhead, the shift has raised questions about the project's long-term direction and whether Linux Mint may eventually lean more heavily on its Debian roots rather than its traditional Ubuntu base.

Programming

Will Some Programmers Become 'AI Babysitters'? (linkedin.com) 150

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

Programming

Has the Rust Programming Language's Popularity Reached Its Plateau? (tiobe.com) 178

"Rust's rise shows signs of slowing," argues the CEO of TIOBE.

Back in 2020 Rust first entered the top 20 of his "TIOBE Index," which ranks programming language popularity using search engine results. Rust "was widely expected to break into the top 10," he remembers today. But it never happened, and "That was nearly six years ago...." Since then, Rust has steadily improved its ranking, even reaching its highest position ever (#13) at the beginning of this year. However, just three months later, it has dropped back to position #16. This suggests that Rust's adoption rate may be plateauing.

One possible explanation is that, despite its ability to produce highly efficient and safe code, Rust remains difficult to learn for non-expert programmers. While specialists in performance-critical domains are willing to invest in mastering the language, broader mainstream adoption appears more challenging. As a result, Rust's growth in popularity seems to be leveling off, and a top 10 position now appears more distant than before.

Or, could Rust's sudden drop in the rankings just reflect flaws in TIOBE's ranking system? In January GitHub's senior director for developer advocacy argued AI was pushing developers toward typed languages, since types "catch the exact class of surprises that AI-generated code can sometimes introduce... A 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures." And last month Forbes even described Rust as "the safety harness for vibe coding."

A year ago Rust was ranked #18 on TIOBE's index — so it still rose by two positions over the last 12 months, hitting that all-time high in January. Could the rankings just be fluctuating due to anomalous variations in each month's search engine results? Since January Java has fallen to the #4 spot, overtaken by C++ (which moved up one rank to take Java's place in the #3 position).

Here's TIOBE's current estimate for the 10 most popular programming languages:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Visual Basic
  8. SQL
  9. R
  10. Delphi/Object Pascal

TIOBE estimates that the next five most popular programming languages are Scratch, Perl, Fortran, PHP, and Go.


AI

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research (infoworld.com) 15

The Internet Bug Bounty program "has been paused for new submissions," its operators announced last week.

Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, "and has awarded more than $1.5m to researchers who have reported bugs." Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted," said HackerOne.

Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website...

[J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program.

The Internet Bug Bounty stressed that "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..."

"We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."

AI

Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex (pcworld.com) 38

That leak of Claude Code's source code "revealed all kinds of juicy details," writes PC World.

The more than 500,000 lines of code included:

- An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases
- An 'always-on' agent for Claude Code
- A Tamagotchi-style 'Buddy' for Claude

"But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" with a simple pattern-matching tool called regex, which sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of — -" (insert your favorite four-letter word for that one), "f — you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
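The mechanism described is straightforward to picture. Here is a minimal sketch in Python: the keyword list is drawn from the article, but the pattern itself is a reconstruction for illustration, not the actual regex from userPromptKeywords.ts (and the censored phrases are omitted):

```python
import re

# Illustrative reconstruction of a "frustration words" sweep like the one
# the leak describes; the real pattern is reportedly longer.
FRUSTRATION_PATTERN = re.compile(
    r"\b(wtf|wth|omfg|dumbass|horrible|awful|screw this|this sucks)\b",
    re.IGNORECASE,
)

def is_frustrated(message: str) -> bool:
    """Return True if the message contains any frustration keyword."""
    return FRUSTRATION_PATTERN.search(message) is not None

print(is_frustrated("wth is going on with this build"))  # True
print(is_frustrated("looks good, ship it"))              # False
```

A check like this is cheap enough to run on every submitted message, which matches the article's description of the sweep, though again, what the results feed into remains unknown.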

Medicine

Python Blood Could Hold the Secret To Healthy Weight Loss (colorado.edu) 129

Longtime Slashdot reader fahrbot-bot writes: CU Boulder researchers are reporting that they have discovered an appetite-suppressing compound in python blood that helps the snakes consume enormous meals and go months without eating yet remain metabolically healthy. The findings were published in the journal Nature Metabolism on March 19, 2026.

Pythons can grow as big as a telephone pole, swallow an antelope whole, and go months or even years without eating -- all while maintaining a healthy heart and plenty of muscle mass. In the hours after they eat, research has shown, their heart expands 25% and their metabolism speeds up 4,000-fold to help them digest their meal. The team measured blood samples from ball pythons and Burmese pythons, fed once every 28 days, immediately after they ate a meal. In all, they found 208 metabolites that increased significantly after the pythons ate. One molecule, called para-tyramine-O-sulfate (pTOS), soared 1,000-fold.

Further studies, done with Baylor University researchers, showed that when they gave high doses of pTOS to obese or lean mice, it acted on the hypothalamus, the appetite center of the brain, prompting weight loss without causing gastrointestinal problems, muscle loss or declines in energy. The study found that pTOS, which is produced by the snake's gut bacteria, is not present in mice naturally. It is present in human urine at low levels and does increase somewhat after a meal. But because most research is done in mice or rats, pTOS has been overlooked.

"We've basically discovered an appetite suppressant that works in mice without some of the side-effects that GLP-1 drugs have," said senior author Leslie Leinwand, a distinguished professor of Molecular, Cellular and Developmental Biology who has been studying pythons in her lab for two decades. Drugs like Ozempic and Wegovy act on the hormone glucagon-like peptide-1 (GLP-1).

AI

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code 69

Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.

Businesses

Oracle Cuts Thousands of Jobs Across Sales, Engineering, Security (theregister.com) 46

bobthesungeek76036 shares a report from the Register: Oracle laid off thousands of employees on Tuesday as it ramps spending on AI infrastructure projects internally and with major technology partners. The layoffs were carried out via email, according to copies of the message viewed by Business Insider. The email told affected workers they would be terminated immediately and to provide a personal email for follow-up.

The cuts echo a TD Cowen forecast earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot that purports to show the number of internal Slack users showed a drop of 10,000 overnight.

[...] Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, it would amount to 18 percent of the company's workforce. According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.

"This news didn't seem to affect stock price," adds bobthesungeek76036. "ORCL is up 6% for the day."

Programming

Claude Code's Source Code Leaks Via npm Source Maps (dev.to) 65

Grady Martin writes: A security researcher has leaked a complete repository of source code for Anthropic's flagship command-line tool. The file listing was exposed via Node Package Manager (npm) source maps, with every target publicly accessible on a Cloudflare R2 storage bucket. There have been a number of discoveries as people continue to pore over the code. The DEV Community outlines some of the leak's most notable architectural elements and the key technical choices:

Architecture Highlights
The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture. Each capability (file read, bash execution, web fetch, LSP integration) is a discrete, permission-gated tool. The base tool definition alone is 29,000 lines of TypeScript.
The Query Engine (46K lines): This is the brain of the operation. It handles all LLM API calls, streaming, caching, and orchestration. It's by far the largest single module in the codebase.
Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them "swarms") to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions.
IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the "Claude in your editor" experience works.
Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.
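The "swarm" idea (parallel sub-agents, each scoped to its own tool permissions) can be sketched generically. The Python below is only an illustration of that fan-out shape, not Claude Code's implementation; run_subagent and the "read" permission are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str, allowed_tools: frozenset) -> str:
    # A real sub-agent would make LLM calls here; this stub just records
    # that the task ran under a restricted tool set.
    return f"{task} done with {sorted(allowed_tools)}"

def swarm(tasks: list[str]) -> list[str]:
    # Fan out parallelizable tasks, each agent scoped to its own permissions,
    # then collect results in order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: run_subagent(t, frozenset({"read"})), tasks))

print(swarm(["lint", "test"]))
```

The interesting engineering in such a system is not the fan-out itself but the per-agent context isolation and permission gating the leak describes.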

Key Technical Decisions Worth Noting
Bun over Node: They chose Bun as the JavaScript runtime, leveraging its dead code elimination for feature flags and its faster startup times.
React for CLI: Using Ink (React for terminals) is bold. It means their terminal UI is component-based with state management, just like a web app.
Zod v4 for validation: Schema validation is everywhere. Every tool input, every API response, every config file.
~50 slash commands: From /commit to /review-pr to memory management -- there's a command system as rich as any IDE.
Lazy-loaded modules: Heavy dependencies like OpenTelemetry and gRPC are lazy-loaded to keep startup fast.
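Lazy loading of heavy dependencies is a general pattern rather than anything specific to Claude Code's build. A minimal Python sketch of the idea (the cached-import helper and the use of json as the "heavy" module are illustrative; in the real tool the deferred imports would be things like OpenTelemetry and gRPC clients):

```python
import importlib

_cache: dict = {}

def lazy(module_name: str):
    """Import a module only on first use, then reuse the cached copy."""
    if module_name not in _cache:
        _cache[module_name] = importlib.import_module(module_name)
    return _cache[module_name]

# Startup stays fast because nothing heavy is imported until this runs.
def emit_event(payload: dict) -> str:
    json = lazy("json")  # stand-in for a heavy telemetry/RPC dependency
    return json.dumps(payload)

print(emit_event({"event": "start"}))  # {"event": "start"}
```

The trade-off is the usual one: faster cold starts in exchange for a small first-use delay and import errors that surface at call time instead of startup.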

AI

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C 71

An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8,400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.

They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Even 7 kilometers out, the warming intensity had fallen by only 30 percent. "The results we had were quite surprising," says Marinoni. "This could become a huge problem."

Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, and so live in places that are warmer than they would be if the data centers hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained.

University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.

The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
