Software

Backseat Software (mikeswanson.com) 98

Mike Swanson, commenting on modern software's intrusive, attention-seeking behavior: What if your car worked like so many apps? You're driving somewhere important...maybe running a little bit late. A few minutes into the drive, your car pulls over to the side of the road and asks:

"How are you enjoying your drive so far?"

Annoyed by the interruption, and even more behind schedule, you dismiss the prompt and merge back into traffic.

A minute later it does it again.

"Did you know I have a new feature? Tap here to learn more."

It blocks your speedometer with an overlay tutorial about the turn signal. It highlights the wiper controls and refuses to go away until you demonstrate mastery.

Ridiculous, of course.

And yet, this is how a lot of modern software behaves. Not because it's broken, but because we've normalized an interruption model that would be unacceptable almost anywhere else.

Programming

'Just Because Linus Torvalds Vibe Codes Doesn't Mean It's a Good Idea' (theregister.com) 61

In an opinion piece for The Register, Steven J. Vaughan-Nichols argues that while "vibe coding" can be fun and occasionally useful for small, throwaway projects, it produces brittle, low-quality code that doesn't scale and ultimately burdens real developers with cleanup and maintenance. An anonymous reader shares an excerpt: Vibe coding got a big boost when everyone's favorite open source programmer, Linux's Linus Torvalds, said he'd been using Google's Antigravity LLM on his toy program AudioNoise, which he uses to create "random digital audio effects" using his "random guitar pedal board design." This is not exactly Linux or even Git, his other famous project, in terms of the level of work. Still, many people reacted to Torvalds' vibe coding as "wow!" It's certainly noteworthy, but has the case for vibe coding really changed?

[...] It's fun, and for small projects, it's productive. However, today's programs are complex and call upon numerous frameworks and resources. Even if your vibe code works, how do you maintain it? Do you know what's going on inside the code? Chances are you don't. Besides, the LLM you used two weeks ago has been replaced with a new version. The exact same prompts that worked then yield different results today. Come to think of it, it's an LLM. The same prompts and the same LLM will give you different results every time you run it. This is asking for disaster.

Just ask Jason Lemkin. He was the guy who used the vibe coding platform Replit, which went "rogue during a code freeze, shut down, and deleted our entire database." Whoops! Yes, Replit and other dedicated vibe programming AIs, such as Cursor and Windsurf, are improving. I'm not at all sure, though, that they've been able to help with those fundamental problems of being fragile and still cannot scale successfully to the demands of production software. It's much worse than that. Just because a program runs doesn't mean it's good. As Ruth Suehle, President of the Apache Software Foundation, commented recently on LinkedIn, naive vibe coders "only know whether the output works or doesn't and don't have the skills to evaluate it past that. The potential results are horrifying."

Why? In another LinkedIn post, Craig McLuckie, co-founder and CEO of Stacklok, wrote: "Today, when we file something as 'good first issue' and in less than 24 hours get absolutely inundated with low-quality vibe-coded slop that takes time away from doing real work. This pattern of 'turning slop into quality code' through the review process hurts productivity and hurts morale." McLuckie continued: "Code volume is going up, but tensions rise as engineers do the fun work with AI, then push responsibilities onto their team to turn slop into production code through structured review."
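The run-to-run variance the column describes is inherent to how LLMs generate text: output tokens are drawn from a probability distribution, so identical prompts can diverge across runs. A toy sketch of temperature sampling, with invented token scores rather than any real model:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Softmax over raw scores, then draw one token at random by probability."""
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]   # subtract max for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

# Invented next-token scores for some prompt; not from any real model.
logits = {"deleted": 2.1, "migrated": 1.9, "fine": 0.5}

# Two runs of the "same prompt" against the "same model" can disagree:
run_a = [sample_token(logits, temperature=1.2) for _ in range(5)]
run_b = [sample_token(logits, temperature=1.2) for _ in range(5)]
print(run_a)
print(run_b)

# Only greedy (temperature near zero) decoding always picks the top token:
greedy = max(logits, key=logits.get)
print(greedy)  # deleted
```

Raising the temperature flattens the distribution and increases variance; production APIs expose the same knob, which is why "the exact same prompts yield different results."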

Programming

Ruby on Rails Creator Says AI Coding Tools Still Can't Match Most Junior Programmers (youtube.com) 44

AI still can't produce code as well as most junior programmers he's worked with, David Heinemeier Hansson, the creator of Ruby on Rails and co-founder of 37signals, said on a recent podcast [video link], which is why he continues to write most of his code by hand. Hansson compared AI's current coding capabilities to "a flickering light bulb" -- total darkness punctuated by moments of clarity before going pitch black again.

At his company, humans wrote 95% of the code for Fizzy, 37signals' Kanban-inspired organization product, he said. The team experimented with AI-powered features, but those ended up on the cutting room floor. "I'm not feeling that we're falling behind at 37signals in terms of our ability to produce, in terms of our ability to launch things or improve the products," Hansson said.

Hansson said he remains skeptical of claims that businesses can fire half their programmers and still move faster. Despite his measured skepticism, Hansson said he marvels at the scale of bets the U.S. economy is placing on AI reaching AGI. "The entire American economy right now is one big bet that that's going to happen," he said.
Businesses

Oracle Trying To Lure Workers To Nashville For New 'Global' HQ (bloomberg.com) 56

An anonymous reader quotes a report from Bloomberg: Oracle is trying -- and sometimes struggling -- to attract workers to Nashville, where it is developing a massive riverfront headquarters. The company is hiring for more roles in Nashville than any other US city, with a special focus on jobs in its crucial cloud infrastructure unit. Oracle cloud workers based elsewhere say they've been offered tens of thousands of dollars in incentives to move. Chairman Larry Ellison made a splash in April 2024 when he said Oracle would make Nashville its "world headquarters" just a few years after moving the software company from Redwood City, California, to Austin. His proclamation followed a 2021 tax incentive deal in which Oracle pledged to create 8,500 jobs in Nashville by 2031, paying an average salary above six figures.

"We're creating a world leading cloud and AI hub in Nashville that is attracting top talent locally, regionally, and from across the country," Oracle Senior Vice President Scott Twaddle said in a statement. "We've seen great success recruiting engineering and technical positions locally and will continue to hire aggressively for the next several years." Still, Oracle has a long way to go in its hiring goals. Today, it has about 800 workers assigned to offices in Nashville, according to documents seen by Bloomberg. That trails far behind the number of company employees in locations including Redwood City, Austin and Kansas City, the center of health records company Cerner, which Oracle acquired in 2022.

A lack of state income tax and the city's thriving music scene are touted by Oracle's promotional materials to attract talent to Nashville. Some new hires note they moved because in a tough tech job market, the Tennessee city was the only place with an Oracle position offered. To fit all of these workers, Oracle is planning a massive campus along the Cumberland River. It will feature over 2 million square feet of office space, a new cross-river bridge and a branch of the ultra high-end sushi chain Nobu, which has locations on many properties connected to Ellison, including the Hawaiian island of Lanai. [...] Oracle has been running recruitment events for the new hub. But a common concern for employees weighing a move is that Nashville is classified by Oracle in a lower geographic pay band than California or Seattle, meaning that future salary growth is likely limited, according to multiple workers who asked not to be identified discussing private information.

A weaker local tech job market also gives pause to some considering relocation. In addition, many of the roles in Nashville require five days a week in the office, which is a shift for Oracle, where a significant number of roles are remote. For a global company like Oracle, the exact meaning of "headquarters" can be a bit unclear. Austin remains the address included on company SEC filings and its executives are scattered across the country. The city where Oracle is hiring for the most positions globally is Bengaluru, the southern Indian tech hub. Still, Oracle is positioning Nashville to be at the center of its future. "We're developing our Nashville location to stand alongside Austin, Redwood Shores, and Seattle as a major innovation hub," Oracle writes on its recruitment site. "This is your chance to be part of it."

Python

Anthropic Invests $1.5 Million in the Python Software Foundation and Open Source Security (blogspot.com) 10

Python Software Foundation: We are thrilled to announce that Anthropic has entered into a two-year partnership with the Python Software Foundation (PSF) to contribute a landmark total of $1.5 million to support the foundation's work, with an emphasis on Python ecosystem security. This investment will enable the PSF to make crucial security advances to CPython and the Python Package Index (PyPI) benefiting all users, and it will also sustain the foundation's core work supporting the Python language, ecosystem, and global community.

Anthropic's funds will enable the PSF to make progress on our security roadmap, including work designed to protect millions of PyPI users from attempted supply-chain attacks. Planned projects include creating new tools for automated proactive review of all packages uploaded to PyPI, improving on the current process of reactive-only review. We intend to create a new dataset of known malware that will allow us to design these novel tools, relying on capability analysis. One of the advantages of this project is that we expect the outputs we develop to be transferable to all open source package repositories. As a result, this work has the potential to ultimately improve security across multiple open source ecosystems, starting with the Python ecosystem.
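"Capability analysis" in this context generally means statically inspecting what a package could do, such as open sockets, spawn processes, or load native code, without running it. A minimal sketch of the idea using Python's `ast` module; the capability table and function below are illustrative assumptions, not the PSF's planned tooling:

```python
import ast

# Hypothetical capability map (module name -> capability label); the PSF's
# actual dataset and review tools are not described in detail, so this is
# purely illustrative.
SUSPECT_IMPORTS = {
    "socket": "network",
    "subprocess": "process-exec",
    "ctypes": "native-code",
}

def capabilities(source: str) -> set:
    """Statically list coarse capabilities a module requests via its imports."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        found.update(SUSPECT_IMPORTS[name] for name in names
                     if name in SUSPECT_IMPORTS)
    return found

# A package that phones home and spawns a shell would be flagged on upload:
sample = "import socket\nfrom subprocess import run\n"
print(capabilities(sample))
```

The appeal for proactive review is that this runs at upload time, before any user installs the package; a real system would combine such signals with a known-malware dataset like the one the PSF describes.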

AI

Even Linus Torvalds Is Vibe Coding Now 54

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, AI-driven programming," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals, GuitarPedal, to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates.

While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it.
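For a sense of the scale of such a side tool, a bare-bones audio sample visualiser fits in a couple dozen lines of Python. The sketch below is an illustrative stand-in using only the standard library, not Torvalds' Antigravity-generated code:

```python
import math

def visualize(samples, width=16, height=8):
    """Render a mono sample buffer as ASCII peak bars, one column per bucket."""
    bucket = max(1, len(samples) // width)
    # Peak amplitude per bucket, normalized against the loudest bucket.
    peaks = [max(abs(s) for s in samples[i:i + bucket])
             for i in range(0, bucket * width, bucket)]
    top = max(peaks) or 1.0
    bars = [round(p / top * height) for p in peaks]
    rows = ["".join("#" if bars[col] >= row else " " for col in range(width))
            for row in range(height, 0, -1)]
    return "\n".join(rows)

# A 440 Hz tone with a linear fade-out, sampled at 8 kHz for one second.
rate = 8000
samples = [math.sin(2 * math.pi * 440 * t / rate) * (1 - t / rate)
           for t in range(rate)]
print(visualize(samples))
```

A real tool would read WAV data and plot with something like matplotlib, but the shape of the problem, bucketing samples and drawing peak amplitude, is the same.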
Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance
Programming

C# (and C) Grew in Popularity in 2025, Says TIOBE (tiobe.com) 187

For a quarter century, the TIOBE Index has attempted to rank the popularity of programming languages by the number of search engine results they bring up — and this week they had an announcement.

Over the last year the language showing the largest increase in its share of TIOBE's results was C#.

TIOBE founder/CEO Paul Jansen looks back at how C# evolved: From a language-design perspective, C# has often been an early adopter of new trends among mainstream languages. At the same time, it successfully made two major paradigm shifts: from Windows-only to cross-platform, and from Microsoft-owned to open source. C# has consistently evolved at the right moment.

For many years now, there has been a direct battle between Java and C# for dominance in the business software market. I always assumed Java would eventually prevail, but after all this time the contest remains undecided. It is an open question whether Java — with its verbose, boilerplate-heavy style and Oracle ownership — can continue to keep C# at bay.

While C# remains stuck in the same #5 position it was in a year ago, its share of TIOBE's results rose 2.94% — the largest increase of the 100 languages in their rankings.

But TIOBE's CEO notes that his rankings for the top 10 highest-scoring languages delivered "some interesting movements" in 2025: C and C++ swapped positions. [C rose to the #2 position — behind Python — while C++ dropped from #2 to the #4 rank that C held in January of 2025]. Although C++ is evolving faster than ever, some of its more radical changes — such as the modules concept — have yet to see widespread industry adoption. Meanwhile, C remains simple, fast, and extremely well suited to the ever-growing market of small embedded systems. Even Rust has struggled to penetrate this space, despite reaching an all-time high of position #13 this month.

So who were the other winners of 2025, besides C#? Perl made a surprising comeback, jumping from position #32 to #11 and re-entering the top 20. Another language returning to the top 10 is R, driven largely by continued growth in data science and statistical computing.

Of course, where there are winners, there are also losers. Go appears to have permanently lost its place in the top 10 during 2025. The same seems true for Ruby, which fell out of the top 20 and is unlikely to return anytime soon.

What can we expect from 2026? I have a long history of making incorrect predictions, but I suspect that TypeScript will finally break into the top 20. Additionally, Zig, which climbed from position #61 to #42 in 2025, looks like a strong candidate to enter the TIOBE top 30.

Here's how TIOBE ranked the 10 most popular programming languages at the end of 2025:
  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. JavaScript
  7. Visual Basic
  8. SQL
  9. Delphi/Object Pascal
  10. R

Programming

Creator of Claude Code Reveals His Workflow 54

Boris Cherny, the creator of Claude Code at Anthropic, revealed a deceptively simple workflow that uses parallel AI agents, verification loops, and shared memory to let one developer operate with the output of an entire engineering team. "I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off work between the web and his local machine. This validates the "do more with less" strategy Anthropic's President Daniela Amodei recently pitched during an interview with CNBC. VentureBeat reports: For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup.

"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment."

The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding -- a shift from typing syntax to commanding autonomous units.
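Stripped of product specifics, the pattern Cherny describes, numbered parallel workers plus a notification when one finishes or needs input, is ordinary fan-out/fan-in concurrency. A hypothetical Python sketch; the `agent` and `notify` stand-ins are assumptions for illustration, not Cherny's actual tooling:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def agent(tab: int, task: str) -> str:
    """Stand-in for one AI session working on a task in a numbered tab."""
    time.sleep(0.01 * tab)  # simulate unequal completion times
    return f"tab {tab}: finished {task}"

def notify(message: str) -> None:
    """Stand-in for a system notification (terminal bell, notify-send, etc.)."""
    print(f"[notify] {message}")

tasks = ["refactor auth", "write tests", "fix CI", "update docs", "triage bugs"]

results = []
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(agent, tab, task): tab
               for tab, task in enumerate(tasks, start=1)}
    for fut in as_completed(futures):  # react to whichever tab finishes first
        results.append(fut.result())
        notify(results[-1])
```

The operator's job shifts from doing each task to dispatching work and handling whichever worker signals next, which is what makes the workflow feel "more like Starcraft."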
Programming

Stack Overflow Went From 200,000 Monthly Questions To Nearly Zero (stackexchange.com) 125

Stack Overflow's monthly question volume has collapsed to about 300 -- levels not seen since early 2009, shortly after the site launched -- according to data from the Stack Overflow Data Explorer that tracks the platform's activity over its sixteen-year history.

Questions peaked around 2014 at roughly 200,000 per month, then began a gradual decline that accelerated dramatically after ChatGPT's November 2022 launch. By May 2025, monthly questions had fallen to early-2009 levels, and the latest data through early 2026 shows the collapse has only continued -- the line now sits near the bottom of the chart, barely registering.

The decline predates LLMs. Questions began dropping around 2014 when Stack Overflow improved moderator efficiency and closed questions more aggressively. In mid-2021, Prosus acquired Stack Overflow for $1.8 billion. The founders, Jeff Atwood and Joel Spolsky, exited before the terminal decline became apparent. ChatGPT accelerated what was already underway. The chatbot answers programming questions faster, draws on Stack Overflow's own corpus for training data, and doesn't close questions for being duplicates.
Programming

'Memory is Running Out, and So Are Excuses For Software Bloat' (theregister.com) 152

The relentless climb in memory prices driven by the AI boom's insatiable demand for datacenter hardware has renewed an old debate about whether modern software has grown inexcusably fat, a column in The Register argues. The piece points to Windows Task Manager as a case study: the current executable occupies 6MB on disk and demands nearly 70MB of RAM just to display system information, compared to the original's 85KB footprint.

"Its successor is not orders of magnitude more functional," the column notes. The author draws a parallel to the 1970s fuel crisis, when energy shortages spurred efficiency gains, and argues that today's memory crunch could force similar discipline. "Developers should consider precisely how much of a framework they really need and devote effort to efficiency," the column adds. "Managers must ensure they also have the space to do so."

The article acknowledges that "reversing decades of application growth will not happen overnight" but calls for toolchains to be rethought and rewards given "for compactness, both at rest and in operation."
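The kind of compactness audit the column asks for can start small. A quick Python illustration of how much representation choices matter: the same million integers held as boxed list elements versus a packed typed array. This is a rough approximation, not a full memory profile:

```python
import array
import sys

n = 1_000_000

as_list = list(range(n))               # pointer slots to boxed Python ints
as_array = array.array("i", range(n))  # packed machine ints (typically 32-bit)

# Rough accounting: the list's pointer table plus one boxed int per element.
# (CPython interns small ints, so this overstates somewhat; the point is the
# order of magnitude, not an exact profile.)
list_bytes = sys.getsizeof(as_list) + n * sys.getsizeof(0)
array_bytes = sys.getsizeof(as_array)

print(f"list : ~{list_bytes // 1_000_000} MB")
print(f"array: ~{array_bytes // 1_000_000} MB")
```

Real auditing would use tracemalloc or a heap profiler, but even this toy comparison shows an order-of-magnitude gap for identical data, the sort of slack the column argues rising memory prices will no longer excuse.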
Programming

Cursor CEO Warns Vibe Coding Builds 'Shaky Foundations' That Eventually Crumble (fortune.com) 54

Michael Truell, the 25-year-old CEO and cofounder of Cursor, is drawing a sharp distinction between careful AI-assisted development and the more hands-off approach commonly known as "vibe coding." Speaking at a conference, Truell described vibe coding as a method where users "close your eyes and you don't look at the code at all and you just ask the AI to go build the thing for you." He compared it to constructing a house by putting up four walls and a roof without understanding the underlying wiring or floorboards. The approach might work for quickly mocking up a game or website, but more advanced projects face real risks.

"If you close your eyes and you don't look at the code and you have AIs build things with shaky foundations as you add another floor, and another floor, and another floor, and another floor, things start to kind of crumble," Truell said. Truell and three fellow MIT graduates created Cursor in 2022. The tool embeds AI directly into the integrated development environment and uses the context of existing code to predict the next line, generate functions, and debug errors. The difference, as Truell frames it, is that programmers stay engaged with what's happening under the hood rather than flying blind.
Programming

Apple's App Course Runs $20,000 a Student. Is It Really Worth It? (wired.com) 14

Apple's Developer Academy in Detroit has spent roughly $30 million over four years training hundreds of people to build iPhone apps, but not everyone lands coding jobs right away, according to a WIRED story published this week.

The program launched in 2021 as part of Apple's $200 million response to the Black Lives Matter protests and costs an estimated $20,000 per student -- nearly twice what state and local governments budget for community colleges. About 600 students have completed the 10-month course at Michigan State University. Academy officials say 71% of graduates from the past two years found full-time jobs across various industries.

The program provides iPhones, MacBooks and stipends ranging from $800 to $1,500 per month, though one former student said many participants relied on food stamps. Apple contributed $11.6 million to the academy. Michigan taxpayers and the university's regular students covered about $8.6 million -- nearly 30% of total funding. Two graduates said their lack of proficiency in Android hurt their job prospects. Apple's own US tech workforce went from 6% Black before the academy opened to about 3% this year.
Education

Apple's App Course Runs $20,000 a Student. Is It Really Worth It? (wired.com) 37

An anonymous reader quotes a report from Wired: Two years ago, Lizmary Fernandez took a detour from studying to be an immigration attorney to join a free Apple course for making iPhone apps. The Apple Developer Academy in Detroit launched as part of the company's $200 million response to the Black Lives Matter protests and aims to expand opportunities for people of color in the country's poorest big city. But Fernandez found the program's cost-of-living stipend lacking -- "A lot of us got on food stamps," she says -- and the coursework insufficient for landing a coding job. "I didn't have the experience or portfolio," says the 25-year-old, who is now a flight attendant and preparing to apply to law school. "Coding is not something I got back to."

Since 2021, the academy has welcomed over 1,700 students, a racially diverse mix with varying levels of tech literacy and financial flexibility. About 600 students, including Fernandez, have completed its 10-month course of half-days at Michigan State University, which cosponsors the Apple-branded and Apple-focused program. WIRED reviewed contracts and budgets and spoke with officials and graduates for the first in-depth examination of the nearly $30 million invested in the academy over the past four years -- almost 30 percent of which came from Michigan taxpayers and the university's regular students. As tech giants begin pouring billions of dollars into AI-related job training courses across the country, the Apple academy offers lessons on the challenges of uplifting diverse communities.

[...] The program gives out iPhones and MacBooks and spends an estimated $20,000 per student, nearly twice as much as state and local governments budget for community colleges. [...] About 70 percent of students graduate, which [Sarah Gretter, the academy leader for Michigan State] describes as higher than typical for adult education. She says the goal is for them to take "a next step," whether a job or more courses. Roughly a third of participants are under 25, and virtually all of them pursue further schooling. [...] About 71 percent of graduates from the last two years went on to full-time jobs across a variety of industries, according to academy officials. Amy J. Ko, a University of Washington computer scientist who researches computing education, calls under 80 percent typical for the coding schools she has studied but notes that one of her department's own undergraduate programs has a 95 percent job placement rate.

Windows

Microsoft Says It's Not Planning To Use AI To Rewrite Windows From C To Rust 41

Microsoft has denied any plans to rewrite Windows 11 using AI and Rust after a LinkedIn post from one of its top-level engineers sparked a wave of online backlash by claiming the company's goal was to "eliminate every line of C and C++ from Microsoft by 2030."

Galen Hunt, a distinguished engineer responsible for several large-scale research projects at Microsoft, made the claim in what was originally a hiring post for his team. His original wording described a "North Star" of "1 engineer, 1 month, 1 million lines of code" and outlined a strategy to "combine AI and Algorithms to rewrite Microsoft's largest codebases." The repeated use of "our" in the post led many to interpret it as an official company direction rather than a personal research ambition.

Frank X. Shaw, Microsoft's head of communications, told Windows Latest that the company has no such plans. Hunt subsequently edited his LinkedIn post to clarify that "Windows is NOT being rewritten in Rust with AI" and that his team's work is a research project focused on building technology to enable language-to-language migration. He characterized the reaction as "speculative reading between the lines."
Programming

What Might Adding Emojis and Pictures To Text Programming Languages Look Like? 83

theodp writes: We all mix pictures, emojis, and text freely in our communications. So why not in our code? That's the premise of "Fun With Python and Emoji: What Might Adding Pictures to Text Programming Languages Look Like?" (two-image Bluesky explainer; full slides), which takes a look at what mixing emoji with Python and SQL might look like. A GitHub repo includes a Google Colab-ready Python notebook proof of concept that does rudimentary emoji-to-text translation via an IPython input transformer.

So, in the Golden Age of AI -- some 60+ years after Kenneth Iverson introduced the chock-full-of-symbols APL -- are valid technical reasons still keeping symbols and pictures out of code, or is their absence more of a programming dogma thing?
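The core trick in such a proof of concept is rewriting source text before the interpreter sees it. The repo does this with an IPython input transformer; the sketch below shows the same idea as a plain string rewrite plus `exec`, with an invented emoji-to-identifier table rather than the repo's actual mapping:

```python
# Invented emoji-to-Python mapping, for illustration only.
EMOJI_TO_NAME = {
    "📏": "len",
    "➕": "+",
}

def transform(source: str) -> str:
    """Rewrite emoji to plain Python before execution, mimicking what an
    IPython input transformer does to each input cell. Naive: this would also
    rewrite emoji inside string literals, which tokenizer-aware passes avoid."""
    for emoji, name in EMOJI_TO_NAME.items():
        source = source.replace(emoji, name)
    return source

code = "total = 📏('abc') ➕ 📏('de')"
plain = transform(code)
print(plain)  # total = len('abc') + len('de')

namespace = {}
exec(plain, namespace)  # run the rewritten source
print(namespace["total"])  # 5
```

Because the rewrite happens before parsing, the language itself needs no changes, which is also why the approach stays a novelty: the emoji are surface syntax only.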
Programming

Microsoft To Replace All C/C++ Code With Rust By 2030 (thurrott.com) 272

Microsoft plans to eliminate all C and C++ code across its major codebases by 2030, replacing it with Rust using AI-assisted, large-scale refactoring. "My goal is to eliminate every line of C and C++ from Microsoft by 2030," Microsoft Distinguished Engineer Galen Hunt writes in a post on LinkedIn. "Our strategy is to combine AI and Algorithms to rewrite Microsoft's largest codebases. Our North Star is '1 engineer, 1 month, 1 million lines of code.' To accomplish this previously unimaginable task, we've built a powerful code processing infrastructure. Our algorithmic infrastructure creates a scalable graph over source code at scale. Our AI processing infrastructure then enables us to apply AI agents, guided by algorithms, to make code modifications at scale. The core of this infrastructure is already operating at scale on problems such as code understanding."

Hunt says he's looking to hire a Principal Software Engineer to help with this effort. "The purpose of this Principal Software Engineer role is to help us evolve and augment our infrastructure to enable translating Microsoft's largest C and C++ systems to Rust," writes Hunt. "A critical requirement for this role is experience building production quality systems-level code in Rust -- preferably at least 3 years of experience writing systems-level code in Rust. Compiler, database, or OS implementation experience is highly desired. While compiler implementation experience is not required to apply, the willingness to acquire that experience in our team is required."
Education

Inaugural 'Hour of AI' Event Includes Minecraft, Microsoft, Google and 13.1 Million K-12 Schoolkids (csforall.org) 13

Long-time Slashdot reader theodp writes: Last September, tech-backed nonprofit Code.org pledged to engage 25 million K-12 schoolchildren in an "Hour of AI" this school year. Preliminary numbers released this week by the Code.org Advocacy Coalition showed that [halfway through the five-day event Computer Science Education Week] 13.1 million users had participated in the inaugural Hour of AI, attaining 52.4% of its goal of 25 million participants.

In a pivot from coding to AI literacy, the Hour of AI replaced Code.org's hugely-popular Hour of Code this December as the flagship event of Computer Science Education Week (December 8-14). According to Code.org's 2024-25 Impact Report, "in 2024–25 alone, students logged over 100 million Hours of Code, including more than 43 million in the four months leading up to and including CS Education Week."

Minecraft participated with their own Hour of AI lessons. ("Program an AI Agent to craft tools and build shelter before dusk falls in this iconic challenge!") And Google contributed AI Quests, "a gamified, in-class learning experience" allowing students to "step into the shoes of Google researchers using AI to solve real-world challenges." Other participating organizations included the Scratch Foundation, Lego Education, Adobe, and Roblox.

And Microsoft contributed two activities — including one built with its block-based programming environment Microsoft MakeCode Arcade, with students urged to "code and train your own super-smart bug using AI algorithms and challenge other AI bugs in an epic Tower battle for ultimate Bug Arena glory!"

See all the educational festivities here...
Programming

Rust's 'Vision Doc' Makes Recommendations to Help Keep Rust Growing (rust-lang.org) 80

The team authoring the Rust 2025 Vision Doc interviewed Rust developers to find out what they liked about the language — and have now issued three recommendations "to help Rust continue to scale across domains and usage levels."

— Enumerate and describe Rust's design goals and integrate them into our processes, helping to ensure they are observed by future language designers and the broader ecosystem.

— Double down on extensibility, introducing the ability for crates to influence the development experience and the compilation pipeline.

— Help users to navigate the crates.io ecosystem and enable smoother interop.


The real "empowering magic" of Rust arises from achieving a number of different attributes all at once — reliability, efficiency, low-level control, supportiveness, and so forth. It would be valuable to have a canonical list of those values that we could collectively refer to as a community and that we could use when evaluating RFCs or other proposed designs... We recommend creating an RFC that defines the goals we are shooting for as we work on Rust... One insight from our research is that we don't need to define which values are "most important". We've seen that for Rust to truly work, it must achieve all the factors at once...

We recommend doubling down on extensibility as a core strategy. Rust's extensibility — traits, macros, operator overloading — has been key to its versatility. But that extensibility is currently concentrated in certain areas: the type system and early-stage proc macros. We should expand it to cover supportive interfaces (better diagnostics and guidance from crates) and compilation workflow (letting crates integrate at more stages of the build process)... Doubling down on extensibility will not only make current Rust easier to use, it will enable and support Rust's use in new domains. Safety Critical applications in particular require a host of custom lints and tooling to support the associated standards. Compiler extensibility allows Rust to support those niche needs in a more general way.

We recommend finding ways to help users navigate the crates.io ecosystem... [F]inding which crates to use presents a real obstacle when people are getting started. The Rust org maintains a carefully neutral stance, which is good, but also means that people don't have anywhere to go for advice on a good "starter set" of crates... Part of the solution is enabling better interop between libraries.

AI

Does AI Really Make Coders Faster? (technologyreview.com) 139

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."

But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling the lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article:
  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."

The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Programming

Stanford Computer Science Grads Find Their Degrees No Longer Guarantee Jobs (latimes.com) 125

Elite computer science degrees are no longer a guaranteed on-ramp to tech jobs, as AI-driven coding tools slash demand for entry-level engineers and concentrate hiring around a small pool of already "elite" or AI-savvy developers. The Los Angeles Times reports: "Stanford computer science graduates are struggling to find entry-level jobs" with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. "I think that's crazy." While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers. Stanford students describe a suddenly skewed job market, where just a small slice of graduates -- those considered "cracked engineers" who already have thick resumes building products and doing research -- are getting the few good jobs, leaving everyone else to fight for scraps.

"There's definitely a very dreary mood on campus," said a recent computer science graduate who asked not to be named so they could speak freely. "People [who are] job hunting are very stressed out, and it's very hard for them to actually secure jobs." The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees. [...] Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, it is not offsetting the decline in hiring elsewhere. Employment for specific groups, such as early-career software developers between the ages of 22 and 25 has declined by nearly 20% from its peak in late 2022, according to a Stanford study. [...]

A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need "two skilled engineers and one of these LLM-based agents," which can be just as productive, said Nenad Medvidovic, a computer science professor at the University of Southern California. "We don't need the junior developers anymore," said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. "The AI now can code better than the average junior developer that comes out of the best schools out there." [...] Stanford students say they are arriving at the job market and finding a split in the road; capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing. As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn't have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.
