AI

Microsoft is Making 'Significant Investments' in Training Its Own AI Models (theverge.com) 14

An anonymous reader shares a report: Microsoft AI launched its first in-house models last month, adding to the already complicated relationship with its OpenAI partner. Now, Microsoft AI chief Mustafa Suleyman says the company is making "significant investments" in the compute capacity required to train Microsoft's own future frontier models.

"We should have the capacity to build world class frontier models in house of all sizes, but we should be very pragmatic and use other models where we need to," said Suleyman during Microsoft's employee-only town hall on Thursday. "We're also going to be making significant investments in our own cluster, so today MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things."

Suleyman hinted that Microsoft has ambitions to train models that are comparable to Meta, Google, and xAI's efforts on clusters that are "six to ten times larger in size" than what Microsoft used for its MAI-1-preview. "Much more to do, but it's good to take the first steps," said Suleyman.

AI

AI-generated Medical Data Can Sidestep Usual Ethics Review, Universities Say (nature.com) 38

An anonymous reader shares a report: Medical researchers at some institutions in Canada, the United States and Italy are using data created by artificial intelligence (AI) from real patient information in their experiments without the need for permission from their institutional ethics boards, Nature has learnt.

To generate what is called 'synthetic data', researchers train generative AI models using real human medical information, then ask the models to create data sets with statistical properties that represent, but do not include, human data.

Typically, when research involves human data, an ethics board must review how studies affect participants' rights, safety, dignity and well-being. However, institutions including the IRCCS Humanitas Research Hospital in Milan, Italy, the Children's Hospital of Eastern Ontario (CHEO) in Ottawa and the Ottawa Hospital, both in Canada, and Washington University School of Medicine (WashU Medicine) in St. Louis, Missouri, have waived these requirements for research involving synthetic data.

The reasons the institutions use to justify this decision differ. However, the potential benefits of using synthetic data include protecting patient privacy, being more easily able to share data between sites and speeding up research, says Khaled El Emam, a medical AI researcher at the CHEO Research Institute and the University of Ottawa.

AI

AI Use At Large Companies Is In Decline, Census Bureau Says (gizmodo.com) 75

An anonymous reader quotes a report from Gizmodo: [D]espite the AI industry's attempts to make itself seem omnipresent, a new report this week shows that adoption at large U.S. companies has declined. The report comes from the Census Bureau and shows that the rate of AI adoption by large companies -- that is, firms with over 250 employees -- has been declining slightly in recent weeks. The report is based on a biweekly survey, dubbed Business Trends and Outlook (or BTOS), of some 1.2 million U.S. firms. The survey, which asks businesses about their use of AI tools, such as machine learning and agents, found that -- between June and now -- the rate of adoption had declined from 14 to 12 percent. Futurism notes that this is the largest drop-off in the adoption rate since the survey first began in 2023, although the survey also showed a slight increase in AI use among smaller companies.

The moderate drop-off comes after the rate of adoption had climbed sharply over the last few years. When the survey first began, in September of 2023, the AI adoption rate hovered around 3.7 percent (PDF), while the adoption rate in December 2024 was around 5.7 percent. In the second quarter of this year, the rate also rose significantly, climbing from 7.4 percent to 9.2 percent. The new drop-off in reported usage comes not long after another study, this one published by MIT, found that a vast majority of corporate AI pilot programs had failed to produce any material benefit to the companies involved.

Cloud

OpenAI and Oracle Ink Historic $300 Billion Cloud Computing Deal (techcrunch.com) 7

Amid yesterday's news of Oracle's soaring stock, which propelled founder Larry Ellison to the top of the world's richest list, the Wall Street Journal reported that the cloud giant and OpenAI have struck one of the largest cloud contracts ever signed. Under the deal, OpenAI will purchase $300 billion worth of compute power from Oracle over roughly five years, with purchases beginning in 2027.

"This move away from Microsoft was timed with OpenAI's involvement with the Stargate Project, in which OpenAI, SoftBank, and Oracle have committed to invest $500 billion into domestic data center projects over the next four years," notes TechCrunch.

OpenAI also recently signed a cloud deal with Google. "The deal ... underscores the fact that the two are willing to overlook heavy competition between them to meet the massive computing demands," wrote analysts in a Reuters report.
AI

Britannica and Merriam-Webster Sue Perplexity Over AI 'Answer Engine' (reuters.com) 20

Perplexity AI is the latest AI startup to be hit with a lawsuit by copyright holders, accused by Encyclopedia Britannica and Merriam-Webster of misusing their content in its "answer engine" for internet searches. From a report: The reference companies alleged in New York federal court on Wednesday that Perplexity unlawfully copied their material and diminished their revenue by redirecting their web traffic to its AI-generated summaries.
Media

Roku Wants You To See a Lot More AI-Generated Ads (theverge.com) 23

Roku plans to dramatically expand its advertiser base from 200 to 100,000 companies using generative AI tools, CFO Dan Jedda told investors at recent conferences. The streaming platform, which commands over 20% of US TV viewing and reaches half of broadband households, is currently "roughly half sold out" on ad inventory. Jedda said small businesses can create commercials "within minutes" using AI tools Roku has integrated into its self-serve platform.
AI

Albania Appoints AI Bot as Minister To Tackle Corruption (straitstimes.com) 34

A new minister in Albania charged with handling public procurement will be impervious to bribes, threats, or attempts to curry favour. That is because Diella, as she is called, is an AI-generated bot. From a report: Prime Minister Edi Rama, who is about to begin his fourth term, said on Sept 11 that Diella, which means "sun" in Albanian, will manage and award all public tenders in which the government contracts private companies for various projects.

"Diella is the first Cabinet member who isn't physically present, but is virtually created by AI," Mr Rama said during a speech unveiling his new Cabinet. She will help make Albania "a country where public tenders are 100 per cent free of corruption." The awarding of such contracts has long been a source of corruption scandals in Albania, a Balkan country that experts say is a hub for gangs seeking to launder their money from trafficking drugs and weapons across the world, and where graft has reached the corridors of power.

The Internet

RSS Co-Creator Launches New Protocol For AI Data Licensing 26

A group led by RSS co-creator Eckart Walther has launched a new protocol designed to standardize and scale licensing of online content for AI training. Backed by publishers like Reddit, Quora, Yahoo, and Medium, Real Simple Licensing (RSL) combines machine-readable terms in robots.txt with a collective rights organization, aiming to do for AI training data what ASCAP did for music royalties. However, it remains to be seen whether AI labs will agree to adopt it. TechCrunch reports: According to RSL co-founder Eckart Walther, who also co-created the RSS standard, the goal was to create a training-data licensing system that could scale across the internet. "We need to have machine-readable licensing agreements for the internet," Walther told TechCrunch. "That's really what RSL solves."

For years, groups like the Dataset Providers Alliance have been pushing for clearer collection practices, but RSL is the first attempt at a technical and legal infrastructure that could make it work in practice. On the technical side, the RSL Protocol lays out specific licensing terms a publisher can set for their content, whether that means AI companies need a custom license or to adopt Creative Commons provisions. Participating websites will include the terms as part of their "robots.txt" file in a prearranged format, making it straightforward to identify which data falls under which terms.
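As a rough sketch of the idea, licensing terms published alongside crawler rules in robots.txt might look something like the following; the directive name and terms URL here are illustrative assumptions, not quoted from the RSL specification:

```
# Hypothetical robots.txt carrying machine-readable licensing terms.
# (Directive name and URL are illustrative, not actual RSL syntax.)
User-agent: *
Allow: /

# Point AI crawlers at the site's machine-readable license terms
License: https://example.com/license.xml
```

The appeal of this design is that a publisher states its terms once, in a file every well-behaved crawler already fetches, instead of negotiating separately with each AI lab.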

On the legal side, the RSL team has established a collective licensing organization, the RSL Collective, that can negotiate terms and collect royalties, similar to ASCAP for musicians or MPLC for films. As in music and film, the goal is to give licensors a single point of contact for paying royalties and provide rights holders a way to set terms with dozens of potential licensors at once. A host of web publishers have already joined the collective, including Yahoo, Reddit, Medium, O'Reilly Media, Ziff Davis (owner of Mashable and Cnet), Internet Brands (owner of WebMD), People Inc., and The Daily Beast. Others, like Fastly, Quora, and Adweek, are supporting the standard without joining the collective.

Notably, the RSL Collective includes some publishers that already have licensing deals -- most notably Reddit, which receives an estimated $60 million a year from Google for use of its training data. There's nothing stopping companies from cutting their own deals within the RSL system, just as Taylor Swift can set special terms for licensing while still collecting royalties through ASCAP. But for publishers too small to draw their own deals, RSL's collective terms are likely to be the only option.
Businesses

Oracle's Best Day Since 1992 Puts Ellison on Top of the World's Richest List 42

Oracle shares had their best day since 1992, skyrocketing 36% and adding $244 billion in market value as surging AI-driven cloud demand pushed the company toward a $1 trillion valuation. The surge boosted founder Larry Ellison's fortune by $100 billion, making him the world's wealthiest person. CNBC reports: The company said Tuesday after the bell that it has $455 billion in remaining performance obligations, up 359% from a year earlier. "This is a very historic kind of print right here from Oracle with this backlog," Ben Reitzes, technology research head at Melius Research, told CNBC's "Closing Bell: Overtime" on Tuesday. "The Street was looking for about $180 billion in RPO and they're talking about a number that is a multiple of that. That is astounding."

Oracle now sees $18 billion in cloud infrastructure revenue in fiscal 2026, with the company calling for the annual sum to reach $32 billion, $73 billion, $114 billion and $144 billion over the subsequent four years. Other analysts were left "blown away" and "in shock." D.A. Davidson's Gil Luria called it "absolutely staggering" on CNBC's "Fast Money." Wells Fargo analysts said it was a "momentous confirmation" of the AI trade.

Oracle's cloud revenue projections overshadowed an otherwise lackluster fiscal first-quarter report in which the company missed expectations on the top and bottom lines. The company had earnings of an adjusted $1.47 per share for the quarter, just below the $1.48 per share expected by analysts polled by LSEG. Revenue for the first quarter came in at $14.93 billion, missing the $15.04 billion expected.
AI

Developers Joke About 'Coding Like Cavemen' As AI Service Suffers Major Outage (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday afternoon, Anthropic experienced a brief but complete service outage that took down its AI infrastructure, leaving developers unable to access Claude.ai, the API, Claude Code, or the management console for around half an hour. The outage affected all three of Anthropic's main services simultaneously, with the company posting at 12:28 pm Eastern that "APIs, Console, and Claude.ai are down. Services will be restored as soon as possible." As of press time, the services appear to be restored. The disruption, though lasting only about 30 minutes, quickly took the top spot on tech link-sharing site Hacker News for a short time and inspired immediate reactions from developers who have become increasingly reliant on AI coding tools for their daily work. "Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow," joked one Hacker News commenter. Another user recalled a joke from a previous AI outage: "Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024."

The most recent outage came at an inopportune time, affecting developers across the US who have integrated Claude into their workflows. One Hacker News user observed: "It's like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors. In EU working hours there's rarely any outages." Another user also noted this pattern, saying that "early morning here in the UK everything is fine, as soon as most of the US is up and at it, then it slowly turns to treacle." While some users criticized Anthropic for reliability issues in recent months, the company's status page acknowledged the issue within 39 minutes of the initial reports, and by 12:55 pm Eastern announced that a fix had been implemented and that the company's teams were monitoring the results.

AI

AI Darwin Awards Launch To Celebrate Spectacularly Bad Deployments (theregister.com) 19

An anonymous reader shares a report: The Darwin Awards are being extended to include examples of misadventures involving overzealous applications of AI. Nominations are open for the 2025 AI Darwin Awards and the list of contenders is growing, fueled by a tech world weary of AI and evangelists eager to shove it somewhere inappropriate.

There's the Taco Bell drive-thru incident, where the chain catastrophically overestimated AI's ability to understand customer orders. Or the Replit moment, where a spot of vibe coding nuked a production database, despite instructions from the user not to fiddle with code without permission. Then there's the woeful security surrounding an AI chatbot used to screen applicants at McDonald's, where feeding in a password of 123456 gave access to the details of 64 million job applicants.

Microsoft

Microsoft To Use Some AI From Anthropic In Shift From OpenAI 6

Microsoft is diversifying its AI portfolio by integrating some of Anthropic's AI features into Office 365 apps. "The move will blend Anthropic and OpenAI technology in the apps, after years in which Microsoft primarily used OpenAI for the new features in Word, Excel, Outlook and PowerPoint," reports Reuters. From the report: Developers making Office AI features found Anthropic's latest models performed better than OpenAI's in automating tasks such as financial functions in Excel or generating PowerPoint presentations based on instructions, the report said, citing one of the two people involved in the effort. Microsoft will pay its cloud rival Amazon Web Services to access the Anthropic models, according to the report. AWS is one of Anthropic's largest shareholders.

OpenAI's launch of GPT-5 is a step up in quality, but Anthropic's Claude Sonnet 4 performs better at creating PowerPoint presentations that are more aesthetically pleasing, the report said. Microsoft plans to announce the move in the coming weeks, while the price of AI tools in Office will stay the same, the report said.
"As we've said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership," a Microsoft spokesperson said.
AI

HHS Asks All Employees To Start Using ChatGPT (404media.co) 64

An anonymous reader quotes a report from 404 Media: Employees at Robert F. Kennedy Jr.'s Department of Health and Human Services received an email Tuesday morning with the subject line "AI Deployment," which told them that ChatGPT would be rolled out for all employees at the agency. The deployment is being overseen by Clark Minor, a former Palantir employee who's now Chief Information Officer at HHS. "Artificial intelligence is beginning to improve health care, business, and government," the email, sent by deputy secretary Jim O'Neill and seen by 404 Media, begins. "Our department is committed to supporting and encouraging this transformation. In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done. We should all be vigilant against barriers that could slow our progress toward making America healthy again."

"I'm excited to move us forward by making ChatGPT available to everyone in the Department effective immediately," it adds. "Some operating divisions, such as FDA and ACF [Administration for Children and Families], have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, 'The AI revolution has arrived.'" [...] The email says that the rollout was being led by Minor, who worked at the surveillance company Palantir from 2013 through 2024. It states Minor has "taken precautions to ensure that your work with AI is carried out in a high-security environment," and that "you can input most internal data, including procurement sensitive data and routine non-sensitive personally identifiable information, with confidence."

It then goes on to say that "ChatGPT is currently not approved for disclosure of sensitive personally identifiable information (such as SSNs and bank account numbers), classified information, export-controlled data, or confidential commercial information subject to the Trade Secrets Act." The email does not define what "non-sensitive personally identifiable information" is. HHS did not immediately respond to a request for comment from 404 Media. [...] The agency has also said it plans to roll out AI through HHS's Centers for Medicare and Medicaid Services that will determine whether patients are eligible to receive certain treatments. These types of systems have been shown to be biased when tried, resulting in fewer patients getting the care they need.

AI

How Google Is Already Monetizing Its AI Services To Generate Revenue (cnbc.com) 25

Google Cloud CEO Thomas Kurian revealed the company has already made billions from AI by monetizing through consumption-based pricing, subscriptions, and upselling. "Our backlog is now at $106 billion -- it is growing faster than our revenue," said Kurian, speaking at the Goldman Sachs Communacopia and Technology Conference in San Francisco. "More than 50% of it will convert to revenue over the next two years." CNBC reports: Kurian said some people pay Google by consumption, giving the example of AI infrastructure purchased by enterprise customers. "Whether it's a GPU, TPU or a model, you pay by token -- meaning you pay by what you use," he said. Tokens represent chunks of text that AI models process when they generate or interpret language. Some people use customer service systems, paying for it by what Kurian called "deflection rates." Such rates are priced based on the business value customers get -- things like uptime, scalability, AI features and security. Google Cloud also provides tools like a "deflection dashboard" that customers can use to track and manage agent interactions. Last month, Google won a $10 billion cloud contract from Meta spanning six years. Meta had largely been reliant on Amazon Web Services for cloud infrastructure, though it also uses Microsoft Azure.
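The pay-by-token consumption model Kurian describes reduces to a simple metered calculation. A minimal sketch in Python, where the rates and model names are made up for illustration (the article does not give Google's actual price list):

```python
# Sketch of pay-by-token consumption pricing.
# Rates and model names are hypothetical, not Google Cloud's actual prices.
HYPOTHETICAL_RATES_PER_MILLION_TOKENS = {
    "small-model": 0.10,  # USD per 1M tokens processed
    "large-model": 2.50,
}

def usage_cost(model: str, tokens: int) -> float:
    """Return the charge for processing `tokens` tokens with `model`."""
    rate = HYPOTHETICAL_RATES_PER_MILLION_TOKENS[model]
    return tokens / 1_000_000 * rate

# A customer pushing 40M tokens through the large model pays
# 40 x 2.50 = 100.00 under these made-up rates.
print(f"${usage_cost('large-model', 40_000_000):.2f}")
```

The point of the model is that revenue scales with usage rather than seat count, which is why Kurian pairs it with upselling customers to higher-priced tiers as their consumption grows.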

Some customers pay for cloud services by way of subscriptions. "You pay per user per monthly fee -- for example, agents or Workspace," said Kurian, referring to the company's Gemini products, which have their own subscription tiers with various storage options, and the Google Workspace productivity suite, which also has several subscription tiers. Google One, a popular personal cloud storage subscription, offers a basic monthly service to users for $1.99 a month. Earlier this year, the company offered a new subscription tier called "Google AI Ultra," which offers exclusive access to the company's most "cutting edge" AI products with 30 terabytes of storage for $249.99 per month. Kurian gave an example of Google Cloud's cybersecurity subscription tiers, saying "we've seen huge growth in that."

Kurian said that upselling is another key aspect of Google Cloud's strategy. "We also upsell people as they use more of it from one version to another because we have higher quality models and higher-priced tiers," Kurian said. He said that once customers use Google's AI services, they wind up using more of the company's products. "That leads customers who sign a commitment or contract to spend more than they contracted for, which drives more revenue growth," he added. Kurian says it is capturing new customers more quickly too. "We've seen 28% sequential quarter-over-quarter growth in new customer wins in the first half of the year," said Kurian, adding that nearly two-thirds of customers already use Google Cloud's AI tools in a meaningful way. "Selling to existing customers is always easier than selling to new customers, so it helps us improve the cost of sales," Kurian said.

Social Networks

Sam Altman Says Bots Are Making Social Media Feel 'Fake' (techcrunch.com) 83

An anonymous reader quotes a report from TechCrunch: X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched Codex, the software programming service that takes on Anthropic's Claude Code, in May. Lately, that subreddit has been so filled with posts from self-proclaimed Claude Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

[...] Altman also throws a dig at the incentives created when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is because OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by a competitor, or by some third-degree contractor, giving the competitor plausible deniability. [...] Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has made models so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.

AI

Gemini App Finally Expands To Audio Files 6

Google rolled out three big Gemini updates: the app now supports audio uploads (with tiered limits for free vs. paid users), Search gains AI Mode in five new languages, and NotebookLM expands to generate reports, study guides, quizzes, and other formats in over 80 languages. The Verge reports: According to a Monday post on X by Josh Woodward, vice president of Google Labs and Gemini, audio file compatibility was the "#1 request" for the Gemini app. Free Gemini users max out at 10 minutes of audio and five free prompts each day. AI Pro or AI Ultra users, meanwhile, can upload audio up to three hours in length. All Gemini prompts accommodate up to 10 files across various file formats, including within ZIP files.
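The tiered limits described above (10 minutes of audio and five prompts a day for free users, up to three hours for AI Pro and AI Ultra) amount to a simple gate. A minimal sketch, where the function and data layout are illustrative assumptions rather than Google's actual implementation:

```python
# Sketch of the tiered audio-upload limits described in the article.
# Tier names follow the article; the structure itself is illustrative.
LIMITS = {
    "free":  {"max_audio_minutes": 10,  "daily_prompts": 5},
    "pro":   {"max_audio_minutes": 180, "daily_prompts": None},  # None = no cap modeled
    "ultra": {"max_audio_minutes": 180, "daily_prompts": None},
}

def can_upload(tier: str, audio_minutes: float, prompts_used_today: int) -> bool:
    """Return True if an upload of `audio_minutes` is allowed on `tier`."""
    limit = LIMITS[tier]
    if audio_minutes > limit["max_audio_minutes"]:
        return False
    daily = limit["daily_prompts"]
    return daily is None or prompts_used_today < daily

print(can_upload("free", 12, 0))  # 12-minute file exceeds the free cap
print(can_upload("pro", 12, 0))   # same file is allowed on a paid tier
```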

Additionally, Google Search's AI Mode has rolled out five new language options: Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese, thanks to the integration of Gemini 2.5 with Search, according to a company blog: "With this expansion, more people can now use AI Mode to ask complex questions in their preferred language, while exploring the web more deeply." The Gemini-powered NotebookLM software is also getting an update in the form of new report styles in over 80 languages based on a user's uploaded documents, files, and other media.
AI

All IT Work To Involve AI By 2030, Says Gartner (theregister.com) 61

An anonymous reader quotes a report from The Register: All work in IT departments will be done with the help of AI by 2030, according to analyst firm Gartner, which thinks massive job losses won't result. Speaking during the keynote address of the firm's Symposium event in Australia today, VP analyst Alicia Mullery said 81 percent of work is currently done by humans acting alone without AI assistance. Five years from now Gartner believes 75 percent of IT work will be human activity augmented by AI, with the remainder performed by bots alone.

Distinguished VP analyst Daryl Plummer said this shift will mean IT departments gain labor capacity and will need to show they deserve to keep it. "You never want to look like you have too many people," he advised, before suggesting technology leaders consult with peers elsewhere in a business to identify value-adding opportunities IT departments can execute. Plummer said Gartner doesn't foresee an "AI jobs bloodbath" in IT or other industries for at least five years, adding that just one percent of job losses today are attributable to AI. He and Mullery did predict a reduction in entry-level jobs, as AI lets senior staff tackle work they would once have assigned to juniors.

The two analysts also forecast that businesses will struggle to implement AI effectively, because the costs of running AI workloads balloon. ERP, Plummer said, has straightforward up-front costs: You pay to license and implement it, then to train people so they can use it. AI needs that same initial investment but few organizations can keep up with AI vendors' pace of innovation. Adopting AI therefore creates a requirement for near-constant exploration of use cases and subsequent retraining. Plummer said orgs that adopt AI should expect to uncover 10 unanticipated ancillary costs, among them the need to acquire new datasets, and the costs of managing multiple models. The need to use one AI model to check the output of others -- a necessary step to verify accuracy -- is another cost to consider. AI's hidden costs mean Gartner believes 65 percent of CIOs aren't breaking even on AI investments.

AI

Mathematicians Find GPT-5 Makes Critical Errors in Original Proof Generation 60

University of Luxembourg mathematicians tested whether GPT-5 could extend a qualitative fourth-moment theorem to include explicit convergence rates, a previously unaddressed problem in the Malliavin-Stein framework. The September 2025 experiment, prompted by claims GPT-5 solved a convex optimization problem, revealed the AI made critical errors requiring constant human correction.

GPT-5 overlooked an essential covariance property easily deducible from provided documents. The researchers compared the experience to working with a junior assistant needing careful verification. They warned AI reliance during doctoral training risks students losing opportunities to develop fundamental mathematical skills through mistakes and exploration.
Microsoft

Some Angry GitHub Users Are Rebelling Against GitHub's Forced Copilot AI Features (theregister.com) 63

Slashdot reader Charlotte Web shared this report from the Register: Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories. The second most popular discussion — where popularity is measured in upvotes — is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered, despite an abundance of comments critical of generative AI and Copilot...

The author of the first, developer Andi McClure, published a similar request to Microsoft's Visual Studio Code repository in January, objecting to the reappearance of a Copilot icon in VS Code after she had uninstalled the Copilot extension... "I've been for a while now filing issues in the GitHub Community feedback area when Copilot intrudes on my GitHub usage," McClure told The Register in an email. "I deeply resent that on top of Copilot seemingly training itself on my GitHub-posted code in violation of my licenses, GitHub wants me to look at (effectively) ads for this project I will never touch. If something's bothering me, I don't see a reason to stay quiet about it. I think part of how we get pushed into things we collectively don't want is because we stay quiet about it."

It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution. It's what the Servo project characterizes in its ban on AI code contributions as the lack of code correctness guarantees, copyright issues, and ethical concerns. Similar objections have been used to justify AI code bans in GNOME's Loupe project, FreeBSD, Gentoo, NetBSD, and QEMU... Calls to shun Microsoft and GitHub go back a long way in the open source community, but moved beyond simmering dissatisfaction in 2022 when the Software Freedom Conservancy (SFC) urged free software supporters to give up GitHub, a position SFC policy fellow Bradley M. Kuhn recently reiterated.

McClure says in the last six months her posts have drawn more community support — and tells the Register there's been a second change in how people see GitHub within the last month. After GitHub moved from a distinct subsidiary to part of Microsoft's CoreAI group, "it seems to have galvanized the open source community from just complaining about Copilot to now actively moving away from GitHub."
IT

There's 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago (fortune.com) 129

An anonymous reader shared this report from Fortune: The percentage of young Gen Z employees between the ages of 21 and 25 has been cut in half at technology companies over the past two years, according to recent data from Pave, a compensation-management software business, drawing on workforce data from more than 8,300 companies.

These young workers accounted for 15% of the workforce at large public tech firms in January 2023. By August 2025, they only represented 6.8%. The situation isn't pretty at big private tech companies, either — during that same time period, the proportion of early-career Gen Z employees dwindled from 9.3% to 6.8%. Meanwhile, the average age of a worker at a tech company has risen dramatically over those two and a half years. Between January 2023 and July 2025, the average age of all employees at large public technology businesses rose from 34.3 years to 39.4 years — more than a five-year difference. On the private side, the change was less drastic, with the typical age only increasing from 35.1 to 36.6 years old...

"If you're 35 or 40 years old, you're pretty established in your career, you have skills that you know cannot yet be disrupted by AI," Matt Schulman, founder and CEO of Pave, tells Fortune. "There's still a lot of human judgment when you're operating at the more senior level...If you're a 22-year-old that used to be an Excel junkie or something, then that can be disrupted. So it's almost a tale of two cities." Schulman points to a few reasons why tech company workforces are getting older and locking Gen Z out of jobs. One is that big companies — like Salesforce, Meta, and Microsoft — are becoming a lot more efficient thanks to the advent of AI. And despite their soaring trillion-dollar profits, they're cutting employees at the bottom rungs in favor of automation. Entry-level jobs have also dwindled because of AI agents, and stalling promotions across many agencies looking to do more with less. Once technology companies weed out junior roles, occupied by Gen Zers, their workforces are bound to rise in age.

Schulman tells Fortune Gen Z also has an advantage: that tech corporations can see them as fresh talent that "can just break the rules and leverage AI to a much greater degree without the hindrance of years of bias." And Priya Rathod, workplace trends editor for LinkedIn, tells Fortune there are promising tech-industry entry-level roles in AI ethics, cybersecurity, UX, and product operations. "Building skills through certifications, gig work, and online communities can open doors....

"For Gen Z, the right certifications or micro credentials can outweigh a lack of years on the resume. This helps them stay competitive even when entry level opportunities shrink."
