IT

WSJ: Tech-Industry Workers Now 'Miserable', Fearing Layoffs, Working Longer Hours (msn.com) 166

"Not so long ago, working in tech meant job security, extravagant perks and a bring-your-whole-self-to-the-office ethos rare in other industries," writes the Wall Street Journal.

But now tech work "looks like a regular job," with workers "contending with the constant fear of layoffs, longer hours and an ever-growing list of responsibilities for the same pay." Now employees find themselves doing the work of multiple laid-off colleagues. Some have lost jobs only to be rehired into positions that aren't eligible for raises or stock grants. Changing jobs used to be a surefire way to secure a raise; these days, asking for more money can lead to a job offer being withdrawn.

The shift in tech has been building slowly. For years, demand for workers outstripped supply, a dynamic that peaked during the Covid-19 pandemic. Big tech companies like Meta and Salesforce admitted they brought on too many employees. The ensuing downturn included mass layoffs that started in 2022...

[S]ome longtime tech employees say they no longer recognize the companies they work for. Management has become more focused on delivering the results Wall Street expects. Revenue remains strong for tech giants, but they're pouring resources into costly AI infrastructure, putting pressure on cash flow. With the industry all grown up, a heads-down, keep-quiet mentality has taken root, workers say... Tech workers are still well-paid compared with other sectors, but currently there's a split in the industry. Those working in AI — and especially those with Ph.D.s — are seeing their compensation packages soar. But those without AI experience are finding they're better off staying where they are, because companies aren't paying what they were a few years ago.

Other excerpts from the Wall Street Journal's article:
  • "I'm hearing of people having 30 direct reports," says David Markley, who spent seven years at Amazon and is now an executive coach for workers at large tech companies. "It's not because the companies don't have the money. In a lot of ways, it's because of AI and the narratives out there about how collapsing the organization is better...."
  • Google co-founder Sergey Brin told a group of employees in February that 60 hours a week was the sweet spot of productivity, in comments reported earlier by the New York Times.
  • One recruiter at Meta who had been laid off by the company was rehired into her old role last year, but with a catch: She's now classified as a "short-term employee." Her contract is eligible for renewal, but she doesn't get merit pay increases, promotions or stock. The recruiter says she's responsible for a volume of work that used to be spread among several people. The company refers to being loaded with such additional responsibilities as "agility."
  • More than 50,000 tech workers from over 100 companies have been laid off in 2025, according to Layoffs.fyi, a website that tracks job cuts and crowdsources lists of laid-off workers...

Even before those 50,000 layoffs in 2025, Silicon Valley's Mercury News was citing some interesting statistics from economic research/consulting firm Beacon Economics. In 2020, 2021 and 2022, the San Francisco Bay Area added 74,700 tech jobs. But then, in 2023 and 2024, the industry slashed even more tech jobs -- 80,200 -- for a net five-year loss of 5,500.

So is there really a cutback in perks, and a fear of layoffs casting a pall over the industry? Share your own thoughts and experiences in the comments. Do you agree with the picture being painted by the Wall Street Journal?

The Journal told its readers that tech workers are now "just like the rest of us: miserable at work."


Education

Canadian University Cancels Coding Competition Over Suspected AI Cheating (uwaterloo.ca) 40

The university blamed it on "the significant number of students" who violated their coding competition's rules. Long-time Slashdot reader theodp quotes this report from The Logic: Finding that many students violated rules and submitted code not written by themselves, the University of Waterloo's Centre for Computing and Math decided not to release results from its annual Canadian Computing Competition (CCC), which many students rely on to bolster their chances of being accepted into Waterloo's prestigious computing and engineering programs, or land a spot on teams to represent Canada in international competitions.

"It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help," the CCC co-chairs explained in a statement. "As such, the reliability of 'ranking' students would neither be equitable, fair, or accurate."

"It is disappointing that the students who violated the CCC Rules will impact those students who are deserving of recognition," the univeresity said in its statement. They added that they are "considering possible ways to address this problem for future contests."

AI

NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com) 105

A New York Times technology columnist has a question.

"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?

But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.

Microsoft

Devs Sound Alarm After Microsoft Subtracts C/C++ Extension From VS Code Forks (theregister.com) 42

Some developers are "crying foul" after Microsoft's C/C++ extension for Visual Studio Code stopped working with VS Code derivatives like VS Codium and Cursor, reports The Register. The move has prompted Cursor to transition to open-source alternatives, while some developers are calling for a regulatory investigation into Microsoft's alleged anti-competitive behavior. From the report: In early April, programmers using VS Codium, an open-source fork of Microsoft's MIT-licensed VS Code, and Cursor, a commercial AI code assistant built from the VS Code codebase, noticed that the C/C++ extension stopped working. The extension adds C/C++ language support, such as Intellisense code completion and debugging, to VS Code. The removal of these capabilities from competing tools breaks developer workflows, hobbles the editor, and arguably hinders competition. The breaking change appears to have occurred with the release of v1.24.5 on April 3, 2025.

Following the April update, attempts to install the C/C++ extension outside of VS Code generate this error message: "The C/C++ extension may be used only with Microsoft Visual Studio, Visual Studio for Mac, Visual Studio Code, Azure DevOps, Team Foundation Server, and successor Microsoft products and services to develop and test your applications." Microsoft has forbidden the use of its extensions outside of its own software products since at least September 2020, when the current licensing terms were published. But it hasn't enforced those terms in its C/C++ extension with an environment check in its binaries until now. [...]

Developers discussing the issue in Cursor's GitHub repo have noted that Microsoft recently rolled out a competing AI software agent capability, dubbed Agent Mode, within its Copilot software. One such developer who contacted us anonymously told The Register they sent a letter about the situation to the US Federal Trade Commission, asking them to probe Microsoft for unfair competition -- alleging self-preferencing, bundling Copilot without a removal option, and blocking rivals like Cursor to lock users into its AI ecosystem.

Intel

Intel's AI PC Chips Aren't Selling Well (tomshardware.com) 56

Intel is grappling with an unexpected market shift as customers eschew its new AI-focused processors for cheaper previous-generation chips. The company revealed during its recent earnings call that demand for older Raptor Lake processors has surged while its newer, more expensive Lunar Lake and Meteor Lake AI PC chips struggle to gain traction.

This surprising trend, first reported by Tom's Hardware, has created a production capacity shortage for Intel's 'Intel 7' process node that will "persist for the foreseeable future," despite the fact that current-generation chips utilize TSMC's newer nodes. "Customers are demanding system price points that consumers really want," explained Intel executive Michelle Johnston Holthaus, noting that economic concerns and tariffs have affected inventory decisions.

Microsoft

Microsoft's Big AI Hire Can't Match OpenAI (newcomer.co) 25

An anonymous reader shares a report: At Microsoft's annual executive huddle last month, the company's chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT's growth over the same period, arching ever upward toward 400 million weekly users.

OpenAI's iconic chatbot was soaring, while Microsoft's best hope for a mass-adoption AI tool was idling. It was a sobering chart for Microsoft's consumer AI team and the man who's been leading it for the past year, Mustafa Suleyman. Microsoft brought Suleyman aboard in March of 2024, along with much of the talent at his struggling AI startup Inflection, in return for a $650 million licensing fee that made Inflection's investors whole, and then some.

[...] Yet from the very start, people inside the company told me they were skeptical. Many outsiders have struggled to make an impact or even survive at Microsoft, a company that's full of lifers who cut their tech teeth in a different era. My skeptical sources noted Suleyman's previous run at a big company hadn't gone well, with Google stripping him of some management responsibilities following complaints of how he treated staff, the Wall Street Journal reported at the time. There was also much eye-rolling at the fact that Suleyman was given the title of CEO of Microsoft AI. That designation is typically reserved for the top executive at companies it acquires and lets operate semi-autonomously, such as LinkedIn or GitHub.

AI

YC Partner Argues Most AI Apps Are Currently 'Horseless Carriages' (koomen.dev) 15

Pete Koomen, a Y Combinator partner, argues that current AI applications often fail by unnecessarily constraining their underlying models, much like early automobiles that mimicked horse-drawn carriages rather than reimagining transportation. In his detailed critique, Koomen uses Gmail's AI email draft feature as a prime example. The tool generates formal, generic emails that don't match users' actual writing styles, often producing drafts longer than what users would naturally write.

The critical flaw, according to Koomen, is that users cannot customize the system prompt -- the instructions that tell the AI how to behave. "When an LLM agent is acting on my behalf I should be allowed to teach it how to do that by editing the System Prompt," Koomen writes. He suggests AI is actually better at reading and transforming text than generating it. His vision for truly useful AI email tools involves automating mundane work -- categorizing, prioritizing, and drafting contextual replies based on personalized rules -- rather than simply generating content from scratch. The essay argues that developers should build "agent builders" instead of agents, allowing users to teach AI systems their preferences and patterns.
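Koomen's "agent builder" idea can be illustrated with a short sketch. Everything below is hypothetical -- the function name, the default prompt text, and the message format are illustrative, not Gmail's actual implementation -- but it shows the core point: if the system prompt is ordinary user-owned text, changing the assistant's behavior means editing prose, not code.

```python
# Sketch of a user-editable system prompt, in the spirit of Koomen's
# "agent builder" argument. All names and prompt text are illustrative.

DEFAULT_SYSTEM_PROMPT = """\
You are drafting email replies on my behalf.
- Match my voice: brief, informal, no sign-off boilerplate.
- If the email just needs a yes/no, answer in one sentence.
"""

def build_request(system_prompt: str, incoming_email: str) -> list[dict]:
    """Compose the message list a typical LLM chat API would receive.

    The system prompt is passed in as plain data, so the user can
    rewrite the assistant's behavior without touching application code.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Draft a reply to:\n{incoming_email}"},
    ]

# A user who prefers formal replies simply edits the prompt text:
my_prompt = DEFAULT_SYSTEM_PROMPT.replace("brief, informal", "formal, polite")
request = build_request(my_prompt, "Can we move Friday's call to 3pm?")
```

The application code never changes; only the user's prompt does -- which is roughly what Koomen means by shipping an agent builder rather than a fixed agent.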

The Internet

Perplexity CEO Says Its Browser Will Track Everything Users Do Online To Sell Ads (techcrunch.com) 73

An anonymous reader quotes a report from TechCrunch: Perplexity CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads. "That's kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you," Srinivas said. "Because some of the prompts that people do in these AIs is purely work-related. It's not like that's personal."

And work-related queries won't help the AI company build an accurate-enough dossier. "On the other hand, what are the things you're buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you," he explained. Srinivas believes that Perplexity's browser users will be fine with such tracking because the ads should be more relevant to them. "We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there," he said. The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

AI

Sydney Radio Station Secretly Used AI-Generated Host For 6 Months Without Disclosure 57

The Sydney-based CADA station secretly used an AI-generated host named "Thy" for its weekday shows over six months without disclosure. The Sydney Morning Herald reports: After initial questioning from Stephanie Coombes in The Carpet newsletter, it was revealed that the station used ElevenLabs -- a generative AI audio platform that transforms text into speech -- to create Thy, whose likeness and voice were cloned from a real employee in the ARN finance team. The Australian Communications and Media Authority said there were currently no specific restrictions on the use of AI in broadcast content, and no obligation to disclose its use.

An ARN spokesperson said the company was exploring how new technology could enhance the listener experience. "We've been trialling AI audio tools on CADA, using the voice of Thy, an ARN team member. This is a space being explored by broadcasters globally, and the trial has offered valuable insights." However, it has also "reinforced the power of real personalities in driving compelling content," the spokesperson added.

The Australian Financial Review reported that Workdays with Thy has been broadcast on CADA since November, and was reported to have reached at least 72,000 people in last month's ratings. Vice president of the Australian Association of Voice Actors, Teresa Lim, said CADA's failure to disclose its use of AI reinforces how necessary legislation around AI labelling has become. "AI can be such a powerful and positive tool in broadcasting if there are correct safeguards in place," she said. "Authenticity and truth are so important for broadcast media. The public deserves to know what the source is of what's being broadcast ... We need to have these discussions now before AI becomes so advanced that it's too difficult to regulate."

Businesses

You'll Soon Manage a Team of AI Agents, Says Microsoft's Work Trend Report (zdnet.com) 56

ZipNada shares a report from ZDNet: Microsoft's latest research identifies a new type of organization known as the Frontier Firm, where on-demand intelligence requirements are managed by hybrid teams of AI agents and humans. The report identified real productivity gains from implementing AI into organizations, with one of the biggest being filling the capacity gap -- as many as 80% of the global workforce, both employees and leaders, report having too much work to do, but not enough time or energy to do it. ... According to the report, business leaders need to separate knowledge workers from knowledge work, acknowledging that humans who can complete higher-level tasks, such as creativity and judgment, should not be stuck answering emails. Rather, in the same way working professionals say they send emails or create pivot tables, soon they will be able to say they create and manage agents -- and Frontier Firms are showing the potential of this approach. ... "Everyone will need to manage agents," said Cambron. "I think it's exciting to me to think that, you know, with agents, every early-career person will be able to experience management from day one, from their first job."

Windows

Microsoft Brings Native PyTorch Arm Support To Windows Devices (neowin.net) 3

Microsoft has announced native PyTorch support for Windows on Arm devices with the release of PyTorch 2.7, making it significantly easier for developers to build and run machine learning models directly on Arm-powered Windows machines. This eliminates the need for manual compilation and opens up performance gains for AI tasks like image classification, NLP, and generative AI. Neowin reports: With the release of PyTorch 2.7, native Arm builds for Windows on Arm are now readily available for Python 3.12. This means developers can simply install PyTorch using a standard package manager like pip.

According to Microsoft: "This unlocks the potential to leverage the full performance of Arm64 architecture on Windows devices, like Copilot+ PCs, for machine learning experimentation, providing a robust platform for developers and researchers to innovate and refine their models."

Apple

Apple To Strip Secret Robotics Unit From AI Chief Weeks After Moving Siri (bloomberg.com) 8

An anonymous reader shares a report: Apple will remove its secret robotics unit from the command of its artificial intelligence chief, the latest shake-up in response to the company's AI struggles. Apple plans to relocate the robotics team from John Giannandrea's AI organization to the hardware division later this month, according to people with knowledge of the move.

That will place it under Senior Vice President John Ternus, who oversees hardware engineering, said the people, who asked not to be identified because the change isn't public. The pending shift will mark the second major project to be removed from Giannandrea in the past month: The company stripped the flailing Siri voice assistant from his purview in March.

Google

Google AI Fabricates Explanations For Nonexistent Idioms (wired.com) 99

Google's search AI is confidently generating explanations for nonexistent idioms, once again revealing fundamental flaws in large language models. Users discovered that entering any made-up phrase plus "meaning" triggers AI Overviews that present fabricated etymologies with unwarranted authority.

When queried about phrases like "a loose dog won't surf," Google's system produces detailed, plausible-sounding explanations rather than acknowledging these expressions don't exist. The system occasionally includes reference links, further enhancing the false impression of legitimacy.

Computer scientist Ziang Xiao from Johns Hopkins University attributes this behavior to two key LLM characteristics: prediction-based text generation and people-pleasing tendencies. "The prediction of the next word is based on its vast training data," Xiao explained. "However, in many cases, the next coherent word does not lead us to the right answer."

Programming

AI Tackles Aging COBOL Systems as Legacy Code Expertise Dwindles 76

US government agencies and Fortune 500 companies are turning to AI to modernize mission-critical systems built on COBOL, a programming language dating back to the late 1950s. The US Social Security Administration plans a three-year, $1 billion AI-assisted upgrade of its legacy COBOL codebase [alternative source], according to Bloomberg.

Treasury Secretary Scott Bessent has repeatedly stressed the need to overhaul government systems running on COBOL. As experienced programmers retire, organizations face growing challenges maintaining these systems that power everything from banking applications to pension disbursements. Engineers now use tools like ChatGPT and IBM's watsonX to interpret COBOL code, create documentation, and translate it to modern languages.
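The translation work described above can be illustrated with a toy example. The COBOL paragraph below (in the comments) is invented for illustration -- it is not from any agency's actual codebase -- but it shows the shape of the task these tools are being asked to do: carry arithmetic business logic across languages without changing its behavior.

```python
# Hypothetical illustration of COBOL-to-Python translation. The COBOL
# paragraph in these comments is invented for this example:
#
#   COMPUTE-BENEFIT.
#       MULTIPLY MONTHLY-WAGE BY WORK-YEARS GIVING WAGE-BASE.
#       MULTIPLY WAGE-BASE BY 0.015 GIVING ANNUAL-BENEFIT.

def compute_benefit(monthly_wage: float, work_years: int) -> float:
    """Python equivalent of the COMPUTE-BENEFIT paragraph above."""
    wage_base = monthly_wage * work_years
    annual_benefit = wage_base * 0.015
    return annual_benefit
```

The mechanical part is easy; the hard part in real migrations is that decades of undocumented edge cases live in code like this, which is why the tools are also used to generate documentation before any rewrite.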

AI

AI Compute Costs Drive Shift To Usage-Based Software Pricing (businessinsider.com) 25

The software-as-a-service industry is undergoing a fundamental transformation, abandoning the decades-old "per seat" licensing model in favor of usage-based pricing structures. This shift, Business Insider reports, is primarily driven by the astronomical compute costs associated with new "reasoning" AI models that power modern enterprise software.

Unlike traditional generative AI, these reasoning models execute multiple computational loops to check their work -- a process called inference-time compute -- dramatically increasing token usage and operational expenses. OpenAI's o3-high model reportedly consumes 1,000 times more tokens than its predecessor, with a single benchmark response costing approximately $3,500, according to Barclays.
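A back-of-the-envelope sketch makes the pricing pressure concrete. The per-token price below is hypothetical; only the 1,000x multiplier comes from the Barclays figure quoted above.

```python
# Why inference-time compute pushes vendors toward usage-based pricing.
# PRICE_PER_1K_TOKENS is a hypothetical figure for illustration; the
# 1,000x reasoning multiplier is the Barclays estimate cited above.

PRICE_PER_1K_TOKENS = 0.01  # dollars, hypothetical

def request_cost(tokens: int, reasoning_multiplier: int = 1) -> float:
    """Cost of one request; reasoning models multiply token usage."""
    return tokens * reasoning_multiplier * PRICE_PER_1K_TOKENS / 1000

flat = request_cost(2_000)                                   # conventional model
reasoning = request_cost(2_000, reasoning_multiplier=1_000)  # reasoning model
# The same prompt costs 1,000x more to serve -- a spread that a flat
# per-seat fee cannot absorb, hence per-usage meters.
```

Under these assumptions a $0.02 request becomes a $20 request, which is the gap ServiceNow's "some kind of meter" is meant to cover.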

Companies including Bolt.new, Vercel, and Monday.com have already implemented usage-based or hybrid pricing models that tie costs directly to AI resource consumption. ServiceNow maintains primarily seat-based pricing but has added usage meters for extreme cases. "When it goes beyond what we can credibly afford, we have to have some kind of meter," ServiceNow CEO Bill McDermott said, while emphasizing that customers "still want seat-based predictability."

Earth

Even the US Government Says AI Requires Massive Amounts of Water (404media.co) 40

A Government Accountability Office report released this week reveals generative AI systems consume staggering amounts of water, with 250 million daily queries requiring over 1.1 million gallons -- all while companies provide minimal transparency about resource usage. The 47-page analysis [PDF] found that cooling data centers -- which demand between 100 and 1,000 megawatts of power -- constitutes 40% of their energy consumption, a figure expected to rise as global temperatures increase.
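The headline figures imply a per-query number worth making explicit. Using only the article's own numbers (actual consumption varies widely by data center and cooling design):

```python
# Per-query water use implied by the GAO figures quoted above.
daily_queries = 250_000_000
daily_gallons = 1_100_000

gallons_per_query = daily_gallons / daily_queries      # 0.0044 gallons
milliliters_per_query = gallons_per_query * 3785.41    # 1 US gallon ≈ 3785.41 mL
# Roughly 0.0044 gallons -- about 17 mL, or a large sip -- per query,
# which only becomes staggering at hundreds of millions of queries a day.
```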

Water usage varies dramatically by location, with geography significantly affecting both water requirements and carbon emissions. Meta's Llama 3.1 405B model has generated 8,930 metric tons of carbon, compared to Google's Gemma2 at 1,247.61 metric tons and OpenAI's GPT3 at 552 metric tons. The report confirms generative AI searches cost approximately ten times more than standard keyword searches. The GAO also pointed to persistent transparency problems across the industry, noting these systems remain "black boxes" even to their designers.

Education

Draft Executive Order Outlines Plan To Integrate AI Into K-12 Schools (washingtonpost.com) 115

A draft executive order from the Trump administration proposes integrating AI into K-12 education by directing federal agencies to promote AI literacy, train teachers, and establish public-private partnerships. "The draft is marked 'predecisional' and could be subject to change before it is signed, or it could be abandoned," notes the Washington Post. From the report: Titled "Advancing artificial intelligence education for American youth," the draft order would establish a White House task force on AI education that would be chaired by Michael Kratsios, director of the Office of Science and Technology Policy, and would include the secretaries of education, agriculture, labor and energy, as well as Trump's special adviser for AI and cryptocurrency, David Sacks. The draft order would instruct federal agencies to seek public-private partnerships with industry, academia and nonprofit groups in efforts to teach students "foundational AI literacy and critical thinking skills."

The task force should look for existing federal funding such as grants that could be used for AI programs, and agencies should prioritize spending on AI education, according to the draft order. It would also instruct Education Secretary Linda McMahon to prioritize federal grant funding for training teachers on how to use AI, including for administrative tasks and teacher training and evaluation. All educators should undergo professional development to integrate AI into all subject areas, the draft order says. It would also establish a "Presidential AI Challenge" -- a competition for students and educators to demonstrate their AI skills -- and instruct Labor Secretary Lori Chavez-DeRemer to develop registered apprenticeships in AI-related occupations. The focus is on K-12 education, but the draft order says, "Our Nation must also make resources available for lifelong learners to develop new skills for a changing workforce."

Google

Google Gemini Has 350 Million Monthly Users, Reveals Court Hearing 30

Google revealed in court that its Gemini AI chatbot reached 350 million monthly active users worldwide as of March 2025 -- up from 9 million daily users in October 2024. TechCrunch reports: Usage of Google's AI offerings has exploded in the last year. Gemini had just 9 million daily active users in October 2024, but last month, the company reportedly logged 35 million daily active users, according to its data. Gemini still lags behind the industry's most popular AI tools, however.

Google estimates that ChatGPT had roughly 600 million monthly active users in March, according to the company's data shown in court. That puts ChatGPT on a similar user base to Meta AI, which CEO Mark Zuckerberg said in September was nearing 500 million monthly users.

Role Playing (Games)

D&D Updates Core Rules, Sticks With CC License (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: Wizards of the Coast has released the System Reference Document, the heart of the three core rule books that constitute Dungeons & Dragons' 2024 gameplay, under a Creative Commons license. This means the company cannot alter the deal further, like it almost did in early 2023, leading to considerable pushback and, eventually, a retreat. It was a long quest, but the lawful good party has earned some long-term rewards, including a new, similarly licensed reference book. [...] Version 5.2 of the SRD, all 360-plus pages of it, has now been released under the same Creative Commons license. The major change is that it includes more 2024 5th edition (i.e., D&D One) rules and content, while version 5.1 focused on 2014 rules. Legally, you can now design and publish campaigns under the 2024 5th edition rule set. More importantly, more aspects of the newest D&D rule books are available under a free license:

- "Rhythm of Play" and "Exploration" documentation
- More character origins and backgrounds, including criminal, sage, soldier, and the goliath and orc species.
- 16 feats, including archery, great weapon fighting, and seven boons
- Five bits of equipment, 20 spells, 15 magic items, and 17 monsters, including the hippopotamus

There are some aspects of D&D you still can't really touch without bumping up against copyrights. Certain monsters from the Monster Manual, like the Kraken, are in the public domain, but their specific stats in the D&D rulebook are copyrighted. Iconic creatures and species like the Beholder, Displacer Beast, Illithid, Githyanki, Yuan-Ti, and others remain the property of WotC (and thereby Hasbro). As a creator, you'll still need to do some History (or is it Arcana?) checks before you publish and sell.

AI

Meta Rolls Out Live Translations To All Ray-Ban Smart Glasses Users 13

Meta has expanded both the feature set and availability of its Ray-Ban smart glasses. Notable updates include live translation with offline support through downloadable language packs, the ability to send messages and make calls via Instagram, and conversations with Meta AI based on real-time visual context. The Verge reports: Live translation was first teased at Meta Connect 2024 last October, and saw a limited rollout through Meta's Early Access Program in select countries last December. Starting today it's getting a wider rollout to all the markets where the Ray-Ban Meta smart glasses are available. You can hold a conversation with someone who speaks English, French, Italian, or Spanish, and hear a real-time translation through the smart glasses in your preferred language. If you download a language pack in advance, you can use the live translations feature without Wi-Fi or access to a cellular network, making it more convenient to use while traveling abroad.

Meta also highlighted a few other features that are still en route or getting an expanded release. Live AI, which allows the Meta AI smart assistant to continuously see what you do for more natural conversations, is now "coming soon to general availability in the US and Canada." The ability to "send and receive direct messages, photos, audio calls, and video calls from Instagram on your glasses," similar to functionality already available through WhatsApp, Messenger, and iOS and Android's native messaging apps, is coming soon as well. Access to music apps like Spotify, Amazon Music, Shazam, and Apple Music is starting to expand beyond the US and Canada, Meta says. However, asking Meta AI to play music, or for more information about what you're listening to, will still only be available to those whose "default language is set to English."
