Topics: AI, Programming, IT

Entry-Level Tech Workers Confront an AI-Fueled Jobpocalypse (restofworld.org) 76

AI "has gutted entry-level roles in the tech industry," reports Rest of World.

One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers... there's a sense of panic on the campus." Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles. Tasks once assigned to fresh graduates, such as debugging, testing, and routine software maintenance, are now increasingly automated. Over the last three years, the number of fresh graduates hired by big tech companies globally has declined by more than 50%, according to a report published by SignalFire, a San Francisco-based venture capital firm. Even though hiring rebounded slightly in 2024, only 7% of new hires were recent graduates. As many as 37% of managers said they'd rather use AI than hire a Gen Z employee...

Indian IT services companies have reduced entry-level roles by 20%-25% thanks to automation and AI, consulting firm EY said in a report last month. Job platforms like LinkedIn, Indeed, and Eures noted a 35% decline in junior tech positions across major EU countries during 2024...

"Five years ago, there was a real war for [coders and developers]. There was bidding to hire," and 90% of the hires were for off-the-shelf technical roles, or positions that utilize ready-made technology products rather than requiring in-house development, said Vahid Haghzare, director at IT hiring firm Silicon Valley Associates Recruitment in Dubai. Since the rise of AI, "it has dropped dramatically," he said. "I don't even think it's touching 5%. It's almost completely vanished." The company headhunts workers from multiple countries including China, Singapore, and the U.K... The current system, where a student commits three to five years to learn computer science and then looks for a job, is "not sustainable," Haghzare said. Students are "falling down a hole, and they don't know how to get out of it."

Comments Filter:
  • When everyone was talking about AI gutting jobs, is it safe to say that future is now here? The only answer is that if you have a CS degree you're probably smart enough to get a job pretty much anywhere else, but it doesn't look good for CS lecturers when the ROI suddenly turns to crap.
    • Re:As predicted (Score:5, Interesting)

      by gweihir ( 88907 ) on Sunday December 14, 2025 @04:27PM (#65858149)

      When everyone was talking about AI gutting jobs, is it safe to say that future is now here?

      I don't think so. LLM-type AI with cheap mainstream availability may well be a very temporary thing. First, they still do not have any business model that would even remotely justify the expenses. Second, LLMs cannot perform at a professional level and cannot perform any task that requires insight. Third, the training-data piracy may well kill the whole thing. Hence some niche applications for small special-purpose LLMs may remain, but that will likely be it.

      On the other hand, some types of jobs may still get "gutted", even with that limited usefulness.

      • Re: As predicted (Score:4, Insightful)

        by jlowery ( 47102 ) on Sunday December 14, 2025 @04:36PM (#65858167)

        I would say AI surprises me with its insight sometimes. Ask it for suggestions and you will often get good ones, with tradeoffs, pros and cons.

        What it can't do is: requirements gathering, prioritizing, political wrangling, and managing expectations or scheduling.

        • Re: As predicted (Score:5, Informative)

          by gweihir ( 88907 ) on Sunday December 14, 2025 @04:51PM (#65858197)

          They may surprise you, but these are not insights the LLM had. These are insights other people had and the LLM essentially only found them by insightless statistical pattern matching.

          • ...and the LLM essentially only found them by insightless statistical pattern matching.

            Absolutely correct. I have an example from just today. I told an AI that I couldn't launch Steam from one of my son's Linux Mint desktops when I double-clicked on the desktop icon. It had several possibilities for me to try, with each suggestion having a footnote pointing to where it got the potential answer. That was kind of cool, in a bizarre kind of search-engine way.

            After reading a bunch of them, most of which involved doing things that had zero chance of working (but which the AI picked up from v

            • Just to clarify.

              Did the model actually have enough information to possibly answer the question? If it wasn't familiar with your Mint configuration, and you were using something you rolled on your own, it had no real chance of solving your problem. Second, were you using a paid reasoning model, or a budget light model? In my experience, even today's LLMs are terrible, but the paid reasoning models fare pretty well. Your story just seems odd because in my experience, this is the sort of task reasoners excel a

              • by gweihir ( 88907 )

                If it were AGI, it clearly had enough information. Since it is not AGI and cannot think at all, it did not have enough information and it would have needed the actual solution as input.

              • Did the model actually have enough information to possibly answer the question?

                By definition, the inspiration struck them while they were looking at the linked help articles the AI gave. So yes, the AI did indeed have the info needed to solve the problem. (Because the human figured it out from what the AI offered them.)

                If it wasn't familiar with

                Irrelevant excuses. The LLM failed to come up with the answer. Unless you think it's reasonable for the human to have memorized all of those statistical data points that the LLM has.

                Second, were you using a paid reasoning model, or a budget light model?

                Because only through your tithe can ye find the truth!.....Er, were we talking about troubl

            • by gweihir ( 88907 )

              Statistical matching, which is all AI does, is 100% incapable of thinking outside the math that drives it. More accurately, AI is 100% incapable of thinking. End of story.

              Indeed. But, you know, I am beginning to think that most people (outside of the roughly 10-15% independent thinkers and the additional 5% or so that can be convinced using rational arguments) are actually incapable of rational thinking or actively choose not to do it. People that do not understand the difference between an implication and a correlation. People who think that someone doing one thing makes them responsible for something entirely different with no causal chain present, but they have this fuzzy

              • I'm going to try to temper this by pointing out that whatever percentage the folks who expect other people to do their thinking is... it's going to appear inflated because part of their mentality makes them far more likely to communicate with entitlement than someone who has already worked it out themselves, or someone who already knew it was a stupid idea and wouldn't have tried

                or someone who never had the free time to engage in that kind of frivolity.

                So hopefully it's just that the independent thinkers ar

                • by gweihir ( 88907 )

                  The thing is, these numbers are apparently reliable and quite old. They are from an interview (which I cannot find anymore) I read about 10 years ago with a very respected sociologist. And the 20% (total) that can be convinced by rational argument are from another interview with another very respected sociologist. I have been unable to find literature references, probably because they are old enough to not be online. ChatGPT has partially confirmed these numbers (apparently, when it does not matter the number of

                  • Given the amount of folks who claim to work as programmers on this site who can't actually manage the logic of a simple if statement... I'm just not convinced that the boundaries are discrete on either side of the curve. I'm sure they're telling the truth for the most part, maybe they produce a bunch of process bugs that someone has to fix for them... but maybe they actually can apply reasoning in that context that doesn't get carried to their other thought processes, simply due to lack of prompting.
                    I could

            • Did you report back and inform the LLM of the solution, so it could improve its answers?

              Also, does /. permit LLMs to drag the /. swamp and scrape out these nuggets?

            You could say that about most insights people have. Business will not care: as long as it saves them some money, you are out of a job, just like they didn't care when they sent jobs off to India. If it saves them money, it's irrelevant, and any business that doesn't just loses and goes out of business.

            I think AI will also be able to do political wrangling. I am sure I read something saying it's better at convincing people than people are, https://www.technologyreview.c... [technologyreview.com]
            and managing expectations, requirem

          • by AmiMoJo ( 196126 )

            Which is still useful. I was trying to fix an issue with a web app I made (I'm not a web developer, this is just a hobby project) and a bit of googling didn't turn up the answer. ChatGPT knew what the solution was from my description. So as a glorified search engine, it did better than Google and DDG.

            • by gweihir ( 88907 )

              I do agree on that one. I have, so far, found two instances where Internet-trained LLMs are useful: Search when you only can describe the issue or describe a concept and looking for cooking recipes by description. Both instances of "better search". For the effort invested, those results are utterly pathetic.

        • I would say AI surprises me with its insight sometimes. Ask it for suggestions and you will often get good ones, with tradeoffs, pros and cons.

          What it can't do is: requirements gathering, prioritizing, political wrangling, and managing expectations or scheduling.

          I could say the same about a rubber duck

      • Re:As predicted (Score:4, Interesting)

        by HiThere ( 15173 ) <charleshixsn.earthlink@net> on Sunday December 14, 2025 @04:39PM (#65858173)

        It may be temporary (I doubt it), but it's not "very temporary" as the same thing has been reported for months with pretty steadily increasing urgency.

        OTOH, the AIs clearly aren't good enough to replace programmers, or probably even coders. So what's currently happening is probably jobs being redesigned to use an AI where it makes sense. Expect LOTS of failures in this redesign, but it will be the successes that shape the future...unless the AIs get a LOT better. (Currently they don't understand the problem they're trying to answer.)

      • It's not a product; it's capital. It's not something they sell, even to businesses; it's something you own and use for your own purposes. AKA capital.

        AI exists to solve the problem of paying wages. Because of that it doesn't need to be profitable it just needs to serve the needs of the people who own it, billionaires.

        I think this is a difficult idea for people to wrap their heads around because what's happening here is capitalism is going away and since we all grew up being told that capitalism is immu
      • by troff ( 529250 )

        > I don't think so. LLM-type AI [...] First, they still do not have

        You're not answering the question "When everyone was talking about AI gutting jobs, is it safe to say that future is now here?".

        You're answering the question "Is it a good idea, is it feasible to let AI gut jobs?".

        Your answer does not take into account that the people doing the job-gutting may not have the understanding-of-function you have.

        The future is here.

      • There is also the fact that we very recently experienced global hyperinflation, followed by extremely high interest rates to tame it. The natural (and intended) effect was mass downsizing and layoffs, which we did, in fact see. So we are sitting in the valley of that effect right now.

        It's popular to put "AI" into every headline, but there are other things simultaneously going on in the world that are significant contributors to the current low demand for workers in various industries, including tech.

        • Hyperinflation is usually defined as something like 100% a year. It lacks a formal definition - a search throws up lots of suggestions - but globally we got nowhere close to that.

          Extremely high interest rates? You're obviously young; in the early 1990s interest rates were of the order of 10%; this time round they didn't get to 6%.

          https://www.macrotrends.net/20... [macrotrends.net]

        • by gweihir ( 88907 )

          I do agree, there are other effects at work. LLMs are far too incapable to have caused these effects all by themselves. They are clearly being used as pretext in many cases.

          But the world does not run on compassion.

          That is not quite true. Quite a bit of the world does run at least in part on compassion. The uncivilized part certainly does not.

      • by sinij ( 911942 )

        On the other hand, some types of jobs may still get "gutted", even with that limited usefulness.

        If your job is a task, then AI can take it away. Otherwise, you are safe.

        • by gweihir ( 88907 )

          While this is a very fuzzy statement and requires qualification, it is probably close to what is happening. Whether it will actually work out that way remains to be seen. As soon as the task requires some mental flexibility or high reliability, LLM-type AI is already out, even if people may not realize that.

    • If you think being a CS RCG is bad, try being a 55 year old coder who spent the last 30 years honing the craft, only to be laid off into a market where those skills are just no longer needed at all.

      • by Temkin ( 112574 )

        try being a 55 year old coder who spent the last 30 years honing the craft, only to be laid off into a market where those skills are just no longer needed at all.

        I don't have to think about it, I'm 15 months into it... :-/

        T

  • by Anonymous Coward

    I thought AI was going to create more jobs? Why, just the other day there was an article on this site which claimed that using AI in x-ray imaging grew the demand for radiologists.

    • In some isolated cases like that it might, and we've also had someone comment that the AI push is actually a free-labor ponzi scheme, since it will eventually take warehouses full of low-paid sweatshop workers to help wrangle the AI responses into something presentable for any large amount of labor where the quality of the work actually directly influences customer perceptions, so in the long run overall it might increase total jobs available albeit at a lower average pay for them, but those sweet high-paid

    • The use of AI in x-ray imaging did grow the demand for radiologists. That's accurate. That doesn't mean that AI will necessarily grow jobs in general, or even if it does, that AI creating more jobs will not cause enough disruption that it will take time to shake out. It is possible, for example, that a lot of jobs which are being lost to AI now will be temporary as corporations realize more the limits of the technology. It is also possible that what we're seeing now in terms of job losses will become more extreme as
      • We really need to start making distinctions in what type of AI is being discussed. I can't imagine computer vision is driving much of the reduced Indian work force here. It's going to be the LLM stuff.
    • You cannot have creative destruction if nothing is destroyed. AI is going to be like the Linotype: most of the typesetters lost their jobs when the newspapers switched over. Then the book and magazine publishers switched over and they hired many times more typesetters than the newspapers laid off.

    • ChatGPT was released *THREE* years ago. The technology is still so full of hot air nobody knows what's real and what's not. Give it a few more minutes, the jobs are still coming (and being eliminated) as happens with every new technology.

  • For criminal organizations and terrorists to hire tech expertise. Will be interesting to see how much damage this "optimization" does in the end and how much it will cost.

  • I don't buy this excuse. You can't count on it to debug anything, because it has no idea what is correct and what isn't. If you tell it that you want your gloves to have five fingers, maybe it will manage that, or maybe it will tell you that gloves have six fingers and make up a reference for that fact. Maybe it will apologize after making a run of two-fingered gloves, but that won't stop it from repeating the error.

  • Among the "Related links" appearing on this story's page: "New Junior Developers Can't Actually Code." [slashdot.org]

    No more fake-it-till-you-make-it eye-tee jerbs.

    Also, what will India do? There aren't going to be positions for the hordes of $60k/year visa slaves and their "masters" degrees. There won't even be work for the remote ones: the language models are just as good, if not better, at copypasta "consultant" work as the remote Indians.

  • But only because they hate to admit that business isn't good.

    All of their competitors have already used the AI excuse, so if they admit to what the real problem is, they'll stand out. Markets reward optimism, not honesty.

    https://apnews.com/article/ksh... [apnews.com]

  • Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles.

    So ... super cheap code monkeys are being replaced by LLMs?

    There really isn't any other way to read that.

  • by high_rolla ( 1068540 ) on Sunday December 14, 2025 @06:26PM (#65858331) Homepage

    Then this will inevitably lead to a shortage of experienced programmers in a decade or so's time. Then what?
    Or is the plan that by then the AI will be improved such that it can progressively replace experience as the experience disappears anyway?
    And once the experienced programmers are gone, who will create the material for the AIs to be trained upon?

    • The plan is that they are all suffering from "AI psychosis" and believe that AGI will make all of that moot. They are wrong, of course, but they have all the money.

    • Have you ever heard or read the expression “greed is blind”? You and I know that there is no way this can end well, but those who rule the world have been completely blinded by the fantasy of having an “absolutely servile and powerful servant” that would allow them to free themselves from dependence on other humans (guards, employees, consumers, etc.). And they will spend as many trillions as necessary to achieve this, even though it is actually impossible with what is being sold to
    • It's a *BUBBLE*. Bubbles blow up quickly, and then pop.

      Yes, AI is real, it is already solving real problems. But not as quickly as the believers hoped. And it comes with a whole new set of problems.

      Pendulums swing, and then they swing back. In 10 years, it won't still be swinging in the same direction as it is now.

    • Then this will inevitably lead to a shortage of experienced programmers in a decade or so's time. Then what? Or is the plan that by then the AI will be improved such that it can progressively replace experience as the experience disappears anyway? And once the experienced programmers are gone, who will create the material for the AI's to be trained upon?

      No one gave a shit about "rebuilding" the failures of the dot-bomb era. It just fell over and died. And nothing of non-value vaporware was lost.

      We won't have that luxury in another 5 years after Half-Ass AI manages to hypnotize enough CxOs that they start prematurely relying on AI.

      In summary, I feel there will be plenty of jobs come available for the much-needed clean-up efforts.

  • That is the only thing that will get the attention of oligarchs and politicians. Also, think about it: everything costs money, yet they expect you to produce these valuable kids for free, even while paying through the nose to raise them.
    • China has even started to tax birth control now. https://apnews.com/article/chi... [apnews.com]

      Sure, the 13% tax shouldn't really be that big of a deal, but it's definitely going to have some kind of effect. Be curious to see if more women start adopting the 4B movement. https://en.wikipedia.org/wiki/... [wikipedia.org]

    • OK I'm glad to oblige. Of course, I'm 59, so I wasn't really thinking about having more kids anyway.

    • by Tailhook ( 98486 )

      That is the only thing that will get the attention of oligarchs and politicians.

      Sure, it will get their attention. Then what? Apparently you think: "well then they'll change conditions to make parenting feasible." No, they won't. They'll just do what they've been doing: import 80 IQ third worlders, preferably under a visa regime that makes them compliant.

  • Is the problem here really AI or are there just too many people graduating with STEM degrees? Businesses and governments have been telling young people to get a STEM degree for years, schools expanded and created new CS programs, parents all over the world pushed kids into CS programs, and now there are too many programmers for the market to absorb. Maybe the issue here isn't AI. It's just nobody thinking about education beyond filling the immediate needs of very loud businesses.

  • In India (Score:1, Insightful)

    I won't be shedding any tears for the new grads from the Subcontinent.

    Go peddle your lies in another field.

  • Tariffs create uncertainty and hesitation to invest....especially since they're applied unpredictably. Nearly all programming jobs are investments in future growth. When the economy is in the shitter and the president of the world's largest economy is unreliable and unpredictable, it makes businesses and investors hold off on major investments. Even if you love the guy, if you can't predict the cost of materials for a 2 year project, it makes it very hard to find financing. The same with software...if you need 50 professionals and you don't know if 20 of them are going to get their green card revoked, it makes you nervous. Even without Trump's chaos, we have a LOT of economic headwinds: deglobalization, COVID aftermath, wars in Europe/Middle-East....as well as no real growth opportunity in the tech sector.

    We got really lucky in the past. As soon as one technology was introduced, another arrived shortly afterwards to give the illusion that endless growth is possible with innovation. However, we created multiple new markets back-to-back: In the late 90s, everyone had to get connected to the internet. In the early 2000s, all business processes had to be moved from client/server to web-based applications. Then the iPhone was introduced and now every business needed a mobile presence. Afterwards, we had economic expansion from the big data craze as well as some crypto jackassery....then we had ML and AI. OK...well now we don't have anything new and exciting for businesses to spend money on beyond LLMs, which aren't really providing the return on investment promised. Once we find the next useful business innovation, we'll see more familiar growth patterns. However, now...everything is on the web, architecture is largely web-scale, if a mobile app is needed, it's written already, if ML is useful, it has been applied...big data systems are now in production.

    I've said it here many times before, but historically, no business wants to just do the same amount of volume at slightly less cost. Nearly all of them want to crush their rivals and expand their market share. Wall Street loves growth MUCH more than cost savings. It's just bullshit to say we've stopped hiring because of AI. If AI was really helping, they'd keep the headcount the same and increase workload/volume....and lay everyone off much later, after they've grown as much as they can.

    Look, the economy is shitty; it's harder to predict costs than it has been in modern history, largely due to the stupidity going on in the US gov right now. A CEO can either say "we overestimated our demand and need in previous years and need to correct our headcount"....or "hey, we're going all in on the future...AI, baby!!!!!"
  • India needs to demolish like 90% of its housing and rebuild to modern standards. They also need to install an extensive sewage, water treatment, and drainage system. Finally, they need to have extensive cultural education on the importance of hygienic food handling (use utensils when preparing and serving food!). I am not racist by the way, this is objectively needed. Ironically, from what I could gather about India in say the 1600s, India had excellent hygienic practices and (for its time) sewage systems.

  • This article cites so many thinly-sliced statistics, making the apocalypse seem much worse than it is.

    One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers

    the number of fresh graduates hired by big tech companies globally has declined by more than 50%

    Indian IT services companies have reduced entry-level roles by 20%-25% thanks to automation and AI

    These qualifying phrases focus on the worst-hit segments of the market.

    Also...

    Five years ago, there was a real war for [coders and developers]

    Yes, I remember! Five years ago, there was a huge hiring frenzy. Comparing today's market to that is going to make even a healthy market look bad.

    Sure, hiring is down. But not anything close to apocalyptic.

  • As many as 37% of managers said they'd rather use AI than hire a Gen Z employee...

    Sadly, there is the actual problem. And you better believe it's a personal one.

    If you disagree, name another generation more shunned by professionals. Then we'll discuss the reality of why they feel that way.

  • Scam jobs being hurt and taken over by AI...also.
  • When companies replace people with AI, tax them higher. When they hire people, tax them less. It'll work.
    • I agree, but we want them to replace people and provide revenue for UBI, so don't go overboard on taxing the AI.
