
JPMorgan Engineers' Efficiency Jumps as Much as 20% From Using Coding Assistant (reuters.com)

Tens of thousands of JPMorgan Chase software engineers increased their productivity 10% to 20% by using a coding assistant tool developed by the bank, its global chief information officer Lori Beer said. From a report: The gains present "a great opportunity" for the lender to assign its engineers to other projects, Beer told Reuters ahead of DevUp, an internal conference hosted by JPMorgan, bringing together its top engineers in India this year. The largest lender in the U.S. had a technology budget of $17 billion for 2024. Its tech workforce of 63,000 employees, with a third of them based in India, represents about 21% of its global headcount. The efficiency gains from the coding assistant will also allow JPMorgan's engineers to devote more time to high-value projects focusing on artificial intelligence and data, Beer said.


Comments:
  • A 10-20% increase doesn't mean much when their productivity is only 1% to begin with. With that kind of budget, you'd think they would have decent software, but they don't.

    • Past a certain point, money is no longer a mark of quality, and throwing more of it at a problem only tends to attract the sort of people who are skilled at spending it. I wouldn't believe anything they have to say anyway. If what they had was actually valuable, they'd be trying to keep it a secret instead of blabbing about it. They're trying to sell you something.
    • I have heard from people who work there that the bank is super cheap and that most of its problems stem from that. Possibly AI is achieving results because when it attempts seppuku they just restart the machine, as opposed to having to hire and train a fresh body.

    • About 20 years ago, I kept track of how many lines of code I wrote at work. When I switched employers, I went from a Linux environment to a Windows environment. In terms of productivity, I calculated I could write code four to six times faster on Linux than Windows. Yet I don't hear of employers jumping on the bandwagon to switch to Linux.

      AI is nice as a developer tool, but anyone who believes it's going to eliminate engineers hasn't seen the types of mistakes it makes. It solves "toy" problems well

      • I don't want to take away from your overall point, which still stands (at least for now; anyone who thinks it will continue to stand is digging their own career's grave).
        But I was quite surprised it fucked up the insertion sort in S/360 assembler, since it's a pretty trivial task.

        So, for shits and giggles, I asked Gemma 3 27B FP16 to give it a go. This isn't a coding-trained LLM, so I didn't expect its performance to be that great for this job.
        However, it aced it.

        I also found your driver code example unlikel
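
        For reference, the task being tested here really is small. A minimal insertion sort sketch in C++ (standing in for the S/360 assembler actually requested; names and test data are illustrative, not from the commenter's session):

            #include <cstdio>
            #include <vector>

            // Insertion sort: grow a sorted prefix, inserting each new element
            // into place by shifting larger elements one slot to the right.
            void insertion_sort(std::vector<int>& a) {
                for (size_t i = 1; i < a.size(); ++i) {
                    int key = a[i];
                    size_t j = i;
                    while (j > 0 && a[j - 1] > key) {
                        a[j] = a[j - 1];
                        --j;
                    }
                    a[j] = key;
                }
            }

            int main() {
                std::vector<int> v{5, 2, 4, 6, 1, 3};
                insertion_sort(v);
                for (int x : v) std::printf("%d ", x);  // prints: 1 2 3 4 5 6
                std::printf("\n");
            }

        Any competent model (or junior developer) should manage this; the interesting failures show up once the problem stops being a textbook exercise.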
  • "Decalcify calcium ducts?"

    Well, give me a "Y."

    Give me a-- Hey.

    All I have to type is "Y."

    Hey, Miss "Doesn't Find Me Attractive Sexually Anymore", I just tripled my productivity. [youtu.be]

  • Generated SLOC? Or integrated and tested SLOC? I wonder how much time it takes to inspect and verify AI-produced code...

    (I remember a project where I designed using the Ada generics model, similar to C++ templates. Our compiler didn't support generics/templates, so for each of roughly 10 instantiations of the generic, I "generated" the resulting code from the initial, tested version using a rather complex EMACS macro. But when it came to performance evaluation time, I wrote, "Generated 4k SLOC over a
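
    For readers who never used Ada generics: the pattern is essentially what C++ templates do today. A minimal sketch (a hypothetical bounded stack, not the original project's code):

        #include <cstdio>

        // A generic bounded stack: one template, many instantiations.
        // In Ada this would be a generic package; the EMACS-macro trick
        // described above amounts to expanding each instantiation by hand.
        template <typename T, int Capacity>
        class Stack {
            T data[Capacity];
            int top = 0;
        public:
            bool push(T v) { return top < Capacity && (data[top++] = v, true); }
            bool pop(T& v) { return top > 0 && (v = data[--top], true); }
        };

        int main() {
            Stack<int, 16> ints;       // each distinct instantiation expands
            Stack<double, 64> doubles; // into its own code, much like the
                                       // ~10 hand-expanded copies above
            ints.push(42);
            int v = 0;
            if (ints.pop(v)) std::printf("%d\n", v);  // prints: 42
        }

    Which raises exactly the question posed above: if a compiler (or a macro) expands the generic ten times, do those lines count as written, generated, or both?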

    • Among business types there is a widespread belief that anything can be measured accurately, even developer productivity. Much has been written debunking this in very clear detail (for example: taking longer to code but producing far fewer bugs is actually higher productivity than churning out lots of code with so many bugs that fixing them costs the company real time and money).

      But despite this debunking, they continue to believe, and continue to do sketchy things to come up w

      • by david.emery ( 127135 ) on Friday March 14, 2025 @05:13PM (#65234341)

        Two anecdotes on that:

        1. A friend led a major refactoring project (for a compiler). He eliminated 20k SLOC from that compiler. I told him, "By most productivity models, you have so much NEGATIVE PRODUCTIVITY you'll probably owe them the next 2 years."

        2. As part of a formal method experiment, another guy and I implemented most of the TCP protocol. To handle the timeout provisions of the protocol, for a week I studied the problem, talked to a friend, and wrote 5 lines of Ada95 (using the Asynchronous Transfer of Control mechanism), so 1 SLOC/day productivity. The other guy (who had experience coding protocols) wrote 200 lines of C in 2 weeks, so 40 SLOC/day. (And his code had a bug in it.) Who was more productive?
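
        Ada95's asynchronous transfer of control ("select ... then abort") has no exact C++ counterpart, but a rough sketch of the same timeout idea using std::future (an analogy, not the experiment's actual code):

            #include <chrono>
            #include <cstdio>
            #include <future>
            #include <thread>

            // Hypothetical stand-in for a blocking protocol step,
            // e.g. waiting for a TCP acknowledgement.
            int await_ack() {
                std::this_thread::sleep_for(std::chrono::seconds(5));
                return 1;
            }

            int main() {
                // Run the blocking step, but give up after 2 seconds:
                // roughly what Ada's "select ... then abort" expresses
                // in a handful of source lines.
                auto fut = std::async(std::launch::async, await_ack);
                if (fut.wait_for(std::chrono::seconds(2)) ==
                        std::future_status::timeout) {
                    std::printf("timed out; retransmit\n");
                } else {
                    std::printf("ack: %d\n", fut.get());
                }
                // Caveat: unlike true ATC, the abandoned task is not
                // aborted here; the future's destructor waits for it.
            }

        The point of the anecdote holds either way: the five careful lines that handle the timeout correctly score far worse on a SLOC-per-day metric than two hundred buggy ones.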

    • I wonder how much time it takes to inspect and verify AI-produced code...

      Well, it takes just as much time to review AI-produced code as it does human-written code. But unlike a human, the AI will continue to make the same formatting and coding-standards mistakes over and over again.

      AI will not ever develop a deep understanding of your codebase, because if you're letting the AI company train on your proprietary code, your competitor has your code. So it will perpetually function at the level of a ne

      • This is completely untrue.
        It betrays a basic lack of knowledge of how LLMs work.
        Large-context-window coding LLMs are very good at ingesting code and following any standards you put into the prompt.
        With a 128K, or even 1M, context window, you can give it a shit-ton of context to work with, and it will perform excellently.

        You seem to have very strong opinions for someone who doesn't seem to have actually tried to make this technology work.
    • Why, productivity metrics, of course! You know, those data points that indicate how productive you are?

  • by rsilvergun ( 571051 ) on Friday March 14, 2025 @04:54PM (#65234289)
    In the old days, when we had competition and at least pretended antitrust laws existed, companies would compete, expand, and work hard to make new products, which required them to employ lots of people.

    That was expensive, and it didn't make for good Q1-through-Q4 numbers, so CEOs decided to do away with it. They spent 50 years undermining our fundamental institutions so that they wouldn't have to compete anymore.

    As a result they can fire large swaths of their employees, and they don't have to worry about those employees going off and starting competing companies anymore. They can use all sorts of anti-competitive tactics that are on paper illegal, but it's not a law if it's not enforced. So any startup you try to get going is either going to get hammered down, or, if you're really, really lucky, you might win the lottery and get a buyout.

    And of course this is absolutely terrible for consumers, because we get runaway inflation. Competition, after all, is what really keeps prices down; that and scientific advancements, but we've given up on science too. I mean, we literally have the head of Health and Human Services telling us to fight measles with quackery. I forget which quackery in particular; I think it was colloidal silver, but feel free to Google it and correct me.
    • I know, I know, you think we'll all soon be fired and need to be fed by government soup kitchens.

      Real companies constrain the pace of their software development because it's expensive. In most of them, more productivity means they get more done, *not* layoffs. Yeah, some companies will lay people off. Others will scoop up the laid-off developers, if they're any good.

  • I would really like to see the results broken down by experience level. For juniors? Sure, and that's been replicated in many studies. I want to see how their tools work for developers with 5+ years experience. Or are they just firing every experienced dev and replacing them with bootcamp grads armed with an AI coding tool, since they're so much more productive now?
    • As with the DOGE firing of most "provisional" new employees in the Civil Service: where will experience come from if you don't hire beginners and train them?

  • The company that couldn't provide enough desks for its workers (despite an RTO mandate) wants us to believe they know the secret to efficiency?

  • JP Morgan Lays Off 20% of its Engineers
  • Or maybe the productivity increase is a result of the mandatory return-to-office edict that went into effect March 1st. Perhaps people are actually working one job, not two at the same time, or not cleaning or taking care of the kids on company time.
