Programming

Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI (techcrunch.com)

Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week.

Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal.


  • by haruchai ( 17472 ) on Friday February 13, 2026 @03:33PM (#65987190)

    and they won't get a Honkin' big severance either

    • and they won't get a Honkin' big severance either

      Some thought they were honkin' Bobo.

      Turns out Bobo was THE new hire.

    • by ArmoredDragon ( 3450605 ) on Friday February 13, 2026 @04:59PM (#65987386)

      Until they release a breaking bug on their morning commute, and some people on the bus suddenly lose their Spotify. Then said developer has no idea what went wrong, likely for hours or even days, while the AI keeps hallucinating fixes that don't work, as both it and the developers have no idea what they're actually doing.

      • It's fine, let Spotify show the world how it goes when you de-skill a team.

        It's a mostly finished product now anyway, this is probably more telling that no new client of theirs has demanded something that their cookie cutter team didn't have a template for already.

      • So nobody cares about bugs. You fix them and you move on. The only place where a software bug matters is the airlines, and that's only because there are specific laws with heavy fines left over from the days when only rich people could afford to fly.

        I suspect this is them pumping their stock, but then again there really isn't a hell of a lot to Spotify; it's literally just a streaming MP3 player. You learn to write one of those in a 102-level computer science course, for Pete's sake.
          • How would you like to lose all the money in your bank and investment accounts?

          Of course bugs matter. This has got to be your stupidest statement yet.

    • slopi slopi slopi slop.
      AI hallucinates all my code.
      What could possibly go wrong ?
      slopi slopi slopi slop.

    • Spotify is in Sweden. They don't fire people in Sweden.

      • by haruchai ( 17472 )

        "Spotify is in Sweden. They don't fire people in Sweden"
        Spotify reduced headcount by ~20% in 4 rounds of cuts made between 2023 and 2025

  • Please don't (Score:5, Insightful)

    by OrangeTide ( 124937 ) on Friday February 13, 2026 @03:37PM (#65987208) Homepage Journal

    Please don't work during your morning commute. Especially if you're the one driving.
    But almost as importantly, if your employer makes you come into the office then you should ONLY work while at the office. And they can go F themselves if they want you to work on your own time as well.

    • Re:Please don't (Score:5, Insightful)

      by 93 Escort Wagon ( 326346 ) on Friday February 13, 2026 @03:51PM (#65987242)

      Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack" before they have a chance to run through the changes with their own eyes.

      This doesn't pass the smell test.

      * I realize this may be true regardless

      • It's Spotify, it doesn't have to work correctly or even at all. I'll worry when I hear about one of my financial institutions doing this.

      • Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack"

        I didn't see the word "competent" in the article, so I'm guessing that explains their willingness to push this black-box code to prod. (And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)

        • (And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)

          Hey I think I used to work with one of those guys...

          Many years ago, I worked with a Linux admin who would do that sort of crap all the freaking time. I remember one year he rebuilt our mail server, using Slackware rather than our standard Red Hat because "he wanted to learn Slackware" (his words, after the fact). He threw it together, then powered it up on his way out the door for a two-week-long ski trip in another country.

          Oh, did I mention this was on December 23rd?

          Guess what happened, and who had to fix

        • Re:Please don't (Score:4, Insightful)

          by unrtst ( 777550 ) on Friday February 13, 2026 @05:54PM (#65987504)

          Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack"

          I didn't see the word "competent" in the article, so I'm guessing that explains their willingness to push this black-box code to prod. (And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)

          Pardon me... I didn't RTFA. In this context, does "pushed to Slack" actually mean the same as "push this black-box code to prod"?!?!

          The way I read TFS was that:
          * Dev uses phone to tell some LLM to do something.
          * LLM does the thing, if it can, and will send it down the build pipeline.
          * That can take a while ("during their morning commute"), which is actually a big negative - iteration requires lots of waiting for the computer to do things.
          * Later, the build may complete. If so, the dev can test out the new feature (I'd refer to it as a prototype, but to each their own).
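          For what it's worth, that flow can be sketched in a few lines. Everything here is invented for illustration (request_change, post_to_channel, the fake "patch"/"build" strings); Spotify's "Honk" internals are not public, and this only shows the async fire-and-forget shape of the loop:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def post_to_channel(msg: str) -> None:
    results.append(msg)  # stand-in for posting the build back to Slack

def request_change(prompt: str) -> None:
    # Dev sends a prompt from their phone; the request is just enqueued.
    jobs.put(prompt)

def worker() -> None:
    while True:
        prompt = jobs.get()
        patch = f"patch generated for: {prompt}"  # stand-in for the LLM call
        build = f"build of ({patch})"             # stand-in for the CI pipeline
        post_to_channel(build)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
request_change("fix crash on startup")
jobs.join()  # by the end of the commute, the result is waiting in the channel
```

          The dev's only real involvement is the prompt at the start and the review at the end; the latency lives in the middle, which is exactly the "lots of waiting" complaint above.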

          That's the same sort of thing I've been doing WITHOUT an LLM for decades! Working late, or early, or whenever... getting some big tasks lined up (ex. bunch of shell scripts in different terminals and such)... firing them off while I go take care of myself for a bit (sleep, eat, drive, errands, small tasks, email follow ups, etc..)... analyze the results after it's done (often finding something that was overlooked and the whole lot needs rerun after a one line bugfix).

          TBH, it's more of a testament to whoever set up their dev infrastructure such that this is feasible. The actual LLM involvement is mainly in the code generation/edits, and the rest is (I assume) the automated lint/test/build/deploy systems, likely with some well-defined LLM tool/MCP integration (which probably took a good deal of manual involvement).

          At a past employer, my team had a QA team assigned. Once they were actively involved in day to day processes, devs stopped testing their own code. It was kinda infuriating. They'd just check it in without even trying it themselves to see if it even looked close to right, allowing the QA process to do their preliminary testing. If your devs have already been abdicating those responsibilities, it's not a big leap to let an LLM throw garbage at the process - at least it'll try to write some tests and documentation first.

          • Devs shall not test the code by running it manually; CI shall! You learn to run some tests locally before pushing, to avoid waiting for the CI server and to be able to debug the test, but you rely on CI pipelines to do the actual testing. But actually opening "the program" and running it as a requirement before pushing??? Only if you do GUI programming and want to see what you have changed. A development process relying on all developers knowing what to test before merge is faulty. A developer knows the task given
            • by unrtst ( 777550 )

              I think we'd completely agree if we were on the same job. IE: GUI thing - dev should look at it. New backend function - dev should write a test for it. New API function - dev should ensure docs are right and write a test for it. And I'd go so far as to say that the newly written tests should be run and they should pass before the code gets handed off.

              An example of what I was referring to...
              Let's say you have a dev add a "nickname" field to a person info page. But they copy/paste a dropdown instead of a text

            • I test my tests manually, then include my tests with my change and let CI test the whole thing across the matrix of configurations.
              At my previous job, I had some tools to manually trigger the more extensive nightly tests for a change. That was nice when there were long-running performance tests that could regress because of my change. Nice to know before I submit to staging if the change is never going to make it onto the nightly integration. Saves a lot of CI runner time too when I'm a little proactive.

      • They didn't say a "competent" developer, they said "their best" developer. They could have some Dilbert-esque metric of "best", the developer who completes their task the fastest, who completes more tasks per time period, etc. Ignore the fact their code generates more followup tasks than anyone else due to omissions, bugs, incompleteness, etc. People will game the system. I have literally seen a coworker introduce a bug and not fix it. With a straight face he explained to me the bug (that they just created)
    • 100% agree.

      However, I don't have a commute (WFH) and I don't really work even when I'm 'working', so what I'd appreciate would be some tips on gracefully reducing my productivity even further.

      • I don't think it's possible to reduce your productivity any further. Perhaps try getting someone who is less competent than you promoted to be your manager or team director.

    • Driving? Like, a car? Why drive when you can pay two bucks for someone else to do it for you, and not worry about parking and car maintenance? What is this, the 20th century?

    • I used to work on my morning commute. I took the express bus to downtown (where the office was). It was 30-45 minutes of time I could bill as a contractor as I wrote documentation or designed what I'd be working on that day. Didn't do that every day, but often enough it was useful and allowed me to leave a little earlier. I couldn't work on the way home as the bus was too crowded to get a seat, but in the morning there was always a seat available. Note, this was my choice to have a shorter day, not the company's ma

  • Is it true? (Score:5, Interesting)

    by Revek ( 133289 ) on Friday February 13, 2026 @03:38PM (#65987210)
    Is it true that AI code can't be copyrighted? If so, could Spotify's code now be freeware? Semi-trolling minds want to know.
    • by broward ( 416376 ) <{moc.liamg} {ta} {enrohdraworb}> on Friday February 13, 2026 @03:52PM (#65987246) Homepage

      this story reminds me of a company meeting where our new manager was introduced as having delivered his last two projects with zero bugs.

      we all burst into laughter at the same time
      followed by an awkward silence.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        this story reminds me of a company meeting where our new manager was introduced as having delivered his last two projects with zero bugs.

        I once typed out a fairly complex program and it compiled first time without a single syntax error. I was shocked. I'd managed to comment out all but a few lines of code.

    • Trade secrets don't need to be copyrighted or copyrightable. If they don't publish their code, you can't read it.
    • Is it true that AI code can't be copyrighted?

      Pretty much the whole industry assumes that AI-written code is owned by the company who employs the engineer who was using the AI. I don't think this has been litigated, but if it were to go the way you suggest it would create... problems.

      • Re:Is it true? (Score:4, Interesting)

        by Fallen Kell ( 165468 ) on Friday February 13, 2026 @09:58PM (#65987986)
        Well, the courts have already ruled on very similar things, multiple times. If the code is machine-created, even if the user is pushing buttons to tell the computer to run, the end work cannot be copyrighted.

        In 2023, the U.S. District Court for the District of Columbia became the first court to specifically address the copyrightability of AI-generated outputs. The plaintiff challenged the Office's refusal to register an image that was described in his application as "autonomously created by a computer algorithm running on a machine." Affirming the Office's refusal, the court stated that "copyright law protects only works of human creation," and that "human authorship is a bedrock requirement of copyright." It found that "copyright has never stretched so far [as] . . . to protect works generated by new forms of technology operating absent any guiding human hand." Because, by his own representation, the "plaintiff played no role in using the AI to generate the work," the court held that it did not meet the human authorship requirement. The decision has been appealed.

        I have not seen any appeals ruling yet on this. But I also do not expect one, as this follows many other such copyright rulings in the past, such as the "monkey selfie" case, after which the Copyright Office issued a Compendium of U.S. Copyright Office Practices on 12/22/2014 which stated:

        "only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention" and that "Because copyright law is limited to 'original intellectual conceptions of the author', the [copyright] office will refuse to register a claim if it determines that a human being did not create the work. The Office will not register works produced by nature, animals, or plants."

        Then there is the case of the copyright of the comic book "Zarya of the Dawn", authored by artist and AI consultant Kris Kashtanova, in which the images were all created/generated through the use of Midjourney. The Copyright Office provided a copyright only on the compilation of the book itself, but all the individual images generated via Midjourney are not copyrightable and are in the public domain, free for use by anyone, as the simple operation of prompts and instructions to Midjourney was insufficient human authorship to be able to claim a human created the work.

      • But is that assumption correct? The AIs were trained on a lot of open source code with a variety of licenses. They regurgitate code similarly to book excerpts. And we have already had settlements about that. We will have some similar cases for code. It will be interesting to see, especially which parties get sued and settle - the AI companies, or also customers that used their output.

        • They regurgitate code similarly to book excerpts.

          That probably does happen some, but I doubt it's even a significant minority of AI-generated code, which is generally produced to fit into a very specific context. Excerpts wouldn't work.

            • What if you ask it for a specific C library or kernel function implementation? Or answers to common coding test questions? I think it may just return very unoriginal code, minus the license. And where do you draw the line? If variable or function names have all been renamed, or if only indentation has been changed, does it still qualify as original? And does it come with the original license?

            I think there are a whole bunch of legal questions that are yet to be tested in court.

            I have tried GenAI to crea

            • Sure, if you ask it for unoriginal code, it'll give you unoriginal code. But outside of people playing around with it for fun, that's not how it's used. If you go look how it's being used by highly-skilled engineers at top tier companies who pay hundreds of dollars per engineer per month to give their engineers access to the frontier models, basically none of that is unoriginal code. Not that the AI is writing "original code" by itself; there's a lot of human guidance and decisionmaking. The AI is writing

              • by madbrain ( 11432 )

                I'm not saying those tools are not useful, or effective, only questioning the legality.

                While I'm no longer employed, I still code personal projects. One former colleague at Google gave me credits on Code rhapsody, which is connected to Claude sonnet 4.5. I use it mostly to write very complex custom scripts that I don't think anyone has done before. I have found it very helpful. Fair to say I would never have gotten started on any of it without the tool, especially because I can't really write Python - or

                • I'm not saying those tools are not useful, or effective, only questioning the legality.

                  And I'm saying that effectively the whole software industry is using LLMs to write software approximately the way I am. Some a little less so, some more. If the courts were to decide five years from now (it takes that long for courts to decide anything) that AI-produced code is not copyrightable, it would be an incredible rug pull. It would throw years of work by hundreds of thousands of developers into legal limbo. Worse, it would be impossible even to tell what the legal status of that code was because

    • by allo ( 1728082 )

      Kinda, but you would need to separate the AI code from the rest. You would also need to get your hands on it legally. So if someone else leaks it (and breaks a contract, leaking a trade secret) you are allowed to copy it (you didn't have a contract), if you know it's pure AI. You also need to be prepared to win a lawsuit against a company that has the money to fight a long legal battle while you probably do not even have the time to go through with it. So you may have a chance to troll them, but it would be very

  • Well, yeah, the EXISTING ones. The top ones that have been there. I use AI to tell me how to do some obscure Windows OS thing solely because I can vet the answer and verify it's correct. The same goes for talented coders. 20 years from now when nobody knows how to actually read and test code, it'll be a huge problem.
    • by haruchai ( 17472 )

      the way things are going it'll be a lot sooner than 20 years

    • Computer code -- like all machines -- exists in the service of humans. That service remains flawed. Computers are ubiquitous, yet only a few percent of humans can write professional-grade code. OTOH nearly ALL humans can speak "poetically". Perhaps it's long overdue for humans to exit the "explicit" computer-code world and return to the poetic "Queen's English" concrete noun/active verb etc. Let computer code talk to computer code natively. Thus the flaw is removed. Nekbeards/byte
  • AI Hype needs money (Score:5, Informative)

    by fuzzyf ( 1129635 ) on Friday February 13, 2026 @03:47PM (#65987230)
    This smells like bs to me. No way experienced developers are letting AI generate bug fixes or entirely new features using Slack to talk to AI on the way to work.

    The only question here is: What are they selling?

    Increased stock value?
    AI Coding tool that management has a stock option for?


    The simple fact is that AI can generate code, but has absolutely no understanding of anything. It's a very useful tool, but not as what this bs article is trying to sell it as.
    • Just another CEO spouting BS in order to promote his product and pump the share price. Any knowledge he has of the dev process has probably gone through 3 or 4 layers of management Chinese whispers first.

      • I also wonder if their new definition of "best developers" is "developers who rely entirely on LLMs for coding."

        With that semantic shift in place, they can hire new cheap greenies who rely entirely on LLMs because they can't code, and who do nothing but cause trouble for the actual competent developers who are manually fixing everything they break, and spin it to sound like progress.

    • by Brain-Fu ( 1274756 ) on Friday February 13, 2026 @04:22PM (#65987290) Homepage Journal

      The experiences reported in these articles are so utterly unlike the ones I have using AI to generate code. It HAS gotten better in the last year, but it is still nowhere near this capable, for me.

      If I give it too many requirements at once, it completely fails and often damages the code files significantly, and I have to refresh from backup.
      If I give it smaller prompts in a series, doing some testing myself between prompts, there is usually something I need to fix manually. And if I don't, and just let it successfully build on what it built before, the code becomes increasingly more impenetrable. The variable names and function names are "true" but not descriptive (too vague, usually) and when those mount up the code becomes unreadable. It generates code comments but they are utterly worthless noise that point out the outright obvious without telling you anything actually useful.

      When new requirements negate or alter prior ones, the AI does not refactor them into a clean solution but just duplicates code and leaves the old no-longer-needed code behind and makes variable names even more weird to make up for it. The performance of the code decays quickly.

      And on top of all this, it STILL can't succeed at all if you need to do anything that is a little too unique to your business needs. Like a fancy complex loose sort with special rules or whatever. It tries and fails, but tells you it succeeds, and you get code that doesn't work.

      Sometimes it can solve surprisingly hard problems, and then get utterly stuck on something trivial. You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.

      I have good success getting new projects started using AI code generation. When it is just generating mostly scaffolding and foundational feature support code that tends to be pretty generic, it saves me time. But once the aspects of the code that are truly unique to the needs start coming into focus, AI fails.

      I still do most of my coding by hand because of this. I use AI when I can but once this stuttering starts happening I drop it like a hot potato because it causes nothing but problems from then on.

      I simply don't see how the same solution could reliably make consistent and significant changes to a codebase and produce reliable, performant, or even functional code on an ongoing basis. That hasn't ever worked for me and still doesn't, even with the latest gen AI models.

      • You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.

        Copilot often does this when it gets an image wrong and you correct it. It'll get confused and after it gets confused it never gets un-confused- it just gets worse and worse, recognizing the error and apologizing every step of the way as it continues to make it worse and worse. Like, WTF?

        The frustrating part is that it'll often start out creating almost exactly what I want but as you modify or give it corrections, instead of fixing the image, it just progressively wrecks it bit by bit.

        At that point I often

      • I'm pretty sure if you used Spotify level money to have your own AI cluster trained on your own code, it would have been a much different experience.

        The rest of us won't get similar results.

      • >> If I give it too many requirements at once, it completely fails

        Same experience here, but the trick is knowing what 'too many requirements' consists of. The modern agents (see Google's Antigravity for an example) make a plan beforehand that you review and approve. You can make adjustments and tell it to break the implementation out into incremental phases so things don't go awry.

      • by Junta ( 36770 )

        Very consistent with my experience. Sure, it can accelerate certain tasks, but it will blunder along the way.

        Even if it gets something mostly right but I see a mistake, it sounds like they just explain the mistake to it and let it try to fix it (which for me, when I tried, was very unreliable, and more work than just manually amending the code, since I also know that correcting its mistake won't even pay dividends because it won't 'learn' from that interaction). I have similar experiences with people, it's s

      • They need money and lie to non-programmers. This is one of the best summaries I have seen of what "the best" can do. One thing I will add is that sometimes the thing flails out of the blue on something really easy and you know it can't be trusted anymore. At this point it is far better to just go in and fix the problem. The best developers should not waste time trying to wake up a confused LLM. There are many times it is just best to lose the current context and maybe have it write a note to itself. Anyone
      • It would be much more useful if you specify which AI tools you are using. They are not all equal.
    • These kinds of AI fluff pieces are all about trying to assuage their investors' fears. Don't worry, it seems to say: we're at the cutting edge, and no we won't be replaced by a vibe-coded app anytime soon. It could be true, it could be all made up, but it doesn't matter as long as their shareholders feel good.

      • by Junta ( 36770 )

        Yeah, they speak to stupid shareholders, and shareholders that might not be stupid, but are willing to bank on the stupidity of others, either way, currently money wise the money flows to the hype.

        But very good point that this *should* be a double-edged sword. Our software can be completely constructed by low-skilled workers using an LLM, so what might prevent competition from just eating their lunch on the technical front? Of course, it is just Spotify, and it's not exactly a technical marvel to begin with, basically

    • by ffkom ( 3519199 )

      The only question here is: What are they selling? Increased stock value?

      From what I have observed in recent years, C-level people believe that their company will profit greatly from using LLM-based services that they pay other companies for. And when their stock value drops, they find out too late that any potential upside of their use of such services would also apply to any competitor, to the point where mundane SaaS-services can be vibe-coded by anyone, accepting the same lower quality standards they introduced by using LLM-generated code.

      But to the question what are they s

      • by Junta ( 36770 )

        Well, for a company like Spotify, the downside isn't so scary because their software isn't exactly doing rocket science either. Their business is internet radio with on-demand capability. The technical piece is relatively basic enablement of that direction that isn't difficult and others can and have easily competed on technical design. See the cited three features, *super* easy sounding stuff.

        But I *have* seen a software sales guy fail to understand the point you just made. He was excited because now wh

    • by tlhIngan ( 30335 )

      Or Spotify has simply laid off their best developers and replaced them with AI.

      Or Spotify has gone to crap and no one gives a damn because Spotify blames all their problems on Apple Music Monopoly.

      Or Spotify is trying to cash in on AI hype money while they still can.

      Or Spotify's developers were on vacation since December. They're in Europe, so they get like 2 months of vacation, right?

      Or Spotify just hasn't done anything that requires code changes in 2 months. The backend systems are mature and the APIs wor

    • Autocompletes are counted as the AI writing it!
      It completes 1 letter of the word, and it gets counted as typing that word for you.

      Think how high your stats would be on your phone for how much the phone's autocomplete and word suggestion "wrote for you."

      If you use AI coding and everything you type gets replaced by suggestions, and your corrections are also completed, then it can be counted as 100% even if you actually wrote most of it; it's all in how you count it, and they are counting this stuff at the extremes.
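      A toy illustration of that counting argument, scoring the same editing session under two made-up rules (the numbers are invented; real tools do not publish their accounting):

```python
# Each tuple: (chars typed by the human, chars inserted by a completion)
# for one accepted token in an editing session. Entirely fabricated data.
session = [(1, 7), (3, 0), (1, 9), (4, 0), (2, 6)]

human = sum(h for h, _ in session)
ai = sum(a for _, a in session)

# Rule A: count only the characters each party actually inserted.
by_chars = ai / (human + ai)

# Rule B: any token touched by a completion counts as 100% AI-written,
# even if the human typed most of it.
ai_tokens = sum(h + a for h, a in session if a > 0)
by_tokens = ai_tokens / (human + ai)

print(f"by characters: {by_chars:.0%}, by touched tokens: {by_tokens:.0%}")
```

      Same session, two different headline numbers; the more generous the counting rule, the closer "AI wrote it" gets to 100%.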

    • No way experienced developers are letting AI generate bug fixes or entirely new features using Slack to talk to AI on the way to work.

      Depends on whether they can review the code and tests effectively first. I frequently push commits without ever typing a line of code myself: Tell the LLM to write the test, and how to write it, check the test, tell the LLM how to tweak it if necessary, then tell the LLM to write the code and verify the test passes, check the code, tell the LLM what to fix, repeat until good, then tell the LLM to write the commit message (which I also review), then tell it to commit and push.

      Actually "tell the LLM what t
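      The loop described above can be sketched roughly like this. The llm() and human_approves() stubs are placeholders for the real model call and the dev's review; the point of the shape is that a human gate sits between every generation step and the commit:

```python
def llm(instruction: str) -> str:
    # Placeholder for the real model call; returns a tagged string here.
    return f"<output for: {instruction}>"

def human_approves(artifact: str) -> bool:
    # The dev's review step, stubbed to always approve for this sketch.
    return True

def reviewed_commit(task: str) -> list[str]:
    artifacts = []
    test = llm(f"write a test for {task}")
    if not human_approves(test):        # dev reads the test before anything else
        raise RuntimeError("test rejected")
    artifacts.append(test)
    code = llm(f"write code for {task} until the test passes")
    if not human_approves(code):        # dev reads the code too
        raise RuntimeError("code rejected")
    artifacts.append(code)
    msg = llm(f"write a commit message for {task}")
    if not human_approves(msg):
        raise RuntimeError("message rejected")
    artifacts.append(msg)
    return artifacts                    # only now does anything get pushed

artifacts = reviewed_commit("bug #123")
```

      Whether this counts as "not writing code" or just "reviewing code someone else typed" is the whole argument of the thread.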

    • by allo ( 1728082 )

      Who says they push to production? I'd hope Spotify has a test system and possibly a CI pipeline. I still don't think you should have to code on the way to work.

    • by mkwan ( 2589113 )

      Anthropic was putting out a lot of PR just before their capital raise. Maybe they bribed Spotify to join in?
      Anyway, the raise was successful, so hopefully the hype will die down for a while.

    • It's plausible that they didn't write any code because they're still coming back from a Christmas vacation.

    • No, it is not bullshit; the Slashdot "AI baaad!!" and other bollocks opinions presented as factual are out in full force as usual. A friend of mine is a senior developer at Shopify, and they do the same thing. She also does not have a regular scrum; she tells the AI, and it collects everyone's notes, then summarizes them. AI is here, sorry slashbros.
    • The lords of AI are elevated the highest when people believe not that AI will save the world, but that it already has.

      In order to get a bailout you have to be critical infrastructure, too big to fail. That's the next phase of the scam.

  • by test321 ( 8891681 ) on Friday February 13, 2026 @03:58PM (#65987260)

    Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office.

    This makes Spotify the most awful workplace on the planet for suggesting (even if not demanding) that employees should be working during their commute.

    But also, this makes it the most poorly managed software shop. They're expecting employees to do away with being serious, and to carelessly push updates "from Slack during commute". AI aside, you'd think he wants his engineers to at least pay attention to what they're doing. No way this attitude can end up well for the product.

    • You had the power before and didn't get a union when you had a chance.

      ANY time you spend working you should be paid. The CEO gets paid a million per week and nothing anybody does is really worth that; even if they "work" long hours... but we are supposed to work on all our free time.

    • This makes Spotify the most awful workplace on the planet for suggesting (even if not demanding), that employees should be working during commute.

      Well yes, but you never asked what this means for the employee. I'm perfectly happy being expected to work during my commute providing that the commute is considered working hours. In fact I would prefer this outcome.

      But who am I kidding, they are just another greedy corp.

  • why commute? (Score:5, Insightful)

    by awwshit ( 6214476 ) on Friday February 13, 2026 @04:18PM (#65987284)

    If you can do all of your work while commuting then why commute at all? Obviously does not matter where you sit.

  • ...so all guns are out and blazing. Propaganda is more aggressive with every passing moment.

    Because so many bosses will have to answer very hard questions about vast quantities of money once it bursts.

    It can and will be worse.

  • by JustAnotherOldGuy ( 4145623 ) on Friday February 13, 2026 @04:37PM (#65987330) Journal

    Oh so that's why Spotify stopped working and some of the features disappeared. Awesome, I expect the next revision to just be a big ol' Play button with no other interface at all.

  • If I were one of the best devs, I wouldn't want to be relegated to vibe coding.

    Maybe I'm just weird in that I actually enjoy solving problems and trying to think of the way things are put together?

  • by MNNorske ( 2651341 ) on Friday February 13, 2026 @05:02PM (#65987396)
    If your AI can act on instructions given in slack, update code in source control, and then compile/deploy that code you just opened a whole can of worms. If I were a threat actor I would 100% be aiming to try and compromise their slack. Just tell the AI to introduce these few lines of code into the build... Or add this feature... It sounds like a security nightmare to me.
  • I haven't written a line of code since November...so does that make me better than Spotify's best developers?
  • But as a developer, writing code, making it work, creating something, is actually the whole fun and love of development.
  • You mean that obnoxious, always-on-top-of-the-page AI DJ that literally nobody wants?

  • I mean this is just a bullshit metric: Declare "the best" to be those that have not written code, then claim the best have not written code.

    • by Junta ( 36770 )

      Yeah, noticed that too, he felt the need to clarify 'best'. That sounds super odd, why would specifically only the best be able to claim that? I would have assumed it more likely for lower level developers to claim that achievement. So if you have some people writing code but only the 'best' not writing code....

      Of course, I suspect it's like some executive I recently dealt with who felt that 'developers' were beneath him until they left that stupid 'coding' behind and became just executives. So he would be

  • Not a line? Really? Wrapped something in a conditional? Added a comment?
  • C'mon, Linus, why won't you accept "my" kernel updates? ;)

  • by Daina.0 ( 7328506 ) on Friday February 13, 2026 @08:02PM (#65987784)

    The best developers spend their time herding other developers, in meetings, and jabbering with management. I've worked for lots of companies and for a lead developer to say they haven't touched a line of code in months sounds right, AI or not!

  • of code because they were cheaper to replace with AI.
  • To properly review code, you need a full-sized screen. I don't believe this story, it's nothing but hype.

  • Should I stock up on popcorn, for the next security exploit for Spotify? :)

    I hope their "engineers" are spending time auditing the code that is generated by AI and used in production... otherwise, this could be very career limiting for them.

  • Just like my manager showing my boss how AI can write code by demonstrating a 'SELECT * FROM table LIMIT 10;' query.

  • I've been using AI to write code recently. I figured I should give it a go.
    Not reviewing and understanding and editing the output code is a recipe for disaster.

    For example, in code for a cryptographic hash function there are padding rules to bring the data size to a multiple of the block size and add a length. With SHA-256, that means a minimum of 65 extra bits: one pad bit plus a 64-bit length field. If your data length mod the block size leaves at least 65 bits free, you add the pad bit, put the length at the end of the block, and fill the gap with zeros; if not, the padding spills into a whole extra block.
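
    That 65-extra-bit minimum is SHA-256's padding rule from FIPS 180-4 (a single 1 bit, zero fill, then a 64-bit length). A minimal Python sketch of the edge case, as an illustration only, not production crypto:

    ```python
    def sha256_pad(msg: bytes) -> bytes:
        """Apply SHA-256's padding rule (FIPS 180-4): append a 1 bit
        (as the byte 0x80), zero bytes, then the message bit length as
        a 64-bit big-endian integer, so the result is a multiple of
        the 64-byte block size."""
        bit_len = len(msg) * 8
        padded = msg + b"\x80"                         # the mandatory "1" bit
        padded += b"\x00" * ((56 - len(padded)) % 64)  # zero fill
        return padded + bit_len.to_bytes(8, "big")     # 64-bit length field

    # The edge case: with 55 message bytes, the 9 padding bytes
    # (1 marker + 8 length) exactly fill one 64-byte block; one more
    # message byte and the padding spills into a second block.
    print(len(sha256_pad(b"a" * 55)))  # 64
    print(len(sha256_pad(b"a" * 56)))  # 128
    ```

    That off-by-one boundary is exactly the kind of thing an unreviewed AI implementation can get wrong while still passing the happy-path test.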
