Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI (techcrunch.com)
Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week.
Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal.
Guess who'll be kicked to the curb real soon (Score:5, Insightful)
and they won't get a Honkin' big severance either
Re: (Score:2)
and they won't get a Honkin' big severance either
Some thought they were honkin' Bobo.
Turns out Bobo was THE new hire.
Re: Guess who'll be kicked to the curb real soon (Score:5, Insightful)
Until they release a breaking bug on their morning commute, and some people on the bus suddenly lose their Spotify. Then said developer has no idea what went wrong, likely for hours or even days, while the AI keeps hallucinating fixes that don't work, as both it and the developers have no idea what they're actually doing.
Re: (Score:2)
It's fine, let Spotify show the world how it goes when you de-skill a team.
It's a mostly finished product now anyway, this is probably more telling that no new client of theirs has demanded something that their cookie cutter team didn't have a template for already.
Re: (Score:2)
I suspect this is them pumping their stock, but then again there really isn't a hell of a lot to Spotify. It's literally just a streaming MP3 player; you learn to write one of those in a 102-level computer science course, for Pete's sake.
Re: Guess who'll be kicked to the curb real soon (Score:1)
/facepalm
Re: Guess who'll be kicked to the curb real soon (Score:2)
How would you like to lose all the money in your bank and investment accounts ?
Of course bugs matter. This has got to be your stupidest statement yet.
Re: Guess who'll be kicked to the curb real soon (Score:2)
I had not heard those claims about Manning or Sanders. These appear to be unconfirmed rumors. The claims that they received FAS diagnoses don't seem to be substantiated, unless we are talking about online diagnoses by strangers.
Re: (Score:1)
Whoever that is, they lifted that paragraph from a past post I made; it may be word-for-word, but I'm uncertain.
I had not heard those claims about Manning or Sanders. These appear to be unconfirmed rumors.
In the case of Manning, his defense presented that as evidence during his trial:
https://web.archive.org/web/20... [archive.org]
You can go directly to the NYT itself, but you're either going to hit a paywall or a truncated article. Relevant portion:
Under questioning from Mr. Coombs, Capt. David Moulton, a clinical psychiatrist who extensively examined his client after his arrest, described the stress and isolation that Private Manning was under, and framed his release of the documents to WikiLeaks as the immature, even neurotic act of an idealist who thought he could end all wars.
Under such stress, Captain Moulton said, “his abnormal personality traits became more prominent — he was acting out his grandiose ideation, his difficulties during that post-adolescent period. And ultimately, when he came into contact or had contact with the information which he ended up releasing, his decision-making capacity at that point was influenced by the stress of his situation, for sure.”
He also said Private Manning exhibited traits of fetal alcohol syndrome and some of the social difficulties associated with Asperger’s syndrome, along with narcissistic tendencies like grandiosity and haughtiness which became heightened under duress.
Along with Captain Moulton, Private Manning’s older sister, Casey Major, and aunt, Debra Van Alstyne, testified that he had been underweight since birth because his mother drank and smoked during her pregnancy.
There were some other bits that inspired the diagnosis, which mostly came from his features, especially his chin and short stature, but I don't see it in that partic
slopi slopi slopi slop. (Score:2)
slopi slopi slopi slop.
AI hallucinates all my code.
What could possibly go wrong ?
slopi slopi slopi slop.
Re: (Score:2)
Spotify is in Sweden. They don't fire people in Sweden.
Re: (Score:2)
"Spotify is in Sweden. They don't fire people in Sweden"
Spotify reduced headcount by ~20% in 4 rounds of cuts made between 2023 and 2025
Please don't (Score:5, Insightful)
Please don't work during your morning commute. Especially if you're the one driving.
But almost as importantly, if your employer makes you come into the office then you should ONLY work while at the office. And they can go F themselves if they want you to work on your own time as well.
Re:Please don't (Score:5, Insightful)
Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack" before they have a chance to run through the changes with their own eyes.
This doesn't pass the smell test.
* I realize this may be true regardless
Re: (Score:3)
It's Spotify, it doesn't have to work correctly or even at all. I'll worry when I hear about one of my financial institutions doing this.
Re: (Score:3)
Equifax has entered the chat...
Re: (Score:1)
Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack"
I didn't see the word "competent" in the article, so I'm guessing that explains their willingness to push this black-box code to prod. (And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)
Re: (Score:3)
(And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)
Hey I think I used to work with one of those guys...
Many years ago, I worked with a Linux admin who would do that sort of crap all the freaking time. I remember one year he rebuilt our mail server, using Slackware rather than our standard Red Hat because "he wanted to learn Slackware" (his words, after the fact). He threw it together, then powered it up on his way out the door for a two-week-long ski trip in another country.
Oh, did I mention this was on December 23rd?
Guess what happened, and who had to fix
Re:Please don't (Score:4, Insightful)
Maybe I'm just naive* but - it's hard for me to imagine a competent developer willingly allowing new code to be "pushed to Slack"
I didn't see the word "competent" in the article, so I'm guessing that explains their willingness to push this black-box code to prod. (And probably on a Friday at 6pm or so, just before they leave for a weekend trip.)
Pardon me... I didn't RTFA. In this context, does "pushed to Slack" actually mean the same as "push this black-box code to prod"?!?!
The way I read TFS was that:
* Dev uses phone to tell some LLM to do something.
* LLM does the thing, if it can, and will send it down the build pipeline.
* That can take a while ("during their morning commute"), which is actually a big negative - iteration requires lots of waiting for the computer to do things.
* Later, the build may complete. If so, the dev can test out the new feature (I'd refer to it as a prototype, but to each their own).
That's the same sort of thing I've been doing WITHOUT an LLM for decades! Working late, or early, or whenever... getting some big tasks lined up (ex. bunch of shell scripts in different terminals and such)... firing them off while I go take care of myself for a bit (sleep, eat, drive, errands, small tasks, email follow ups, etc..)... analyze the results after it's done (often finding something that was overlooked and the whole lot needs rerun after a one line bugfix).
TBH, it's more of a testament to whoever set up their dev infrastructure such that this is feasible. The actual LLM involvement is mainly in the code generation/edits, and the rest is (I assume) the automated lint/test/build/deploy systems, likely with some well-defined LLM tool/MCP integration (which probably took a good deal of manual involvement).
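For concreteness, that plumbing can be sketched with nothing but the standard library. Everything here is hypothetical (the `/honk` command name, the `run_pipeline.sh` script, the port); a real Slack app would also verify request signatures and hand the prompt to an LLM agent rather than straight to a shell script:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs


def parse_command(body: bytes) -> str:
    """Extract the free-text prompt from a Slack slash-command POST body."""
    return parse_qs(body.decode()).get("text", [""])[0]


class SlashCommandHandler(BaseHTTPRequestHandler):
    """Toy endpoint for a hypothetical /honk slash command.

    A real integration would verify Slack's signing secret and route the
    prompt through an LLM agent; here it just kicks off a build script
    that is assumed to post its own result back to the channel.
    """

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        prompt = parse_command(body)
        # Fire-and-forget: Slack expects an acknowledgement within a few
        # seconds, so the long-running pipeline runs in the background.
        subprocess.Popen(["./run_pipeline.sh", prompt])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Build queued; results will be posted here.")


# To actually serve (blocks forever):
# HTTPServer(("", 8080), SlashCommandHandler).serve_forever()
```

The interesting engineering is everything behind `run_pipeline.sh`: the lint/test/build/deploy automation that makes "pushed to Slack before arriving at the office" possible at all.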
At a past employer, my team had a QA team assigned. Once they were actively involved in day to day processes, devs stopped testing their own code. It was kinda infuriating. They'd just check it in without even trying it themselves to see if it even looked close to right, allowing the QA process to do their preliminary testing. If your devs have already been abdicating those responsibilities, it's not a big leap to let an LLM throw garbage at the process - at least it'll try to write some tests and documentation first.
Re: Please don't (Score:2)
Re: (Score:2)
I think we'd completely agree if we were on the same job. IE: GUI thing - dev should look at it. New backend function - dev should write a test for it. New API function - dev should ensure docs are right and write a test for it. And I'd go so far as to say that the newly written tests should be run and they should pass before the code gets handed off.
An example of what I was referring to...
Let's say you have a dev add a "nickname" field to a person info page. But they copy/paste a dropdown instead of a text
Re: (Score:2)
I test my tests manually, then include my tests with my change and let CI test the whole thing across the matrix of configurations.
At my previous job, I had some tools to manually trigger the more extensive nightly tests for a change. That was nice if there are long-running performance tests that could regress because of my change. Nice to know before I submit to staging if the change is never going to make it onto the nightly integration. Saves a lot of CI runner time too when I'm a little proactive.
They said "best" not competent, by what metric? (Score:1)
Re: (Score:2)
Make a stupid metric, get a stupid result.
Re: They said "best" not competent, by what metric (Score:2)
We lost so much productivity when we switched code wrapping from 80 to 100 columns.
Re: (Score:2)
100% agree.
However, I don't have a commute (WFH) and I don't really work even when I'm 'working', so what I'd appreciate would be some tips on gracefully reducing my productivity even further.
Re: (Score:2)
I don't think it's possible to reduce your productivity any further. Perhaps try getting someone who is less competent than you promoted to be your manager or team director.
Driving?? (Score:2)
Driving? Like, a car? Why drive when you can pay two bucks for someone else to do it for you, and not worry about parking and car maintenance? What is this, the 20th century?
Re: (Score:2)
I think it's $38 to get someone else to drive for you during surge pricing.
Re: (Score:2)
I used to work on my morning commute. I took the express bus to downtown (where the office was). It was 30-45 minutes of time I could bill as a contractor as I wrote documentation or designed what I'd be working on that day. Didn't do that every day, but often enough it was useful and allowed me to leave a little earlier. I couldn't work on the way home as the bus was too crowded to get a seat, but in the morning there was always a seat available. Note, this was my choice to have a shorter day, not the company's ma
Is it true? (Score:5, Interesting)
Re: Is it true? (Score:5, Funny)
this story reminds me of a company meeting where our new manager was introduced as having delivered his last two projects with zero bugs.
we all burst into laughter at the same time
followed by an awkward silence.
Re: (Score:2, Funny)
this story reminds me of a company meeting where our new manager was introduced as having delivered his last two projects with zero bugs.
I once typed out a fairly complex program and it compiled first time without a single syntax error. I was shocked. I'd managed to comment out all but a few lines of code.
Re: (Score:2)
Re: (Score:3)
If they don't publish their code, you can't read it.
Ghidra would like a word with you.
Re: (Score:2)
Can't wait until Ghidra or some alternative goes full AI reverse compiler with meaningful results.
That will really take the gloves off.
Re: (Score:2)
Is it true that AI code can't be copyrighted?
Pretty much the whole industry assumes that AI-written code is owned by the company who employs the engineer who was using the AI. I don't think this has been litigated, but if it were to go the way you suggest it would create... problems.
Re:Is it true? (Score:4, Interesting)
In 2023, the U.S. District Court for the District of Columbia became the first court to specifically address the copyrightability of AI-generated outputs. The plaintiff challenged the Office's refusal to register an image that was described in his application as "autonomously created by a computer algorithm running on a machine." Affirming the Office's refusal, the court stated that "copyright law protects only works of human creation," and that "human authorship is a bedrock requirement of copyright." It found that "copyright has never stretched so far [as] . . . to protect works generated by new forms of technology operating absent any guiding human hand." Because, by his own representation, the "plaintiff played no role in using the AI to generate the work," the court held that it did not meet the human authorship requirement. The decision has been appealed.
I have not seen any appeals ruling yet on this. But I also do not expect one, as this follows many other such copyright rulings in the past, such as the "monkey selfie" case, after which the Copyright Office issued the Compendium of U.S. Copyright Office Practices (12/22/2014), which stated:
"only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention" and that "Because copyright law is limited to 'original intellectual conceptions of the author', the [copyright] office will refuse to register a claim if it determines that a human being did not create the work. The Office will not register works produced by nature, animals, or plants."
Then there is the case of the copyright of the comic book "Zarya of the Dawn", authored by artist and AI consultant Kris Kashtanova, in which the images were all created/generated through the use of Midjourney. The Copyright Office granted a copyright only on the compilation of the book itself, but all the individual images generated via Midjourney are not copyrightable and are in the public domain, free for use by anyone, as the simple operation of prompts and instructions to Midjourney was insufficient human authorship to be able to claim a human created the work.
Re: Is it true? (Score:2)
But is that assumption correct ? The AIs were trained on a lot of open source code with a variety of licenses. They regurgitate code similarly to book excerpts. And we have already had settlements about that. We will have some similar cases for code. It will be interesting to see, especially which parties get sued and settle - the AI companies, or also customers that used their output.
Re: (Score:2)
They regurgitate code similarly to book excerpts.
That probably does happen some, but I doubt it's even a significant minority of AI-generated code, which is generally produced to fit into a very specific context. Excerpts wouldn't work.
Re: Is it true? (Score:2)
What if you ask it for a specific C library or kernel function implementation ? Or answers to common coding test questions ? I think it may just return very unoriginal code, minus the license. And where do you draw the line ? If variable or function names have all been renamed, or if only indentation has been changed, does it still qualify as original ? And does it come with the original license ?
I think there are a whole bunch of legal questions that are yet to be tested in court.
I have tried GenAI to crea
Re: (Score:2)
Sure, if you ask it for unoriginal code, it'll give you unoriginal code. But outside of people playing around with it for fun, that's not how it's used. If you go look how it's being used by highly-skilled engineers at top tier companies who pay hundreds of dollars per engineer per month to give their engineers access to the frontier models, basically none of that is unoriginal code. Not that the AI is writing "original code" by itself; there's a lot of human guidance and decisionmaking. The AI is writing
Re: (Score:2)
I'm not saying those tools are not useful, or effective, only questioning the legality.
While I'm no longer employed, I still code personal projects. One former colleague at Google gave me credits on Code rhapsody, which is connected to Claude sonnet 4.5. I use it to mostly to write very complex custom scripts that I don't think anyone has done before. I have found it very helpful. Fair to say I would never have gotten started on any of it without the tool, especially because I can't really write Python - or
Re: (Score:2)
I'm not saying those tools are not useful, or effective, only questioning the legality.
And I'm saying that effectively the whole software industry is using LLMs to write software approximately the way I am. Some a little less so, some more. If the courts were to decide five years from now (it takes that long for courts to decide anything) that AI-produced code is not copyrightable, it would be an incredible rug pull. It would throw years of work by hundreds of thousands of developers into legal limbo. Worse, it would be impossible even to tell what the legal status of that code was because
Re: (Score:2)
Kinda, but you would need to separate the AI code from the rest. You would also need to get your hands on it legally. So if someone else leaks it (and breaks a contract and leaks a trade secret) you are allowed to copy it (you didn't have a contract), if you know it's pure AI. You also need to be prepared to win a lawsuit against a company that has the money to fight a long legal battle while you probably do not even have the time to go through with it. So you may have a chance to troll them, but it would be very
Not the solution, not the problem (Score:1)
Re: (Score:3)
the way things are going it'll be a lot sooner than 20 years
Re: (Score:2)
Re: Not the solution, not the problem (Score:2)
It's "King's English" now.
Re: (Score:2)
Let's not and say we did.
AI Hype needs money (Score:5, Informative)
The only question here is: What are they selling?
Increased stock value?
AI Coding tool that management has a stock option for?
The simple fact is that AI can generate code, but has absolutely no understanding of anything. It's a very useful tool, but not as what this bs article is trying to sell it as.
Re: AI Hype needs money (Score:3)
Just another CEO spouting BS in order to promote his product and pump the share price. Any knowledge he has of the dev process has probably gone through 3 or 4 layers of management Chinese whispers first.
Re: (Score:2)
I also wonder if their new definition of "best developers" is "developers who rely entirely on LLMs for coding."
With that semantic shift in place, they can hire new cheap greenies who rely entirely on LLMs because they can't code, and who do nothing but cause trouble for the actual competent developers who are manually fixing everything they break, and spin it to sound like progress.
Re:AI Hype needs money (Score:5, Interesting)
The experiences reported in these articles are so utterly unlike the ones I have using AI to generate code. It HAS gotten better in the last year, but it is still nowhere near this capable, for me.
If I give it too many requirements at once, it completely fails and often damages the code files significantly, and I have to refresh from backup.
If I give it smaller prompts in a series, doing some testing myself between prompts, there is usually something I need to fix manually. And if I don't, and just let it successfully build on what it built before, the code becomes increasingly more impenetrable. The variable names and function names are "true" but not descriptive (too vague, usually) and when those mount up the code becomes unreadable. It generates code comments but they are utterly worthless noise that point out the outright obvious without telling you anything actually useful. When new requirements negate or alter prior ones, the AI does not refactor them into a clean solution but just duplicates code and leaves the old no-longer-needed code behind and makes variable names even more weird to make up for it. The performance of the code decays quickly. And on top of all this, it STILL can't succeed at all if you need to do anything that is a little too unique to your business needs. Like a fancy complex loose sort with special rules or whatever. It tries and fails, but tells you it succeeds, and you get code that doesn't work.
Sometimes it can solve surprisingly hard problems, and then get utterly stuck on something trivial. You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.
I have good success getting new projects started using AI code generation. When it is just generating mostly scaffolding and foundational feature support code that tends to be pretty generic, it saves me time. But once the aspects of the code that are truly unique to the needs start coming into focus, AI fails.
I still do most of my coding by hand because of this. I use AI when I can, but once this stuttering starts happening I drop it like a hot potato because it causes nothing but problems from then on.
I simply don't see how the same solution could reliably make consistent and significant changes to a codebase and produce reliable, performant, or even functional code on an ongoing basis. That hasn't ever worked for me and still doesn't, even with the latest gen AI models.
Re: (Score:3)
You tell it what is wrong and it shuffles a lot of code around and says "there, fixed" and it is still doing exactly what it did wrong before.
Copilot often does this when it gets an image wrong and you correct it. It'll get confused, and after it gets confused it never gets un-confused; it just gets worse and worse, recognizing the error and apologizing every step of the way as it continues to make it worse and worse. Like, WTF?
The frustrating part is that it'll often start out creating almost exactly what I want but as you modify or give it corrections, instead of fixing the image, it just progressively wrecks it bit by bit.
At that point I often
Re: (Score:2)
I'm pretty sure if you used Spotify level money to have your own AI cluster trained on your own code, it would have been a much different experience.
The rest of us won't get similar results.
Re: (Score:2)
>> If I give it too many requirements at once, it completely fails
Same experience here, but the trick is knowing what 'too many requirements' consists of. The modern agents (see Google's Antigravity for an example) make a plan beforehand that you review and approve. You can make adjustments and tell it to break the implementation out into incremental phases so things don't go awry.
Re: (Score:2)
Very consistent with my experience. Sure, it can accelerate certain tasks, but it will blunder along the way.
Even if it gets something mostly right but I see a mistake, it sounds like they say to explain the mistake to it and let it try to fix it (which for me, when I tried, was very unreliable, and more work than just manually amending the code, since I also know that correcting its mistake won't even pay dividends because it won't 'learn' from that interaction). I have similar experiences with people, it's s
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
These kinds of AI fluff pieces are all about trying to assuage their investors' fears. Don't worry, it seems to say: we're at the cutting edge, and no we won't be replaced by a vibe-coded app anytime soon. It could be true, it could be all made up, but it doesn't matter as long as their shareholders feel good.
Re: (Score:2)
Yeah, they speak to stupid shareholders, and shareholders that might not be stupid, but are willing to bank on the stupidity of others, either way, currently money wise the money flows to the hype.
But very good point that this *should* be a double-edged sword. Our software can be completely constructed by low-skilled workers using an LLM, so what might prevent competition from just eating their lunch on the technical front? Of course, it is just Spotify, and it's not exactly a technical marvel to begin with, basically
Re: (Score:2)
The only question here is: What are they selling? Increased stock value?
From what I have observed in recent years, C-level people believe that their company will profit greatly from using LLM-based services that they pay other companies for. And when their stock value drops, they find out too late that any potential upside of their use of such services would also apply to any competitor, to the point where mundane SaaS-services can be vibe-coded by anyone, accepting the same lower quality standards they introduced by using LLM-generated code.
But to the question what are they s
Re: (Score:2)
Well, for a company like Spotify, the downside isn't so scary because their software isn't exactly doing rocket science either. Their business is internet radio with on-demand capability. The technical piece is relatively basic enablement of that direction that isn't difficult and others can and have easily competed on technical design. See the cited three features, *super* easy sounding stuff.
But I *have* seen a software sales guy fail to understand the point you just made. He was excited because now wh
Re: (Score:2)
Or Spotify has simply laid off their best developers and replaced them with AI.
Or Spotify has gone to crap and no one gives a damn because Spotify blames all their problems on Apple Music Monopoly.
Or Spotify is trying to cash in on AI hype money while they still can.
Or Spotify's developers were on vacation since December. They're in Europe, so they get like 2 months of vacation, right?
Or Spotify just hasn't done anything that requires code changes in 2 months. The backend systems are mature and the APIs wor
Spellcheck writes 50% of my writing ! (Score:2)
Autocompletes are counted as it writing it!
If it completes 1 letter of the word, it gets counted as typing that whole word for you.
Think how high your stats would be on your phone for how much the phone's autocomplete and word suggestion "wrote for you."
If you use AI coding and everything you type gets replaced by search suggestions, and your corrections are also completed, then it can be 100% even if you actually wrote most of it; it's how you count it, and they are counting this stuff at extremes.
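The inflation described above is easy to reproduce: count every word the tool touched as "AI-written" and the percentage becomes whatever you want. A toy illustration (the counting rules here are made up to mirror the argument, not any vendor's actual metric):

```python
def completion_stats(words: list[tuple[int, int]]) -> tuple[float, float]:
    """Compare two ways of crediting autocomplete.

    Each (typed, completed) pair is one word: characters the human typed
    vs. characters the tool filled in. The honest share counts characters;
    the inflated share credits the tool with every word it touched at all,
    which is how "AI wrote X% of my code" headlines get made.
    """
    typed = sum(t for t, _ in words)
    completed = sum(c for _, c in words)
    total = typed + completed
    honest = 100.0 * completed / total
    # Inflated rule: any word with at least one completed character
    # counts entirely as the tool's output.
    touched = sum(t + c for t, c in words if c > 0)
    inflated = 100.0 * touched / total
    return honest, inflated


# Human types 9 of every 10 characters; the tool supplies the last one:
honest, inflated = completion_stats([(9, 1)] * 20)
print(honest, inflated)  # 10.0 100.0
```

Same keystrokes, two wildly different headlines, depending solely on which counting rule the press release picks.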
Re: (Score:2)
No way experienced developers are letting AI generate bug fixes or entirely new features using Slack to talk to AI on the way to work.
Depends on whether they can review the code and tests effectively first. I frequently push commits without ever typing a line of code myself: Tell the LLM to write the test, and how to write it, check the test, tell the LLM how to tweak it if necessary, then tell the LLM to write the code and verify the test passes, check the code, tell the LLM what to fix, repeat until good, then tell the LLM to write the commit message (which I also review), then tell it to commit and push.
Actually "tell the LLM what t
Re: (Score:2)
Who says they push to production? I'd hope Spotify has a test system and possibly a CI pipeline. I still don't think you should have to code on the way to work.
Re: (Score:2)
Anthropic was putting out a lot of PR just before their capital raise. Maybe they bribed Spotify to join in?
Anyway, the raise was successful, so hopefully the hype will die down for a while.
Re: AI Hype needs money (Score:2)
It's plausible that they didn't write any code because they're still coming back from a Christmas vacation.
Re: AI Hype needs money (Score:2)
Re: (Score:2)
The lords of AI are elevated the highest when people believe not that AI will save the world, but that it already has.
In order to get a bailout you have to be critical infrastructure, too big to fail. That's the next phase of the scam.
Spotify = worst company to work for (Score:4, Funny)
Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office.
This makes Spotify the most awful workplace on the planet for suggesting (even if not demanding) that employees should be working during their commute.
But it also makes it the most poorly managed software shop. They're expecting employees to do away with being serious and carelessly push updates "from Slack during commute". AI aside, you'd think he wants his engineers to at least pay attention to what they're doing. No way this attitude can end up well for the product.
IT fools are antiunion. (Score:2)
You had the power before and didn't get a union when you had a chance.
ANY time you spend working you should be paid. The CEO gets paid a million per week and nothing anybody does is really worth that; even if they "work" long hours... but we are supposed to work on all our free time.
Re: (Score:2)
This makes Spotify the most awful workplace on the planet for suggesting (even if not demanding), that employees should be working during commute.
Well yes, but you never asked what this means for the employee. I'm perfectly happy being expected to work during my commute providing that the commute is considered working hours. In fact I would prefer this outcome.
But who am I kidding, they are just another greedy corp.
why commute? (Score:5, Insightful)
If you can do all of your work while commuting then why commute at all? Obviously does not matter where you sit.
A Bubble Is In A Trouble... (Score:2)
...so all guns are out and blazing. Propaganda is more aggressive with every passing moment.
Because so many bosses will have to answer very hard questions about vast quantities of money once it bursts.
It can and will be worse.
That explains it (Score:3)
Oh so that's why Spotify stopped working and some of the features disappeared. Awesome, I expect the next revision to just be a big ol' Play button with no other interface at all.
That sounds depressing (Score:2)
If I was one of the best devs, I wouldn't want to be relegated to vibe coding.
Maybe I'm just weird in that I actually enjoy solving problems and trying to think of the way things are put together?
Every threat actor now aiming for slack (Score:5, Interesting)
Their best developers aren't writing code (Score:2)
So only the worst ones are!
Well, considering the current Spotify client... (Score:1)
...it can't get much worse.
November (Score:2)
Yuk (Score:2)
AI-prompted playlists (Score:2)
You mean that obnoxious, always-on-top-of-the-page AI DJ that literally nobody wants?
May not be their "best" devs after all then... (Score:2)
I mean this is just a bullshit metric: Declare "the best" to be those that have not written code, then claim the best have not written code.
Re: (Score:2)
Yeah, noticed that too, he felt the need to clarify 'best'. That sounds super odd, why would specifically only the best be able to claim that? I would have assumed it more likely for lower level developers to claim that achievement. So if you have some people writing code but only the 'best' not writing code....
Of course, I suspect it's like some executive I recently dealt with who felt that 'developers' were beneath him until they left that stupid 'coding' behind and became just executives. So he would be
Re: May not be their "best" devs after all then... (Score:2)
Hmm (Score:2)
LMAO, 0 lines (Score:2)
C'mon, Linus, why won't you accept "my" kernel updates? ;)
Lazy, Impatient, and Hubris (Score:4, Insightful)
The best developers spend their time herding other developers, in meetings, and jabbering with management. I've worked for lots of companies and for a lead developer to say they haven't touched a line of code in months sounds right, AI or not!
Its Best Developers Haven't Written a line (Score:2)
Nobody can properly review code on their phones (Score:2)
To properly review code, you need a full-sized screen. I don't believe this story, it's nothing but hype.
The next exploit is waiting... (Score:2)
Should I stock up on popcorn, for the next security exploit for Spotify? :)
I hope their "engineers" are spending time auditing the code that is generated by AI and used in production... otherwise, this could be very career limiting for them.
The Best Developers Trying To Fix the AI-Slop (Score:2)
Just like my manager showing my boss how AI can write code by demonstrating a 'SELECT * FROM table LIMIT 10;' query.
They're Doomed (Score:2)
I've been using AI to write code recently. I figured I should give it a go.
Not reviewing and understanding and editing the output code is a recipe for disaster.
For example, in code for a cryptographic hash function there are padding rules to bring the data size to a multiple of the block size and add a length. So for example with SHA-2, a minimum of 65 extra bits (one pad bit plus a 64-bit length). If your data length mod the block size is 65 bits less than the block size, then add the pad bit, put the length at the end of the block and fill
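For concreteness, here is a minimal sketch of that length-padding rule as SHA-256 applies it: one pad bit, zero fill, then the message length as a 64-bit big-endian integer, for a minimum of 65 extra bits. (This is MD strengthening as used by the SHA-2 family; SHA-3 proper uses a different pad10*1 scheme. My illustration only, and the exact thing an unreviewed AI edit could silently break.)

```python
def sha256_pad(msg: bytes) -> bytes:
    """Pad a message to a multiple of the 64-byte SHA-256 block size.

    Appends a single 1 bit (the 0x80 byte), then zero bits, then the
    original length in bits as a 64-bit big-endian integer, so the
    minimum overhead is 1 + 64 = 65 bits.
    """
    bit_len = len(msg) * 8
    padded = msg + b"\x80"
    # Zero-fill until exactly 8 bytes short of a block boundary.
    padded += b"\x00" * ((56 - len(padded)) % 64)
    padded += bit_len.to_bytes(8, "big")
    return padded


# The edge case described above: at 55 message bytes, the pad bit and
# the 64-bit length exactly fill the block; one byte more and the
# padding must spill into a second block.
assert len(sha256_pad(b"a" * 55)) == 64
assert len(sha256_pad(b"a" * 56)) == 128
```

Off-by-one bugs in exactly this boundary logic are the kind of thing that compiles, passes a happy-path test, and still breaks interoperability, which is why reviewing generated code matters.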