AI Programming

Stack Overflow 'Evolves', Previewing AI-Powered Answers and Chat Followups (stackoverflow.blog) 64

"Stack Overflow is adding artificial intelligence to its offerings," reports ZDNet (which notes traffic to the Q&A site has dropped 5% in the last year).

So in a video, Stack Overflow's CEO Prashanth Chandrasekar says that search and question-asking "will evolve to provide you with instant summarized solutions with citations to sources, aggregated by generative AI — plus the option to ask follow-up questions in a chat-like format."

The New Stack provides some context: As computer scientist Santiago Valdarrama remarked in a tweet, "I don't remember the last time I visited Stack Overflow. Why would I when tools like Copilot and ChatGPT answer my questions faster without making me feel bad for asking?" It's a problem Stack Overflow CEO Prashanth Chandrasekar acknowledges because, well, he encountered it too.

"When I first started using Stack Overflow, I remember my first experience was quite harsh, because I basically asked a fairly simple question, but the standard on the website is pretty high," Chandrasekar told The New Stack. "When ChatGPT came out, it was a lot easier for people to go and ask ChatGPT without anybody watching...."

But what may be of more interest to developers is that Stack Overflow is now offering an IDE (integrated development environment) extension for Visual Studio Code that will be powered by OverflowAI. This means that coders will be able to ask a conversational interface a question and find solutions from within the IDE.

Stack Overflow also is launching a GenAI Stack Exchange, where the community can post and share knowledge on prompt engineering, getting the most out of AI and similar topics.

And they're integrating it into other workflows as well. "Of course, AI isn't replacing humans any time soon," CEO Chandrasekar says in the video. "But it can help you draft a question to pose to our community..."

Signups for the OverflowAI preview are available now. "With your help, we'll be putting AI to work," CEO Chandrasekar says in the video.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Revek ( 133289 ) on Sunday August 13, 2023 @07:34PM (#63764804)
    I saw someone asking a question about a bug. All the jerks were not getting it and kept blasting the person who asked the question. The poster kept clarifying the problem, but all that brought about was more and more venom. The thread was a few weeks old, but I had had that same problem before and posted the answer, only to have the same jerks try to tell me that wouldn't work. Even after a day or so, when the person who posted the question came back to say my solution had fixed their problem, there were know-it-all jerks who refused to believe it. This is the only time I was ever involved in one of these types of posts, but I've read through hundreds just like them over the years. The bar at Stack Overflow isn't high at all. Quite the opposite.
    • Re: (Score:1, Troll)

      by braden87 ( 3027453 )
      if you have time to address other people's questions on Stack Overflow you could be actively improving your own skillset... Just saying, maybe the folks that reply most frequently aren't the best. Signed, a person on Slashdot who could be actively improving his own skillet.
      • > if you have time to address other people's questions on Stack Overflow you could be actively improving your own skillset... Like technical writing on a public website, where you will get feedback and possibly help people solve an issue they could not. Don't know why I wouldn't want someone like that on the payroll.
        • it's not technical writing, it's being a dick to folks new to programming in a cult-like way. It's clear less than 5% of answers are well formulated; mostly they're just telling the asker why the question shouldn't have been asked. Have you heard of capitalism? It's nice to help others for no reason, but it doesn't pay the bills.
    • Re: (Score:2, Insightful)

      by gweihir ( 88907 )

      Yep. One reason I am not active there. Too many big-ego-small-skills assholes. Hence absolutely no value in going there. Incidentally, not uncommon in tech communities if there are no barriers (i.e., a real STEM degree) to entry. Most people without a real STEM degree cannot understand the actual complexities of technology, and then mess it up because they think their tiny amount of knowledge makes them experts.

      • by AmiMoJo ( 196126 )

        Meh, I've known plenty of people with STEM degrees who vastly over-estimated their abilities too. In fact some of the textbook examples of bad answers on Stack Overflow are technically, academically correct, but practically hopeless.

        • by gweihir ( 88907 )

          That does not invalidate my statement in any way. Think about it. You only get the really clueless morons in force among people without a relevant degree. Sure, you can have some clueless morons with degrees, but the relative numbers are a lot lower.

      • by RedK ( 112790 )

        STEM degrees are just crappy credentialism. And outdated 3 years after the student is out of college with how fast tech is evolving and how low quality most college level courses are.

        Industry experience is a much better metric, but even then, you'll find a bunch of paper weights holding down chairs accruing it for years.

        Let's face it, a basement self taught dude who works as a botanist can be the guy who's the most right. You can flex all your paper and HR file length at him, at the end of the day, he's t

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I saw someone asking a question about a bug. All the jerks were not getting it and kept blasting the person who asked the question. The poster kept clarifying the problem, but all that brought about was more and more venom. The thread was a few weeks old, but I had had that same problem before and posted the answer, only to have the same jerks try to tell me that wouldn't work. Even after a day or so, when the person who posted the question came back to say my solution had fixed their problem, there were know-it-all jerks who refused to believe it. This is the only time I was ever involved in one of these types of posts, but I've read through hundreds just like them over the years. The bar at Stack Overflow isn't high at all. Quite the opposite.

      I remember during the early days of the pandemic having some time on my hands, so I created a filter on SO and went around answering questions that matched it. The whole scene was surreal. It seemed like there were some prolific users who became very upset when questions were not conducive to farming fast, easy points. Questions I understood clearly were treated to continuous badgering for information.

      I've long been fascinated by the high rate of questions useful to me sporting negative feedback and having been

    • by jythie ( 914043 )
      And now they will offer an automated pathological liar that has been trained on jerk content. What could go wrong?
    • by Tablizer ( 95088 )

      Mass-message websites cannot afford good vetting of moderators, so a form of group-think develops where like-minded people up-vote each other, kind of like the board of a corporation voting to give each other raises.

  • by QuietLagoon ( 813062 ) on Sunday August 13, 2023 @07:34PM (#63764806)
    ... why?
  • Not even AI will be able to figure out how to post an acceptable question to Stack Overflow without having to spend an afternoon jumping through hoops to demonstrate your worthiness. I think they've lost their minds.
    • +1

      I too have had that problem - I ask "what options are there to do X", and I get told "opinion based - this isn't a design site".

      I get the feeling that the moderators aren't very "uniform". That is, the one that closed my questions thinks the site is one thing, but another moderator thinks it's another. As such, if I get past the first, the second closes my question. If I ask it another way, the first closes it before the second gets to see it. I don't have an afternoon to prove my worthiness, so I'm just

  • by mbourgon ( 186257 ) on Sunday August 13, 2023 @08:30PM (#63764864) Homepage

    (disclaimer: I've used ChatGPT 4 times to try and solve a problem, after searching SO and not coming up with a solution).
    I wish StackOverflow the best for this. While I've had fantastic luck with SO over the years, you can absolutely have a bad experience, and can absolutely not get an answer. Maybe they'll manage to make it more useful.

    But man, I've tried ChatGPT. 4 times. On 3 of those I wound up going to SO and reposting my question, and got the solution I needed - the GPT answer was either wrong or actively bad ("the command would have deleted my VM" levels of bad). 1 time it worked, but those other 3 were terrifying, if only because I could see people using it and trusting it - it's convincing, even when wrong.

    • The AI will get better, while the SO community will die out. SO is panicking and rightfully so.
      • by gweihir ( 88907 ) on Sunday August 13, 2023 @09:37PM (#63764964)

        Does not look like it. General ChatAI will probably only get worse from here on because the training data gets poisoned now by AI output being in there. Specialized instances may get better, but that is not assured.

        • That does appear to be the paradox of the current AI model. It's trying to replace the need for humans to produce information with a system that depends on copying/training information generated by humans. If there aren't enough human-produced data for AI to copy/train, then it becomes ineffective. But if AI isn't replacing the need for humans to produce information, then what economic value does it actually have?

          They will either have to drastically change how they train AIs or the novelty will soon pass.

          • by gweihir ( 88907 )

            But if AI isn't replacing the need for humans to produce information, then what economic value does it actually have?

            It seems to be basically a "get-rich-quick" scheme. That the CEO of OpenAI is also active in the crapcoin space is pretty telling.
            That said, specialized limited versions of LLMs can likely replace humans to some degree. But general-context LLMs seem to be a dead end.

            They will either have to drastically change how they train AIs or the novelty will soon pass.

            I think for general-context LLMs that is basically economically infeasible. ChatGPT benefitted from (probably illegally) scraping a lot of the web, but that is bound to not work much longer because of poisoning by AI generated content and becau

            • by jythie ( 914043 )
              This, so many times this. Right now we have an exciting technology looking for a use, and hypesters trying to pump up interest so they can be on the next big thing that 'goes to the moon', then cash out. But I do not get the impression they actually have much of a plan outside getting rich quick.
            • by jvkjvk ( 102057 )

              >I think for general-context LLMs that is basically economically infeasible. ChatGPT benefitted from (probably illegally) scraping a lot of the web

              Why do you think that scraping publicly available information from the web would be illegal? To download copyrighted works from the web is not copyright infringement because (unless the site is hosting illegal works) the author authorized them to be there for that purpose.

              • by gweihir ( 88907 )

                Stuff published without a license is automatically copyrighted, and you are not allowed to just take it and resell it in any way. Training an LLM and then selling use of that LLM is copyright infringement and, because it is commercial, it is criminal copyright infringement. The only exception is if the material lacks originality. For stuff with a license, unless LLM training is explicitly allowed, the same applies.

                If they had done it for research, that would have fallen under an exception. But since they are

      • The model handles the grunt work, but humans still need to step in occasionally and fix things. There's a persistent need for human ingenuity to solve edge cases, and life is mostly edge cases. If we can combine LLMs and humans in a way that complements strengths and offsets weaknesses, it'll be a win.
        Humans are slow and can be offensive. AIs don't recognize their own knowledge gaps and confidently generate flawed solutions - though some humans do that too. But AIs can handle simple tasks well, contextual
      • > The AI will get better, while the SO community will die out.

        It's training on SO material. If people stop using SO, it will have less to train on.

        One of the biggest issues with LLMs is that they always sound convincing, even when wrong. An expert is going to notice it, but newbies coming to the bot will not.

  • by S_Stout ( 2725099 ) on Sunday August 13, 2023 @08:32PM (#63764870)
    Putting ChatGPT in an iframe will not save Stack Overflow.
  • now an AI entity / fake consciousness (I don't even know the correct term)... oh, INSTANCE, so now an AI instance is going to tell me that my question is terrible and to fuck right off from the site? I certainly look forward to being discouraged with the speed and accuracy of machines, with no humans involved.
    • It isn't even a fake consciousness or an instance. It is just an LLM. It predicts what the next word should be given an input, previous words it spat out and the training it's had on random bulk internet content.

      Nothing more.
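      [Ed. note: the "statistical next-word chooser" idea above can be sketched in a few lines. This toy uses bigram counts rather than a neural network, and the corpus and function names are made up for illustration - real LLMs work on learned token probabilities at vastly larger scale, but the sampling step is conceptually the same.]

```python
import random

# Toy "statistical next-word chooser": count which word follows which
# in a tiny corpus, then sample the next word from those counts.
corpus = "the cat sat on the mat the cat ran".split()

# Build bigram counts: word -> {following_word: count}
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, {})
    bigrams[prev][nxt] = bigrams[prev].get(nxt, 0) + 1

def next_word(prev, rng=random):
    """Pick the next word in proportion to how often it followed `prev`."""
    candidates = bigrams[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" about 2/3 of the time, "mat" about 1/3
```

      No understanding is involved: the model only reproduces the statistics of whatever text it was fed.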

      • Each token as it is being generated by the LLM visits the whole model. You could say it "touches" the whole training set, and in the case of very large models, the whole human culture.
        • Sure, yes, and it is still just an LLM. It has zero consciousness. It is still only picking words based on statistical analysis/unknown neural net magic.

          They train these things with randomly acquired bulk input, and they spit out stuff based on the stuff previously fed in. That's their goal, function, purpose, design. There is no thought, emotion, consciousness, decision making, etc.

          Visiting or touching or whatever you want to call it doesn't change how LLMs work. It's a statistical next word chooser. I

      • what happens if you have two LLMs with the same logic and training? Would you not have instance 1 and instance 2? Like there's a definition and then you instantiate and train it, like a class in OOP, no?
        • Depending on how the LLM is programmed you may not get the same response to the same input.

          If so then do you consider them instances of the same thing? It's all semantics at that point. I'd say no but can see how others say they would be for some reasonable definition of instance based solely on origin and design but I use a stricter definition.

          When I think instance, I'm thinking of something like a pool of load balanced web servers with the same content and server configuration. Always the same answer a

          • by jvkjvk ( 102057 )

            So, if I have a program and the only thing it does is print a random integer, and then create another copy, to you they are not instances of the same thing, because they will output different random numbers? Huh, that doesn't seem like a very good metric to me, honestly.

            • There are always edge cases. That's a trivial program suited for academic discussion not what happens in the real world of large scale AI.
              Most people would expect a computer to generate the same answer given the same question, unless the question was, "generate a random number" in which case they expect the opposite.
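              [Ed. note: the instance question above comes down to random state. A sketch, with a hypothetical `TinyModel` class standing in for an LLM: two copies can share identical "weights" (the OOP-style instantiation the poster describes) yet diverge on the same input, unless their random number generators are seeded identically.]

```python
import random

class TinyModel:
    """Two copies share the same 'weights' (a fixed word distribution),
    but each carries its own random state for sampling."""
    def __init__(self, seed=None):
        self.weights = {"yes": 0.7, "no": 0.3}   # identical "training"
        self.rng = random.Random(seed)           # per-instance sampler

    def answer(self):
        words = list(self.weights)
        probs = [self.weights[w] for w in words]
        return self.rng.choices(words, weights=probs)[0]

# Same seed: the two instances are deterministic and always agree.
a, b = TinyModel(seed=42), TinyModel(seed=42)
assert a.answer() == b.answer()

# Unseeded: same definition, same training, yet answers may diverge.
c, d = TinyModel(), TinyModel()
print(c.answer(), d.answer())
```

              Production systems add a further wrinkle: sampling "temperature" and batching effects mean even a fixed seed may not guarantee identical outputs across deployments.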

    • by jythie ( 914043 )
      More like a giant spreadsheet. The current wave of AI is a branch of statistics, not symbolic reasoning. It has become popular because it can be run on GPUs, which means it can be wrong really fast, but is good at giving you the answers you want. This is great when applied to things like recommendation systems where the whole purpose is to find things you'll like, but it is fundamentally flawed when it comes to giving accurate answers.
  • by Anubis IV ( 1279820 ) on Sunday August 13, 2023 @08:38PM (#63764888)

    AI is here to stay, but the current level of interest in it is a passing trend. ChatGPT usage has already dropped off a cliff. StackOverflow’s value is in high quality answers. Many of us are old enough to remember what it was like before. I don’t want to see a return to those days, but letting AI provide answers is a sure way to get there.

    • ChatGPT usage has already dropped off a cliff.

      That may be so, but they've convinced pretty much everyone to integrate the technology into their systems for some reason or another. Stack Overflow is just one of many examples. That, combined with Microsoft basically buying them, means that even if they fail they have already won.

      They probably have no idea whether they'll be able to move beyond a gimmick, but their sales/marketing team convinced everyone that matters to sign a contract before finding out.

      • Oh, it makes short-term business sense, sure. But it destroys the long-term value of the site. With Reddit and StackOverflow both destroying their long-term value that's derived from human-provided answers, I can't help but think that someone else will step up to fill that void.

  • by Anonymous Coward
    It includes the cynical, pretentious attitude that most StackOverflow responses do.
    • Hahaha yeah, it's gotta have that "Hey, I found a missing comma in your comment, let me be that piece-of-shit website that's going to leave you marked for life with a bad experience, and start telling the world how much of a fucking peon you are, and oh... you're just trying to learn? RTFM and ask a question that includes an answer from a bachelor's thesis; otherwise, what the hell are you doing here being ignorant and wanting to learn something at your pace or in your way?"

    • You know it's Stack Overflow when you prefer to watch a how-to video from India with a strong accent rather than wasting your time on S.O. getting preyed on.

  • by illogicalpremise ( 1720634 ) on Sunday August 13, 2023 @09:05PM (#63764910)

    The solution to low-quality questions isn't more low-quality answers.

    CEO is an idiot. SO's founding mission was very clearly to "help people help themselves". If you can't be bothered checking if your question has been answered before and ask it in a clear manner that shows you put EFFORT into it - why should anybody put effort into helping you? Why shouldn't people be upset that your laziness is wasting their time?

    If his question was so simple - why on earth was he asking it? The answer would have been on the first page of any decent SO or Google search had he bothered to check.

    SO's rules have always been very reasonable. The only reason stricter rules are sometimes needed is because of the people this CEO claims are the victims here. They don't want to even understand the issues. They think their own time is more precious than others'.

    It's not SO's job to compete with chatbots. SO's job is to give high-quality answers to difficult problems where the person asking has actually tried to understand what they want to achieve and the tools they are using to achieve it. It's not to do your homework or help you pass a job screening test.

    No, good sir! You are in fact the very problem you want to solve - you want instant gratification from other people and you don't actually care how you get it. No wonder you became a CEO you fucking human parasite.

    • If you ever used an answer from SO before writing this shit, you are in self-contradiction.
    • by Miamicanes ( 730264 ) on Sunday August 13, 2023 @11:17PM (#63765060)

      Part of the catch-22 with "asking a good question" on StackOverflow is that often, people asking "poor" questions REALLY need a more ephemeral site where they can ask questions when they just need a gentle nudge in the right direction... like, someone telling them the proper Google'able terminology to describe what it is that they're stuck on.

      StackOverflow also has never really solved its problem of, "Problem {x} HAS NO actual solution TODAY (but 7 years later, it might)... or has {this solution} today, but will require doing something completely different 5 years from now".

      Let's suppose you're having some problem related to... say... Android. A problem that might very well involve some radical change Google made to Gradle, Android Studio, or Android's API itself within the past year or two. Being a good community member, you search SO, and discover a question that's a few years old... with an accepted answer that subsequent changes by Google broke YEARS ago. At this point, you have no good options. If you post a new question, someone is going to kick you in the balls and close the question within minutes because "it's a duplicate of an existing question" (regardless of the fact that the original answer no longer works). But without posting that new question, nobody is ever going to BOTHER posting an updated answer.

      The problem is particularly painful with regard to anything Android-related, because Android's API and development tools are so volatile, the half-life of pretty much ANY answer (or online tutorial, or real book) is about 12-18 months before Google's cumulative changes break things badly enough to prevent it from working as-written, and 24-30 months until things break so badly, the very existence of that old tutorial almost does more harm than good. 5+ years down the road, Android's API and/or toolchain will have probably changed so much, you won't even be able to make sense of that old article/tutorial/book, because literally everything has changed.

      For a perfect example, just try finding out how to write your own IME/soft keyboard. Google's own IME example code hasn't been buildable for YEARS, because they flat-out abolished an entire API class it depends on (you can, of course, rip the class out of the AOSP source code... but if you don't already know that, you're basically fucked, and Google's own API docs are worthless). I mention this, because almost any question you could conceivably ask related to soft keyboards/IMEs and Android has theoretically been asked in the past... but pretty much every single question related to the topic has answers that are now somewhere between "incomplete", "not quite right", and "utterly and completely wrong". And this is just one of many, MANY aspects of Android development where SO has become a literal minefield of no-win situations.

      What makes the situation particularly awful is the fact that SO is one of the only things that ever MADE Android development semi-accessible to begin with. Android has always been volatile & had incomplete, broken documentation... but at least 10 years ago, SO was a safe, positive community space where people could guide each other through the minefield. Nowadays, it really isn't anymore.

      • by illogicalpremise ( 1720634 ) on Monday August 14, 2023 @02:31AM (#63765214)

        Generally a good question will acknowledge and link to existing answers and explain how the results they get are different from the results expected. The question would include relevant software version numbers so people who might answer or moderate have enough context.

        Bad questions are usually along the lines of "I tried nothing and nothing worked. I can't be bothered providing more details. Please write the code for me".

        The concept that people on SO are mean/evil seems to be coming from people who feel like they are owed answers to problems they can't or won't articulate or haven't adequately researched.

        What is being proposed here is that throwing AI at poorly understood/articulated questions will solve everything. I really don't see how that's helpful unless you want to spend days going down a rabbithole caused by nonsense answers generated out of code soup based on nonsense questions.

        • by RedK ( 112790 )

          > The concept that people on SO are mean/evil seems to be coming from people who feel like they are owed answers to problems they can't or won't articulate or haven't adequately researched.

          I never got that impression, until I read both your posts here.

          Figure that one out.

        • by jvkjvk ( 102057 )

          >The concept that people on SO are mean/evil seems to be coming from people who feel like they are owed answers to problems they can't or won't articulate or haven't adequately researched.

          No, the basic concept that people on SO are mean/evil comes from all the horrible interactions even reasonable questions face. There are tons of toxic people on SO. It's just a fact, and your hand waving about blaming the victims of these bullies is just off base.

      • by jeremyp ( 130771 )

        What you do is say "I tried this answer (with link) and it didn't work because x, y and z happened."

        • The problem is, SOME users with high-value reputations get to a point where they just start indiscriminately swatting down ANY question with a past answer, regardless of merit... and often, it's BLATANTLY obvious they didn't even bother to read the older question, let alone thoughtfully contemplate the current-validity of its answer. I think a few of them have literally written scripts to partially automate the process so they can be "first to pounce".

          One supremely annoying thing I've noticed... the users w

    • The solution to low-quality questions isn't more low-quality answers.

      CEO is an idiot. SO's founding mission was very clearly to "help people help themselves". If you can't be bothered checking if your question has been answered before and ask it in a clear manner that shows you put EFFORT into it - why should anybody put effort into helping you? Why shouldn't people be upset that your laziness is wasting their time?

      If his question was so simple - why on earth was he asking it? The answer would have been on the first page of any decent SO or Google search had he bothered to check.

      SO's rules have always been very reasonable. The only reason stricter rules are sometimes needed is because of the people this CEO claims are the victims here. They don't want to even understand the issues. They think their own time is more precious than others'.

      It's not SO's job to compete with chatbots. SO's job is to give high-quality answers to difficult problems where the person asking has actually tried to understand what they want to achieve and the tools they are using to achieve it. It's not to do your homework or help you pass a job screening test.

      No, good sir! You are in fact the very problem you want to solve - you want instant gratification from other people and you don't actually care how you get it. No wonder you became a CEO you fucking human parasite.

      Remember, the point is to help people, not to ensure that they are worthy of help.

      • Remember, the point is to help people, not to ensure that they are worthy of help.

        That might be the point of a charity or drug rehabilitation but SO isn't those things. It has clear expectations about what a quality question looks like so conversely an "unworthy question" is most definitely a thing.

        Pretty much all replies to my initial comment are based on the idea that every question is worthy. The underlying claim there is that if 5 people say it's not worthy it must be because they are all mean people.

        I'm sure everyone can anecdotally point to a "reasonable" question that was unreason

        • You're confusing a practical rule with a moral stance.

          One of the signs of someone who is unfamiliar with a topic is that even with a lot of work they can still have trouble asking a clear question. There's basic stuff they misunderstand (but are unaware of) so they say nonsensical things, and they overlook "obvious" solutions because they're not obvious to them.

          The trouble with the SO standards is they filter out a lot of people who put in the effort and are trying hard but don't have the knowledge base to

          • You're confusing a practical rule with a moral stance.

            Not at all. I said moderators are well within their rights to close bad questions. I'm not the one claiming that's bullying.

            One of the signs of someone who is unfamiliar with a topic is that even with a lot of work they can still have trouble asking a clear question. There's basic stuff they misunderstand (but are unaware of) so they say nonsensical things, and they overlook "obvious" solutions because they're not obvious to them.

            The standards about what make a good question do not require the poster to know good solutions. However they are expected to show what they have tried and clearly describe the problem they are trying to solve and the requirements/limitations any answers should meet. If you can't clearly formulate your problem how can it be answered? A question like that serves no purpose on SO so they

  • Job security for those who ask questions on Stack to occasionally help them get around something. That is, as opposed to those who profess to be developers to get a job but basically ask Stack contributors to give them much of what they have to write, and cut and paste the rest. Given some of the bullshit that ChatGPT spits out, companies might start paying the price for offshoring for cheap labour instead of hiring North Americans. At least I hope it works out that way.

  • I'm absolutely not an expert in the field, but there's a logical issue here and I'm baffled that these people can't comprehend it. Sure, we currently have generative AI that sometimes gives an insightful answer (and also sometimes give a dreadfully wrong answer with the confidence of a politician and the flow of a singer), but these come from existing data.
    While you can probably "enhance" SO with a better UI using generative AI, you'll end up with two big issues :
    • - it will be "sometimes right, sometimes wrong"
    • - it will not "improve" beyond today if people are discouraged from providing

    Meaning you need human input in it. And who's going to care about the "traditional" SO when there's this convenient "I can stop thinking and get an output to any vague questions I have" option.
    Sure, a big part of the SO community is a bit rough, to say the least. But when I have to go there, very often I end up on a well-formed question with pertinent answers at the top, thanks to the score system. Sometimes it's wrong too, usually indicated by downvotes.
    It might not be a perfect system, but it works for all intents and purposes.
    If anything, SO should have looked into how they could be part of the big AI situation as a provider of human labor, which is still a big requirement for these systems to somewhat operate.

  • "When I first started using Stack Overflow, I remember my first experience was quite harsh, because I basically asked a fairly simple question, but the standard on the website is pretty high"

    When I first started using SO, circa 2016, I found it very useful and quite friendly.

    Today practically every question is closed for some tedious reason like not posting a self-contained app from your million lines of code.

    This is not a problem that it always had, this is recent, the last two years.

    Kick the jerks off, pr

  • My recent experience in the domain of probability and statistics for capacity planning:
    • * The correct answer was in two 10+ year old items in stats.stackexchange.com.
    • * Asked three times with slightly different wording, each time ChatGPT summarized the problem correctly but then provided an incorrect derivation and result, with a different result each time.
