AI Programming

GitHub Copilot Labs Adds Photoshop-Style 'Brushes' for ML-Powered Code Modifying (githubnext.com)

"Can editing code feel more tactile, like painting with Photoshop brushes?"

Researchers at GitHub Next asked that question this week — and then supplied the answer. "We added a toolbox of brushes to our Copilot Labs Visual Studio Code extension that can modify your code.... Just select a few lines, choose your brush, and see your code update."

The tool's web page includes interactive before-and-after examples demonstrating:
  • Add Types brush
  • Fix Bugs brush
  • Add Debugging Statements brush
  • Make More Readable brush

And last month Microsoft's principal program manager for browser tools shared an animated GIF showing all the brushes in action.

"In the future, we're interested in adding more useful brushes, as well as letting developers store their own custom brushes," adds this week's announcement. "As we explore enhancing developers' workflows with Machine Learning, we're focused on how to empower developers, instead of automating them. This was one of many explorations we have in the works along those lines."

It's ultimately grafting an incredibly easy interface onto "ML-powered code modification", writes Visual Studio Magazine, noting that "The bug-fixing brush, for example, can fix a simple typo, changing a variable name from the incorrect 'low' to the correct 'lo'....

"All of the above brushes and a few others have been added to the Copilot Labs brushes toolbox, which is available for anyone with a GitHub Copilot license, costing $10 per month or $100 per year.... At the time of this writing, the extension has been installed 131,369 times, earning a perfect 5.0 rating from six reviewers."
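The 'low' vs. 'lo' fix Visual Studio Magazine describes is the classic binary-search variable slip. As a hedged sketch (the function below is illustrative, not taken from the article), this is the kind of one-token bug such a brush would target:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # a typo like 'low = mid + 1' here is exactly
        else:              # the one-token bug a fix-bugs brush would catch
            hi = mid - 1
    return -1
```

Since `low` was never defined, the typo'd version raises a NameError at runtime rather than failing silently, which is the best case for this class of bug.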


This discussion has been archived. No new comments can be posted.


  • by Arethan ( 223197 ) on Sunday January 15, 2023 @04:40AM (#63209784) Journal

    I saw this a few days ago, chuckled a bit, and moved on with my day.
    But, honestly, I'm curious. Has anyone here tried it out yet?
    Do you like the experience, or hate it, and why?

    • by narcc ( 412956 )

      Laughing is absolutely the right reaction. This is silly nonsense. Heaven help anyone who thinks this, or anything like it, is a serious tool.

      • Yup. I want a compiler to notify me about a (possible) problem with my code so I can sort it out, not a Chaos Monkey to randomly change it to whatever some GNN has told it would look good.
        • by dknj ( 441802 )

          Chaos Monkey

          Interesting way to spell Jr Dev

If you are a dev for a project, you just do that. Browser extensions? You may be up to date with the browser changes, but someone else does that, and you assume the platform is handled somewhere..... Which is why I think ML and AI chat is handled wrong. It's by design presentation/application-layer data types pushed to the end user. At that point nobody can really complain about what interfaces get around between userlands....

        • Yup. I want a compiler to notify me about a (possible) problem with my code so I can sort it out, not a Chaos Monkey to randomly change it to whatever some GNN has told it would look good.

          Chaos Monkey? Did you mean Code Monkey [youtube.com]?

    • I can see trying this on code as long as you do a diff on the results to see if it really caught a bug, or just wanted to reformat the code to the brush-creator's preferred style. The comment adding feature might be nice, but plenty of "beautify" scripts can do the same thing. Adding more robust error handling was neat for those that don't do that as a matter of course.

      I guess it is about as useful as having a co-worker review your code and offer comments.

I installed it in VS Code today and I'm impressed with the brushes feature. You can highlight a section of code and 'make robust', and sure enough it adds some useful hardening. The 'clean' function will refactor things nicely. I recommend people give it a shot.

  • by greytree ( 7124971 ) on Sunday January 15, 2023 @04:52AM (#63209810)
    Sorry boss, not coming to work today, my mood is wrong for API creation.
    I need to head off on an art-donor-funded retreat in Vermont, to reconnect with my muse with other gentle, like-minded C++ developers.
    • by narcc ( 412956 )

      Programming is a skill. It's an art, not engineering. Maybe you do need a Vermont retreat...

  • What about (Score:3, Insightful)

    by dknj ( 441802 ) on Sunday January 15, 2023 @06:09AM (#63209878) Journal

    'Removing open source code' brush?

    • Re: (Score:2, Interesting)

      by drinkypoo ( 153816 )

      How about the "properly attribute this source code" brush? Because unlike stable diffusion or midjourney or what have you, copilot actually reproduces recognizable elements of copyrighted works used to produce the model.

      • by narcc ( 412956 )

        I've explained this one before. The only way this can happen is if a particular bit of code is included in the training data many, many, times. Like a ridiculous number of times. This is why you find common license text pretty often, but very little code that could be considered a copyright violation. The only example I've seen, aside from license text, was the 'fast inverse square root' trick from Quake.

        That is a problem with the training data and can be fixed.

        Don't get me wrong, I'm not defending this

        • I get why it does what it does (in broad strokes anyway) but when it's producing byte for byte what went into it, then it might meet the requirement for a recognizable element. That's not what happens with the images because they can be surprisingly different and still be very similar.

          • by narcc ( 412956 )

            It should be possible to avoid the copyright problem by putting some extra effort into the training data.

Text generation and image generation are very different, but you can still run into 'similar' problems. For example, even the smaller text-to-image things will give you a Mona Lisa good enough to make a copyright claim. The reason this happens is the same reason you get FISR in text-to-code generators: the source was included in the training data an unreasonable number of times. You won't find too

  • by Tom ( 822 ) on Sunday January 15, 2023 @06:44AM (#63209914) Homepage Journal

    There's quite some hubris in many of the comments posted.

    Should we really assume so quickly that a well-trained AI is worse than a few junior devs? Given the code I've seen over the years (my job is in security, so I look from a bugs-and-exploits perspective, and sometimes readability/comprehensibility - but not from, say, performance) ... how to phrase that diplomatically? I would say that a lot of human-written code definitely has room for improvement. Sometimes quite a lot. Sometimes essentially the entire room.

    An AI (again, assuming it's well-trained) should be able to at least avoid the most common issues, and possibly be much better at writing code that follows a given guideline.

    I would still want a senior dev to do a code review. But he should do that on junior dev written code as well, so not much of a difference.

    I DEFINITELY think that within the next 1-2 years most of the code that students write for exams and exercises at university will be AI generated...

    • Should we really assume so quickly that a well-trained AI is worse than a few junior devs?

      Should you really assume that an AI owned by Microsoft is well-trained? History tells another story.

    • Should we really assume so quickly that a well-trained AI is worse than a few junior devs?

      If it's not, they need to be trained and the fastest way to train them is to force them to write their own code and tests.

      I would still want a senior dev to do a code review.

      Code review should be seen as an extra layer of bug avoidance, not as the primary defense against bugs. Because it won't work for that. A code review that verifies there are no bugs takes just as long as writing it in the first place.

      • by Tom ( 822 )

        If it's not, they need to be trained and the fastest way to train them is to force them to write their own code and tests.

        That's actually an interesting idea there. Let humans write the tests and let the AI write code that satisfies the tests. If your tests are well-written, that should work, right? Provided TDD is on the right track...
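That division of labor can be sketched in a few lines: the human-written tests are the spec, and a generated implementation is accepted only if it passes them. (A toy illustration; `slugify` is a hypothetical task, not something from this thread.)

```python
# Human-written spec: the tests define the contract the code must satisfy.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  trim me  ") == "trim-me"

# Machine-written candidate: only accepted into the codebase if the
# human-authored tests above pass.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()
```

The obvious caveat is the one TDD has always had: the generated code is only as correct as the tests are complete.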

    • Should we really assume so quickly that a well-trained AI is worse than a few junior devs?

      Yes.

      The point of decent junior devs isn't that they produce good code, but that they can, when competently led, gradually learn to produce good code. The only thing this NN can produce is code snippets that superficially look like they might do what you want. IMO that is worse than nothing, because

      • finding all subtle bugs in such code often requires more effort than writing it from scratch
      • it inhibits learning in people who use it
      • this NN is fundamentally incapable of achieving any deeper understanding of what
      • by Tom ( 822 )

        Writing code that actually solves your problem requires understanding the problem and the context. Neural networks don't merely lack this capability today, they probably aren't even going the right way.

        I find myself writing "boring" code quite often. The core problem is interesting, but once you've got it, there's also code all around it. AI can be useful to do that. I'm fairly sure this'll eventually be part of IDEs and you get a code generator that you can tell what to generate.

It won't solve the core problem, but why not use it to, say, generate the code for command-line parameters, with error catching and somewhat useful user messages?

    • by godrik ( 1287354 )

      I actually think this can be a good tool. Something like:
      "I got the logic correct, please add the exception trapping" seems perfectly fine for copilot. Or things like "add the doxygen documentation for the function and pre-populate it."
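Pre-populating documentation stubs is exactly the kind of well-bounded task this suggests. A hedged sketch of what a generated stub might look like, using a Python docstring as the analogue of the Doxygen header (the function itself is illustrative):

```python
def mean(xs):
    """Compute the arithmetic mean of a sample.

    Args:
        xs: A non-empty sequence of numbers.

    Returns:
        The arithmetic mean of xs as a float.

    Raises:
        ValueError: If xs is empty.
    """
    if not xs:
        raise ValueError("empty sample")
    return sum(xs) / len(xs)
```

The structured sections (args, return, exceptions) are mechanical enough to generate from the signature and body, leaving the human to verify the prose.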

    • by narcc ( 412956 )

      Should we really assume so quickly that a well-trained AI is worse than a few junior devs?

      Yes.

      How do you think programs like this work? There is no analysis happening here. This is, truly, code generated on the basis of similarity. It's not that dissimilar from predictive text on your phone, as it happens. It's nothing short of a miracle this ever generates functional code. That's certainly a testament to the power of large amounts of data, but you can't trust any of it for a second.

      An AI (again, assuming it's well-trained) should be able to at least avoid the most common issues, and possibly be much better at writing code that follows a given guideline.

      No.

      Again, there is no understanding here. No analysis. You seem to think an AI is something like a robot per

      • by Tom ( 822 )

        Again, there is no understanding here. No analysis. You seem to think an AI is something like a robot person. That is simply not the case. When I compared this to predictive text on your phone, that wasn't hyperbole. That's really how it works. The biggest difference is that it uses a RNN with an obscene number of parameters.

        I can tell a junior to follow a style guide or use a particular set of templates. That is well-beyond the capabilities of an AI like this.

        I'm somewhat aware of how AI works, there's a paper or two on AI with my name on it out there.

And no, this is not beyond current AI. We can already tell AI to follow certain styles, the most obvious example being Stable Diffusion. I can TODAY tell ChatGPT to write a poem with a specific rhyme scheme. I can also tell it to write "hello world" in a variety of bracket styles, for example. It understands those things.

        Students copy/pasting the assignment text should get the exact same output. That tends to get noticed immediately

        Oh come on. You so old you don't remember? We were lazy as well, but also inventive enough to n

        • by narcc ( 412956 )

          this is not beyond current AI.

          You can disagree if you like, but you're absolutely wrong here. I've called these text-to-image and text-to-code toys "parlor tricks" for a reason. They're doing a lot less than you think.

          It understands those things.

          No, no it doesn't. That's beyond absurd and reveals a deep ignorance about the technology. Oh, I skimmed over the paper with your name on it. Are you really going to claim expertise on that basis?

          You so old you don't remember?

          If you were to type to cheat on your schoolwork, that does explain quite a bit. Just don't assume that the rest of us were

          • by Tom ( 822 )

            You can disagree if you like, but you're absolutely wrong here. I've called these text-to-image and text-to-code toys "parlor tricks" for a reason. They're doing a lot less than you think.

And also a lot more than what was thought possible a decade ago. We've been seeing rapid progress. "AI" might be a loaded term - there's no self-awareness or any other component of true artificial intelligence there, and speaking as someone who wrote a chatbot in the early 90s, there's a lot that isn't much advanced - but there is also a lot that is.

            No, no it doesn't. That's beyond absurd and reveals a deep ignorance about the technology. Oh, I skimmed over the paper with your name on it. Are you really going to claim expertise on that basis?

            We're on /. here, so take "understand" as a simplified term of a complex whole. The point was that if I have a code-generating AI then yes, I absolutely can

            • by narcc ( 412956 )

              yes, I absolutely can train that AI to follow a certain style guide.

              Making a much weaker claim, I see. (I'm giving you a lot of leeway here as this isn't your field.) Did you think I would forget that you were claiming that toy code generators could replace junior developers? Let's review:

              Should we really assume so quickly that a well-trained AI is worse than a few junior devs? [...] An AI (again, assuming it's well-trained) should be able to at least avoid the most common issues, and possibly be much better at writing code that follows a given guideline.

              I can tell a junior to follow a style guide or use a particular set of templates. That is well-beyond the capabilities of an AI like this.

              this is not beyond current AI.

              Now that you've actually looked into your silly nonsense, you probably now know that your earlier claims are laughably absurd. Do you still want to pretend that you were making a significantly weaker claim? The truth is displayed above, for all to see. Show some character. Admit your m

              • by Tom ( 822 )

                Making a much weaker claim, I see.

Trying to win a dialog with silly games, I see. I was answering a specific point, as the part you quoted clearly shows.

                I suppose you can at least accept, now, that silly toy code generators are not going to replace junior developers.

Replacing devs was not my claim. You're mixing it up with something that you read elsewhere. At least look at what you claim. I asked if we should assume so quickly that AI is worse than junior devs. That's not the same thing.

                I've given junior developers tasks that - and I've actually tried - chatGPT can solve to a surprising degree. Not perfect, but again I would expect that I need to fix a ju

                • by narcc ( 412956 )

How sad, Tom. This is a rhetorical game. This is me calling you out on the fact that you haven't thought through your bullshit. The only one distracting things here is you. It's pathetic.

                  Replacing devs was not my claim.

                  Lies. This is what you wrote:

                  Should we really assume so quickly that a well-trained AI is worse than a few junior devs?[...] I would still want a senior dev to do a code review. But he should do that on junior dev written code as well, so not much of a difference.

                  You make the same idiotic claim again:

                  I've given junior developers tasks that - and I've actually tried - chatGPT can solve to a surprising degree. Not perfect, but again I would expect that I need to fix a junior dev's code as well. And it can handle stuff like "that's nice but, please change ..."

                  Sorry, Tom. Between your incredible dishonesty and shocking ignorance, I'm done with this non-conversation. You're obviously not interested in reality, only spreading your own bullshit.

  • installed 131,369 times, earning a perfect 5.0 rating from six reviewers.

I'm more interested in the opinion of the other 131,363 users.

Because if it was any good at all, a lot more people would have reviewed it. In fact, if it was mediocre or bad, more people would have reviewed it too.

    What's really concerning is that it might be so shit that people try it and simply forget it immediately because it's just not worth a minute of their time. Either that or bad reviews are filtered out by Microsoft, which wouldn't surprise me one bit.

    Not to mention, only 5 star ratings screams fak

  • Other than the fancier interface, this isn't much different than what ReSharper (mainly for C#) offered. I found it to be a handy tool for "fixing" previous coders' ugly coding (i.e. "if (boolean_var == true) {blah}". Of course it is important to carefully check what it is doing, in case the original programmer was doing something sneaky.
    • Once, years ago, I was writing a small program for myself that was mostly meant to make sure I understood the algorithm. I don't remember why, but I ended up reversing the standard definitions of true and false because it made the program's logic look better. No comment for it, but there would have been if I ever expected anybody else to have to read it. I can just imagine what this tool would have done with it.
  • by Required Snark ( 1702878 ) on Sunday January 15, 2023 @08:35AM (#63210022)
    Now imagine that you are on the witness stand because your company is being sued. The software product failed and a lot of money was lost or someone got injured and it ended up in court. The plaintiff attorney is asking you about coding practice, and you describe how you used an AI paintbrush to fix the code. You are asked if you understand exactly what was fixed and how you know that the resulting code was correct. What do you say, given the context that the result was so bad it ended up in court?

    The opposing attorney is going to paint you as a lazy incompetent fool who takes the easy way out and has no professional standards. Are you going to say that it's OK because you trusted Microsoft? Good luck with that. You can be sure that when you downloaded the software you agreed to terms and conditions that completely absolved Microsoft from any responsibility. If a person from Microsoft is asked to testify they will blame you as well and say you misused their system.

    As I said, good luck with that.

    • You can easily see what code was modified and decide if you like it. And in your example, it would surely have needed to pass the test suite which is how you knew it was correct.

      • Don't bet on it. I remember once attending a presentation where we were shown some new software developed in-house that we were going to have to use. Somebody from the development team presented it to us and a suit and the rest of us were there to listen, not to ask questions. Several times during the presentation, the suit asked about the lack of error checking or other common sense protections and the presenter always replied, "In a perfect world people wouldn't make that kind of mistake or they'd spot
        • These days it is getting to the point where you can tell the AI code generator to write a test suite for that software, and perpetually be on the lookout for bugs in any code you write. I experimented with the Copilot Lab 'brushes' yesterday on some existing code and it was really pretty amazing.

  • Sure, this laser-printed stuff looks kind of OK, but think about kerning. Think about the huge library of custom fonts we have on our photo-typesetter disks. Think about the low quality of laser-printed output compared to our 1800dpi typesetter paper. It's just not viable.

    Developers: history doesn't repeat, but it rhymes

You're comparing low-resolution laser printers to high-resolution laser printers. That's fine I guess, but obvious. But this has nothing to do with kerning (that can be done in any good typesetting software, including software for either macOS or Windows) and also nothing to do with Macs. You can connect a Mac or Windows PC to a high-resolution laser printer just fine. And you can get any typefaces you want for either Mac or Windows as well. I know: I've done it.
      • You're not getting the point.

        The first paragraph was what I heard from graphic arts pros in the early 90s.

        A train is coming. A lot of the above developers are standing on the tracks, pointing at it and complaining that "that's NOT how you make a train", as it approaches.

        History doesn't repeat. It rhymes. AI code generation rhymes with desktop publishing. And with transitioning from print to web.

        • by Kremmy ( 793693 )
          What it truly rhymes with is every story of our technological knowledge being lost as new ways of doing things let you skip the groundwork you needed to do the thing before. This isn't going to be something that improves the quality of the end product, though, this is going to be something that scrambles your eggs for you so you don't have to.
        • Then your point wasn't clear. You could have qualified it by saying something to the effect that this is how professional typesetters used to think and that it has an analog with current programmers.
  • by Whatsmynickname ( 557867 ) on Sunday January 15, 2023 @11:35AM (#63210252)
Yeah, I look at somebody's code I have to maintain and its structure reminds me of M.C. Escher's "Relativity" or Salvador Dalí's "The Persistence of Memory".
    • by niver ( 925866 )
      You're lucky. The code I have to maintain looks like Andy Warhol's works.
Yeah, I look at somebody's code I have to maintain and its structure reminds me of M.C. Escher's "Relativity" or Salvador Dalí's "The Persistence of Memory".

      I like to think of myself as a modern day Jackson Pollock.

Tools that help you format code, enforce typing, and even find and suggest fixes for bugs have been around forever. And they can be useful, they're fine.

Just now it says ML and pretends it's Photoshop. This was inevitable once Microsoft bought GitHub; the giant needs to do a bunch of nonsense no one wants to remind you about its other nonsense. We see you, Visual Studio, you are relevant.

    Basically, vim users are still more productive than Visual Studio users, and I would argue, better human beings.

    Fight me.

  • I'd imagine often you'd want to apply most of these "brushes" in tandem. In other words, a single button that simply reads, "Make Code As Perfect As Possible".
  • What about a code version of the "milk" brush [artstation.com]?
