GitHub Announces Its 'Refounding' on Copilot, Including an AI-Powered 'Copilot Chat' Assistant (github.blog) 33
This week GitHub announced the approaching general availability of the GPT-4-powered GitHub Copilot Chat in December "as part of your existing GitHub Copilot subscription" (and "available at no cost to verified teachers, students, and maintainers of popular open source projects.")
And this "code-aware guidance and code generation" will also be integrated directly into github.com, "so developers can dig into code, pull requests, documentation, and general coding questions with Copilot Chat providing suggestions, summaries, analysis, and answers." With GitHub Copilot Chat we're enabling the rise of natural language as the new universal programming language for every developer on the planet. Whether it's finding an error, writing unit tests, or helping debug code, Copilot Chat is your AI companion through it all, allowing you to write and understand code using whatever language you speak...
Copilot Chat uses your code as context, and is able to explain complex concepts, suggest code based on your open files and windows, help detect security vulnerabilities, and help with finding and fixing errors in code, terminal, and debugger...
With the new inline Copilot Chat, developers can chat about specific lines of code, directly within the flow of their code and editor.
InfoWorld notes it will chat in "whatever language a developer speaks." (And that Copilot Chat will also be available in GitHub's mobile app.) But why wait until December? GitHub's blog post says that Copilot Chat "will come to the JetBrains suite of IDEs, available in preview today."
GitHub also plans to introduce "slash commands and context variables" for GitHub Copilot, "so fixing or improving code is as simple as entering /fix and generating tests now starts with /tests."
"With Copilot in the code editor, in the CLI, and now Copilot Chat on github.com and in our mobile app, we are making Copilot ubiquitous throughout the software development lifecycle and always available in all of GitHub's surface areas..."
CNBC adds that "Microsoft-owned GitHub" also plans to introduce "a more expensive Copilot assistant" in February "for developers inside companies that can explain and provide recommendations about internal source code."
Wednesday's blog post announcing these updates was written by GitHub's CEO, who seemed to be predicting an evolutionary leap into a new future. "Just as GitHub was founded on Git, today we are re-founded on Copilot." He promised they'd built on their vision of a future "where AI infuses every step of the developer lifecycle":

Open source and Git have fundamentally transformed how we build software. It is now evident that AI is ushering in the same sweeping change, and at an exponential pace... We are certain this foundational transformation of the GitHub platform, and categorically new way of software development, is necessary in a world dependent on software. Every day, the world's developers balance an unsustainable demand to both modernize the legacy code of yesterday and build our digital tomorrow. It is our guiding conviction to make it easier for developers to do it all, from the creative spark to the commit, pull request, code review, and deploy — and to do it all with GitHub Copilot deeply integrated into the developer experience.
And if you're worried about the security of AI-generated code... Today, GitHub Copilot applies an LLM-based vulnerability prevention system that blocks insecure coding patterns in real-time to make GitHub Copilot's suggestions more secure. Our model targets the most common vulnerable coding patterns, including hardcoded credentials, SQL injections, and path injections. GitHub Copilot Chat can also help identify security vulnerabilities in the IDE, explain the mechanics of a vulnerability with its natural language capabilities, and suggest a specific fix for the highlighted code.
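As a concrete (and purely illustrative) sketch of the SQL-injection pattern such a filter targets — the function names here are mine, not GitHub's — compare string interpolation with a parameterized query using Python's sqlite3 module:

```python
import sqlite3

# Toy in-memory database to demonstrate the two query styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Insecure: f-string interpolation lets crafted input rewrite the query.
    # This is the sort of pattern a vulnerability filter is meant to flag.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Safe: the ? placeholder treats the input strictly as data.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# The classic payload matches every row via the unsafe path, none via the safe one.
payload = "' OR '1'='1"
```

The unsafe version returns every user's role for that payload, while the parameterized version returns nothing, since no user is literally named `' OR '1'='1`.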
But for Enterprise accounts paying for GitHub Advanced Security, there's also an upgrade coming: "new AI-powered application security testing features designed to detect and remediate vulnerabilities and secrets in your code." (It's already available in preview mode.)
GitHub even announced plans for a new AI assistant in 2024 that generates a step-by-step plan for responding to GitHub issues. (GitHub describes it as "like a pair programming session with a partner that knows about every inch of the project, and can follow your lead to make repository-wide changes from the issue to the pull request with the power of AI.")
CNBC notes that AI-powered coding assistants "are still nascent, though, with less than 10% enterprise adoption, according to Gartner, a technology industry research firm."
But last month Microsoft CEO Satya Nadella told analysts GitHub Copilot already had one million paying users...
And GitHub's blog post concludes, "And we're just getting started."
And this "code-aware guidance and code generation" will also be integrated directly into github.com, "so developers can dig into code, pull requests, documentation, and general coding questions with Copilot Chat providing suggestions, summaries, analysis, and answers." With GitHub Copilot Chat we're enabling the rise of natural language as the new universal programming language for every developer on the planet. Whether it's finding an error, writing unit tests, or helping debug code, Copilot Chat is your AI companion through it all, allowing you to write and understand code using whatever language you speak...
Copilot Chat uses your code as context, and is able to explain complex concepts, suggest code based on your open files and windows, help detect security vulnerabilities, and help with finding and fixing errors in code, terminal, and debugger...
With the new inline Copilot Chat, developers can chat about specific lines of code, directly within the flow of their code and editor.
InfoWorld notes it will chat in "whatever language a developer speaks." (And that Copilot Chat will also be available in GitHub's mobile app.) But why wait until December? GitHub's blog post says that Copilot Chat "will come to the JetBrains suite of IDEs, available in preview today."
GitHub also plans to introduce "slash commands and context variables" for GitHub Copilot, "so fixing or improving code is as simple as entering /fix and generating tests now starts with /tests."
"With Copilot in the code editor, in the CLI, and now Copilot Chat on github.com and in our mobile app, we are making Copilot ubiquitous throughout the software development lifecycle and always available in all of GitHub's surface areas..."
CNBC adds that "Microsoft-owned GitHub" also plans to introduce "a more expensive Copilot assistant" in February "for developers inside companies that can explain and provide recommendations about internal source code."
Wednesday's blog post announcing these updates was written by GitHub's CEO, who seemed to be predicting an evolutionary leap into a new future. "Just as GitHub was founded on Git, today we are re-founded on Copilot." He promised they'd built on their vision of a future "where AI infuses every step of the developer lifecycle." Open source and Git have fundamentally transformed how we build software. It is now evident that AI is ushering in the same sweeping change, and at an exponential pace... We are certain this foundational transformation of the GitHub platform, and categorically new way of software development, is necessary in a world dependent on software. Every day, the world's developers balance an unsustainable demand to both modernize the legacy code of yesterday and build our digital tomorrow. It is our guiding conviction to make it easier for developers to do it all, from the creative spark to the commit, pull request, code review, and deploy — and to do it all with GitHub Copilot deeply integrated into the developer experience.
And if you're worried about the security of AI-generated code... Today, GitHub Copilot applies an LLM-based vulnerability prevention system that blocks insecure coding patterns in real-time to make GitHub Copilot's suggestions more secure. Our model targets the most common vulnerable coding patterns, including hardcoded credentials, SQL injections, and path injections. GitHub Copilot Chat can also help identify security vulnerabilities in the IDE, explain the mechanics of a vulnerability with its natural language capabilities, and suggest a specific fix for the highlighted code.
But for Enterprise accounts paying for GitHub Advanced Security, there's also an upgrade coming: "new AI-powered application security testing features designed to detect and remediate vulnerabilities and secrets in your code." (It's already available in preview mode.)
GitHub even announced plans for a new AI assistant in 2024 that generates a step-by-step plan for responding to GitHub issues. (GitHub describes it as "like a pair programming session with a partner that knows about every inch of the project, and can follow your lead to make repository-wide changes from the issue to the pull request with the power of AI.")
CNBC notes that AI-powered coding assistants "are still nascent, though, with less than 10% enterprise adoption, according to Gartner, a technology industry research firm."
But last month Microsoft CEO Satya Nadella told analysts GitHub Copilot already had one million paying users...
And GitHub's blog post concludes, "And we're just getting started."
I felt a great disturbance in the Force... (Score:5, Insightful)
"like a pair programming session with a partner that knows about every inch of the project, and can follow your lead to make repository-wide changes from the issue to the pull request with the power of AI."
Bullsh*t. And the problem is that the staff who understand this and how generative AI works will generally be using CoPilot appropriately (small, repetitive blocks of code that are NOT specific to your project's requirements), and are now going to have to be reviewing and debugging even more "close enough" crapcode churned out by an LLM that almost certainly didn't have your requirements in its training set and is doing its closest approximation based on things that approximately match your requirements. Wwweeeeeehhhhhhh!
I really wish there was a way to clearly flag every line of code (even edited) that was created by generative AI. Again, I don't have a problem with it in general but I do want to review it and make sure it's doing exactly what is intended (and handling cases that were not intended).
Re: (Score:3)
Not terror but horror. I am just waiting for security bugs that transcend languages because automated crapcoding put the same stupid mistake in there everywhere, with no senior developer to catch things. Bonus points for making the mistakes really non-obvious, so not even careful review helps. I also expect there are already efforts underway to seed the artificial morons with things like these.
Re: (Score:2)
I also expect there are already efforts underway to seed the artificial morons with things like these.
Not even needed.
The primary source for example source code to train the LLMs on is forum postings such as stackoverflow and other github repositories.
Forum postings answer a specific question with code illustrating that specific issue - with no regard to side-effects or context, and (unless it was part of the question) typically without performance or security considerations, often not even ones as simple as input validation.
Github, meanwhile, has 100 abandoned pet projects for every popular repository, and ton
Re: (Score:2)
I also expect there are already efforts underway to seed the artificial morons with things like these.
Not even needed.
Agreed. It would make finding the seeded backdoors cheaper and easier though. On the other hand, using the crapcode that is out there makes it a lot harder to distinguish between "designed backdoor" and incompetent coding.
Anyways, my trust in the security of AI assisted/generated code is negative. What I do not get is why people jump on every hype without even a thought to possible unintended consequences. Are people just not smart enough to see obvious stuff like this?
Re: (Score:2)
Anyways, my trust in the security of AI assisted/generated code is negative. What I do not get is why people jump on every hype without even a thought to possible unintended consequences. Are people just not smart enough to see obvious stuff like this?
Any question that's a variation of "are people stupid?" can always be answered with an emphatic "yes, yes they are."
Intelligence, like so many things, follows roughly a normal distribution. So you're essentially asking if there are people on the left slope, and yes, of course there are. It's the nature of things.
Re:I felt a great disturbance in the Force... (Score:5, Interesting)
as if millions of staff/senior developers suddenly cried out in terror.
"like a pair programming session with a partner that knows about every inch of the project, and can follow your lead to make repository-wide changes from the issue to the pull request with the power of AI."
Bullsh*t. And the problem is that the staff who understand this and how generative AI works will generally be using CoPilot appropriately (small, repetitive blocks of code that are NOT specific to your project's requirements)
I agree it's BS but for slightly different reasons. The current CoPilot can't even follow API usage in my current file, I'm not holding my breath for them to figure out the proper API calls for my module in another file.
are now going to have to be reviewing and debugging even more "close enough" crapcode churned out by an LLM that almost certainly didn't have your requirements in its training set and is doing its closest approximation based on things that approximately match your requirements. Wwweeeeeehhhhhhh!
I really wish there was a way to clearly flag every line of code (even edited) that was created by generative AI. Again, I don't have a problem with it in general but I do want to review it and make sure it's doing exactly what is intended (and handling cases that were not intended).
The last company I was at every change had to go through code review. And that meant looking over and understanding what every line in the proposed change did.
I don't see LLMs changing that.
Re: (Score:2)
When my code runs into an error, I expected Copilot to take a look, and to inspect the data as well. But it only looks at code. Of course data can be huge and LLMs have limited context, but not having any self-fixing ability makes it much less useful.
Re: (Score:2)
The last company I was at every change had to go through code review. And that meant looking over and understanding what every line in the proposed change did.
I don't see LLMs changing that.
For good reasons, and on the contrary: more companies will become like your company. We security guys are pressing them to do it. Every iteration of standards like NIST or ISO 27k adds requirements to the secure-coding chapters. The more critical infrastructure and billion-dollar companies depend on reliable IT, the less we can afford code written by infinite monkeys.
Re: (Score:2)
CoPilot bothered me, but... (Score:3)
CoPilot bothered me, but not personally; I was mainly concerned about it "stealing" GPL software. I heard there are issues with that.
But the last straw was 2FA. If it were linked to my email address I would be OK, but it forces one to link it either to a cell phone, a piece of hardware I would have to buy, or a "supported" password manager. In the forums, some people said the initial setup required a cell # no matter what option you chose.
That was it for me; I moved my repos back to anonymous FTP and deleted them on GitHub. Seems many people have done the same.
Re: (Score:2)
I went with a YubiKey for my GitHub access, just because I use the YubiKey for GPG signing my repositories anyway. For anything GitHub related, I do recommend using YubiKeys, preferably at least two, just because it raises the bar against people getting into one's account.
If one wants a private Git server, I'd look at PikaPods and run one's own Gitea server. This way, you control the horizontal and vertical and can use whatever identification you want.
It uses TOTP (Score:2)
GitHub 2FA just uses TOTP [wikipedia.org]. You can use any application that supports it, or write your own if you're so inclined. Incidentally SourceForge and GitLab also use TOTP for 2FA, so one application will work with all of them.
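For anyone curious what's behind those six digits: TOTP is simple enough to sketch from RFCs 6238 and 4226 using only the Python standard library. A minimal illustration (not a replacement for a real authenticator app):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    pad = "=" * (-len(secret_b32) % 8)  # restore any stripped base32 padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test vector (the ASCII secret `12345678901234567890`, time 59, eight digits) this yields `94287082`, which is why any app implementing the same math works interchangeably across GitHub, SourceForge, and GitLab.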
Well, they don't support IPv6 (Score:1)
Fork Time (Score:1)
I won't belabor the obvious.
Fork time. Github as it was meant to be - code repository, history, documentation, and accountability.
No microsoft shenanigans, no fake intelligence edits, no 3rd party edits or suggestions.
Coders. Coding. Pulling. Git.
Github. Pure, and since microsoft shit all over this one, time to fork.
Re: (Score:2)
I use GitLab internally, works great.
Re: (Score:3)
Re: (Score:2)
Yeah, a classic case of blind leading the blind in a human-AI-centipede configuration.
Great, more crappy code (Score:2)
Before, people at least needed the skills to find something and copy&paste it. Now they only need the skill to copy&paste it, and may not even need to adjust any names. Of course that will do wonders for code quality!
faust (Score:2)
> available at no cost to verified teachers, students, and maintainers of popular open source projects
You can check out any time you like, but you can never leave.
tempted to try it (Score:2)
It's definitely an ambitious project, they are making a lot of claims. I'm interested to see what they can actually deliver, and I might spend $39 to find out.
GitHub Copilot (Score:5, Funny)
"Your plastic pal who's fun to be with."
Copilot is not useful (Score:2)
LOC bad (Score:4, Insightful)
Being able to churn out more code like that is bad. This is like forgetting that LOC is a terrible productivity metric all over again. There is a reason we don't use natural language to program computers. It's ambiguous. Understanding code is harder than writing it, and you have to read and understand all of it, because the computer understands none of it.
They're poisoning the workforce (Score:3)
"Popular"? (Score:2)
Re: (Score:2)
News Flash (Score:1)
Microsoft owns all your code.
what could possibly go wrong ? (Score:2)
Get used to security advisories that essentially read "they asked for it".
That ChatGPT can spit out (mostly) working code is pretty cool. However, all the code I've seen so far is at best junior level. And much of it is full of bugs.
So yay for getting all those bugs fed back into future programs. The security industry is probably ordering a few crates of champagne right now.
Help with git itself (Score:2)
A lot like AI actors (Score:2)
The recent Hollywood strikes paid a lot of attention to AI. Actors were worried about AI actors, writers were worried about AI writers. These both have one important thing in common with AI programmers. In each case, AI can, and probably will, take over the jobs of paid extras in the movies, as CGI has already started to do. It will take over some low-level writing jobs where quality isn't so important. And it will take over some low-level programming jobs that don't require a lot of skill. For example, rep