Programming

More Than Half of GitHub Is Duplicate Code, Researchers Find (theregister.co.uk)

Richard Chirgwin, writing for The Register: Given that code sharing is a big part of the GitHub mission, it should come as no surprise that the platform stores a lot of duplicated code: 70 per cent, a study has found. An international team of eight researchers didn't set out to measure GitHub duplication. Their original aim was to try to define the "granularity" of copying -- that is, how much files changed between different clones -- but along the way, they turned up a "staggering rate of file-level duplication" that made them change direction. Presented at this year's OOPSLA conference (part of the Association for Computing Machinery's SPLASH conference, held in late October in Vancouver), the University of California, Irvine-led research found that out of 428 million files on GitHub, only 85 million are unique. Before readers say "so what?", the reason for this study was to improve other researchers' work. Anybody studying software using GitHub probably seeks random samples, and the authors of this study argued duplication needs to be taken into account.
  • Dupes? (Score:3, Funny)

    by Tablizer ( 95088 ) on Thursday November 23, 2017 @07:02PM (#55613105) Journal

    You you don't don't say say.

  • by brian.stinar ( 1104135 ) on Thursday November 23, 2017 @07:09PM (#55613149) Homepage

    Yeah, it can be rough to learn how to use Git submodules...

    Honestly though, the few times I've directly integrated with someone else's code, it hasn't exactly been library-ready. There was a lot of massaging that had to be done the last time I did this, so straight-up duplication of their stuff was actually not a bad idea (AFTER I submitted them a PR to try to help manage this). Their application wasn't designed as a library, though, so I'm not sure what the right way to library-ify someone's code actually is.
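
    For reference, a minimal sketch of the submodule route, using the GitPython package instead of copying their code in; the paths and URL below are placeholders:

        import git

        # open your existing project (path is hypothetical)
        repo = git.Repo("path/to/your/project")

        # track their repository as a submodule instead of vendoring a copy
        repo.create_submodule(
            name="their-lib",
            path="vendor/their-lib",
            url="https://github.com/example/their-lib.git",  # placeholder URL
        )
        repo.index.commit("Add their-lib as a submodule instead of copying it in")

    Whether that beats plain duplication still depends on how library-ready their code is, of course.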

    • by Anonymous Coward

      Forks. That's the major reason for all the duplicate code. Actually, that's rather how git is supposed to work. With files being only on the order of 19% unique, the surprising part is that the number of unique files is that high. The other surprising part is just how bad we are at making code reusable, as above. I can't count the number of times I've seen a program, realized I wanted to make a trivial change, and found it's simply not possible without grabbing a bunch of

    • Yeah, it can be rough to learn how to use Git submodules...

      Or, maybe they're using subtrees :)


  • by Baron_Yam ( 643147 ) on Thursday November 23, 2017 @07:17PM (#55613167)

    Richard Chirgwin, writing for The Register:

    Given that code sharing is a big part of the GitHub mission, it should come as no surprise that the platform stores a lot of duplicated code: 70 per cent, a study has found. [...]

    • That's the hilarious part; duplicating code is also most of the purpose of github!!

      Wetness detected in local river!

      • That's the hilarious part; duplicating code is also most of the purpose of github!!

        Wetness detected in local river!

        How about reading the point made in TFS?

        The researchers did this study because Github is used as a source of data for identifying trends in computing. As they say, this duplication of code skews the results, and anyone wanting to draw serious conclusions from this data needs to account for this.

        The important data isn't the headline, it's... well... the data. I'm hoping there will be less (virtual) printing of sensationalist "JavaScript is the best language in the world" headlines due to this prompting people to account for duplication.
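
        To make that concrete, a minimal sketch of what "taking duplication into account" looks like when sampling; the directory layout and helper names are hypothetical, assuming a local checkout of the corpus:

            import hashlib
            import random
            from pathlib import Path

            def file_digest(path: Path) -> str:
                """Content hash; byte-identical copies collapse to one key."""
                return hashlib.sha256(path.read_bytes()).hexdigest()

            def sample_unique_files(corpus_dir: str, k: int, seed: int = 0) -> list:
                """Random sample over distinct file contents, not raw file paths."""
                unique = {}
                for path in Path(corpus_dir).rglob("*.py"):
                    unique.setdefault(file_digest(path), path)  # one representative per digest
                rng = random.Random(seed)
                return rng.sample(list(unique.values()), k)

        Sampling paths directly would weight each distinct file by its copy count; sampling digests weights it once.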

        • Thanks for pointing that out, I had no idea that the word "wet" fails to describe the local river with the maximum known precision! Golly.

          • Tut-tut.

            Look, this is more like pointing out that you're measuring the total length of the world's rivers wrong when you count both the Rio Negro and the Amazon from source to sea, because for a fair portion of that length, the Rio Negro is the Amazon. If hydrological researchers were making such a fundamental error, someone would have to point it out.

            But code researchers were making a completely analogous error, and it needed to be quantified. And now it is.

            • It is kind of like that, except in your example there is one mistake that goes away when you apply the fix, and in the story, it is still really fuzzy and the remaining code might even still be mostly copied.

              So it is like if you didn't have maps of the rivers, and didn't know which ones overlap, and so the data is complete crap, and then you find a fragmented map and now you know where some parts of a few of the rivers are. It is progress towards a good goal, but the data is still crap so far.

  • 70% is a lot more than half. In this case the difference between half and 70% is a casual 129,000,000 duplicated files.

    Kudos for not going into mega-clickbait mode, but still, "nearly 3/4" or "more than 2/3" would be a better title.
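
    Spelling the arithmetic out, taking the summary's figures at face value:

        total = 428_000_000            # files on GitHub, per the summary
        unique = 85_000_000            # unique files, per the summary

        duplicates = total - unique    # 343,000,000
        half = total // 2              # 214,000,000
        print(duplicates - half)       # 129,000,000 -- the gap above
        print(duplicates / total)      # ~0.80, so even "70 per cent" reads as conservative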

    • by zifn4b ( 1040588 )

      70% is a lot more than half. In this case the difference between half and 70% is a casual 129,000,000 duplicated files.

      Kudos for not going in mega-clickbait mode, but still, "nearly 3/4 or more than 2/3" would be a better title.

      With modern clustered file-storage technology the files aren't physically duplicated; they're only logically duplicated. That's why I don't see why this topic is of interest.

  • If half of the code is duplicate, does that mean it is just a duplicate of the other half? If so, then how would you know which is the duplicate and which is the original? Unless you count the duplicate code in with the original code, in which case only one quarter of the code is a duplicate of the other quarter. Or maybe in my post-Thanksgiving carb haze I am overthinking this?

  • by FudRucker ( 866063 ) on Thursday November 23, 2017 @07:45PM (#55613257)
    Put all the code in there and link it to the associated GitHub accounts; provided the code is 100% identical, it should work. But they must consider forks, and even one changed line of code in one file will make a lot of difference in the compiled software.
    • This could be a lot easier if you had content-addressable storage that refers to objects by their SHA1 hash.

      • You mean like a git repository?
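
        A toy sketch of the idea (SHA-1 here only because that's what git historically used):

            import hashlib

            class ContentStore:
                """Objects keyed by their content hash, so storing the
                same bytes twice costs nothing extra."""

                def __init__(self):
                    self._objects = {}

                def put(self, data: bytes) -> str:
                    key = hashlib.sha1(data).hexdigest()
                    self._objects.setdefault(key, data)  # duplicate content is a no-op
                    return key

                def get(self, key: str) -> bytes:
                    return self._objects[key]

            store = ContentStore()
            a = store.put(b"print('hello')\n")
            b = store.put(b"print('hello')\n")
            assert a == b and len(store._objects) == 1  # one copy, two references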

  • by Zaiff Urgulbunger ( 591514 ) on Thursday November 23, 2017 @08:05PM (#55613309)
    Do they mean (obv. I didn't read TFA) code is duplicated in non-forked code, or are they just observing that lots of projects will be forked by other users in order that they can play with it and post their pull requests to them?

    'cos if it's the latter, then that's kind of obvious isn't it?
    • They're saying that if you do research on software using GitHub for your data, you have to take file duplication into account in your formulas.

      The problem, IMO, is that a lot of the rest is duplicated from somewhere else, but only one time on github, so the data is still polluted by duplication.

    • Do they mean (obv. I didn't read TFA) code is duplicated in non-forked code

      Yes they do mean that. The summary should've mentioned this. From https://dl.acm.org/citation.cf... [acm.org]:

      (abstract) [...] This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 428 million files written in Java, C++, Python, and JavaScript. [...]

  • by Anonymous Coward

    I wonder how much is just people trying to avoid dependency hell?

    Because let's face it, when I just want "that one bit" of some gargantuan framework / solve-all / codeball-from-hell then I'd rather spend five minutes of disentangling and integrating than a lifetime playing in "follow the library".

  • Pull requests (Score:5, Informative)

    by manu0601 ( 2221348 ) on Thursday November 23, 2017 @08:36PM (#55613379)
    No surprise here; this is how this stupid thing works: in order to submit a one-line bugfix, one has to fork the repository, patch, commit, and send a pull request.
    • It's true that git stores snapshots.

      However, if you make a one-line change, it's not going to store new copies of every file in the repository. It only stores a new copy of the one file that changed; everything else still points at the existing objects.

      https://git-scm.com/book/en/v2... [git-scm.com]

      So yes, there is some duplication, but not the entire repository for each change.
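
      A quick way to see this: git's object ID for a file's content is just a SHA-1 over a short header plus the bytes, so an unchanged file hashes to the same object in every commit -- and in every fork:

          import hashlib

          def git_blob_id(content: bytes) -> str:
              """The object ID git assigns to file content."""
              header = b"blob %d\x00" % len(content)
              return hashlib.sha1(header + content).hexdigest()

          # matches `git hash-object` for the same bytes
          print(git_blob_id(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad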

      • No shit, Sherlock. If you thought anyone here needs to be told that then I hope that you were drunk instead of assuming that everyone else here is at your intellectual level.

    • No surprise here, this is how this stupid thing works: in order to submit a one-line bugfix, one have to fork the repository, patch, commit, pull request.

      You don't have to fork it on GitHub unless you want to use GitHub's internal mechanisms. You can submit patches using any of the other mechanisms too, like a PR from an external repo, or git-send-email, and so on and so forth.

      It is however rather convenient.

      • You don't have to fork it on github unless you want to use github's internal mechanisms. You can submit patches using any of the other mechanisms too, like a PR to an external repo, or a git-send email and so on and so forth.

        I must be unlucky, but every time I did that, the answer was to send a pull request.

  • by no-body ( 127863 )

    reused/recycled code. One would be stupid to invent/develop everything from the very beginning yet again...
    - haven't looked at the study though, no time..

  • by barbariccow ( 1476631 ) on Thursday November 23, 2017 @11:32PM (#55613767)
    Makes sense... it's called a fork. Several of my projects are forked more times than they contain files..
    • Comment removed based on user account deletion
      • > You can clone/download all what you wish and enjoy it on your own machine, but why having publicly accessible codes which have been basically developed by other people

        There are a couple major reasons to make your version of the project accessible on the internet. Maybe the most important is so that other people can see your pull requests. As an example, I used to do a lot of work on some software called Moodle, which is used by many schools. Moodle has a mature development process, so any changes to

        • Comment removed based on user account deletion
          • > corresponding file stops being identical

            Yep, the two or three or four files I change are no longer identical. The other 4,997 files in the project haven't changed; they are identical in both versions (forks). GitHub presents my version of the *project*. It doesn't only show the differences and force users to download from someone else's fork, then apply my changes. They can just download my version of the project. (GitHub can also show the differences, if that's what someone wants to see.)

            That doe

          • If you're asking WHY folks fork and DON'T modify, it's to "lock" a version, and to be able to build in an automated way. Granted, git supports this via checking out a specific commit, but for some reason a LOT of folks find it better to fork, and then clone off that fork. The only advantage I can think of is that it protects you from the original author deleting the project altogether.

            So imagine you're developing commercial software that uses LibraryA. You write it against how LibraryA looked when you pulled it, and
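
            For what it's worth, locking a version doesn't require the fork; a sketch using the GitPython package, with a placeholder URL and commit hash:

                import git

                # clone LibraryA and pin it to the commit you developed against
                repo = git.Repo.clone_from("https://example.com/LibraryA.git", "vendor/LibraryA")
                repo.git.checkout("0123456789abcdef0123456789abcdef01234567")  # placeholder SHA

            Though as noted, the fork does buy insurance against upstream deleting the project.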

  • by account_deleted ( 4530225 ) on Friday November 24, 2017 @03:32AM (#55614283)
    Comment removed based on user account deletion
  • I'm doing my bit to keep the stats up, though; there are no 'duplicates' of any of my code ;-)

  • That's like calling identical twins "duplicate twins" and saying we should drop half of them in any study of population genetics.

    If two code files are the same, that's not just noise - a person made that happen for some purpose. It makes no difference whether you find that "bad" or "sloppy" - it's a legitimate part of the in-use population.

    Now, that doesn't mean some studies shouldn't still drop them - for example, if I'm studying the *writing* of code, I might want a sample of unique stretches of code that

  • Also, this could affect the surveys of which programming languages are most used.
    At worst, the current surveys only show which language programmers copy-paste the most.
  • a lot of bots and stupid people use the fork button to bookmark or make themselves look legit and most forks go nowhere. so yeah.
