Microsoft Introduces GVFS (Git Virtual File System) (microsoft.com) 213
Saeed Noursalehi, principal program manager at Microsoft, writes in a blog post: We've been working hard on a solution that allows the Git client to scale to repos of any size. Today, we're introducing GVFS (Git Virtual File System), which virtualizes the file system beneath your repo and makes it appear as though all the files in your repo are present, but in reality only downloads a file the first time it is opened. GVFS also actively manages how much of the repo Git has to consider in operations like checkout and status, since any file that has not been hydrated can be safely ignored. And because we do this all at the file system level, your IDEs and build tools don't need to change at all! In a repo that is this large, no developer builds the entire source tree. Instead, they typically download the build outputs from the most recent official build, and only build a small portion of the sources related to the area they are modifying. Therefore, even though there are over 3 million files in the repo, a typical developer will only need to download and use about 50-100K of those files. With GVFS, this means that they now have a Git experience that is much more manageable: clone now takes a few minutes instead of 12+ hours, checkout takes 30 seconds instead of 2-3 hours, and status takes 4-5 seconds instead of 10 minutes. And we're working on making those numbers even better.
Meh... (Score:4, Insightful)
There aren't THAT many repos with over 3 million files in them.
The great majority of projects I've been on have been around the 100k-300k range and doing a build (to properly test the product) required ALL of them.
And even then, once you've got all of them the first time, Git does the diffing automatically, so it "scales" already.
Maybe MS could put some of their vast R&D efforts into something more useful... like having their free Visual Studio Code editor handle files bigger than 1 GB?
Re: (Score:2)
If your repo has 3 million files in it, you have bigger problems. Solving those seems better than trying to mitigate them.
Re: (Score:3, Informative)
And if you have a million [acm.org]?
Re: (Score:2, Funny)
million
Billion, you fucking moron. lurn 2 rite.
Re: (Score:3)
The link is apparently slashdotted so I can't view it, but I think you misread it. The ACM link apparently says there is a billion *lines of code* not a billion files in one repo. Big difference! The OP would appear to be right.
Re: (Score:2)
Hmm. It appears the ACM cannot write headlines. The article finally loaded for me and it seems the headline is plain wrong, at least if the article is correct. It does say a billion files, and nowhere talks about lines of code. Sigh.
Re: (Score:3)
The ACM article headline is correct. The post that mentions billions is correct. You just missed it in the article.
Fourth paragraph (emphasis added):
Re: (Score:2)
Don't be an ass.
They were referring to file count, not lines of code.
The repository contains 86 TB of data, including approximately two billion lines of code in nine million unique source files.
Re: (Score:2)
Oh noes! It turns out the moron is you!
Re: (Score:2)
Oh noes! It turns out the moron is you!
Yep. I'm not generally that rude to other people. I had various attacks of stupidity and brain malfunction today.
Re: (Score:2)
I meant a billion in my other post.
Re:Meh... (Score:5, Interesting)
Microsoft's repos *are* that large. That's why they implemented this.
Microsoft Office's repository is over 1 TB in size. Yes, terabyte. For *office*. They absolutely cannot (could not, I suppose now) use Git on it.
Re: (Score:2)
Why are they that large in the first place?
Do they also store all design files and compiler-generated files in the repo?
Re: (Score:3)
They likely store their comments as separate files - one per comment.
(no, really... has no one in Redmond ever heard of making their shit modular?)
Re:Meh... (Score:5, Funny)
all right, you've clearly nominated yourself to untangle a 1TB repository. get on it bud.
Re: (Score:2)
In all seriousness, maybe they *should* get a team together and 'rip the bandage off' now, before another decade elapses and the thing gets even hairier...
Re: (Score:3)
But if multiple applications in Office share a library, where do you put that library so that the build process for each Office application can see it? Are submodules or subtrees a good choice, and if "yes," which is more appropriate?
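(For concreteness, both options are a one-liner to set up; the difference is where the library's history lives. The URL and paths below are made up, and git subtree requires the contrib command to be installed:)

# submodule: the library stays its own repo, pinned by commit in the parent
git submodule add https://example.com/office-common.git libs/common

# subtree: the library's history is folded into the parent under a prefix
git subtree add --prefix=libs/common https://example.com/office-common.git master --squash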
Re: (Score:2)
But if multiple applications in Office share a library, where do you put that library so that the build process for each Office application can see it? Are submodules or subtrees a good choice, and if "yes," which is more appropriate?
You make that library a specific project, releasable on its own schedule, with a known distribution system that everyone can access for headers and binaries, and everyone uses releases of that project.
I did that under SVN at a previous position. I had 1 large Qt-based project that generated about 30 static libraries, about 20 standard C/C++ static library projects, a common headers project for the standard C/C++ static libraries, and about 10-50 programs that used the libraries and headers. All-in-all, i...
Re: (Score:2)
But if multiple applications in Office share a library, where do you put that library so that the build process for each Office application can see it? Are submodules or subtrees a good choice, and if "yes," which is more appropriate?
Microsoft experimented with the submodules approach for Windows. Didn't work:
"We started down at least 2 failed paths to scale Git. Probably the most extensive one was to use Git submodules to stitch together lots of repos into a single “super” repo. I won’t go into details but after 6 months of working on that we realized it wasn’t going to work – too many edge cases, too much complexity and fragility. We needed a bulletproof solution that would be well supported by almos
Re: (Score:2)
As opposed to something like EFF's HTTPS Everywhere project, which stores its FAQ in its Git repository. If you want to suggest a change to the user manual, you have to fork the project on GitHub, clone your fork to your local PC, make changes, commit and push them to your fork, and then make a pull request on GitHub. Not having to spend bandwidth (and potentially pay overage fees) on cloning the whole thing to your local PC would make it easier to suggest changes.
Re: (Score:2)
That's fine if you want to make one change, not so fine if you want to make several changes for which the maintainers have suggested that you make separate pull requests. The only way I'm aware of to merge upstream changes into your fork is to clone the whole project, pull from upstream, commit your merge to your fork, and then push to your fork [github.com]. Or did you mean deleting and recreating the fork for each pull request?
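For reference, the usual dance looks roughly like this (the fork URL is illustrative; EFForg's is the real upstream):

# one-time setup
git clone https://github.com/YOURNAME/https-everywhere.git
cd https-everywhere
git remote add upstream https://github.com/EFForg/https-everywhere.git

# whenever you want your fork caught up
git fetch upstream
git merge upstream/master
git push origin master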
Merge conflicts in GitHub web UI? (Score:2)
[Submitting a separate pull request from each branch of your fork to the upstream project] can be done entirely from the Web UI as long as you're only making small changes that touch one file at a time
I was asked to make three pull requests to HTTPS Everywhere, each to make one small change to a different section of the same FAQ. Because one of the changes would reorder and then combine two sections, I fear an error message that my pull request "has conflicts that must be resolved". GitHub's page about merge conflicts [github.com] states that for many "merge conflicts, you must resolve the merge conflict locally on the command line."
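Roughly what "resolve the merge conflict locally" would look like in that situation; a sketch, with the branch and file names being hypothetical:

git fetch upstream
git checkout faq-reorder
git merge upstream/master      # stops here on the conflicting FAQ hunks
# edit the conflicted file, keep the wording you want, then:
git add docs/faq.md
git commit
git push origin faq-reorder    # the open pull request picks this up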
Did they just turn git into svn? (Score:5, Insightful)
The whole point of git is that you have an identical copy on your machine. Why take away git's biggest advantage?
Re: (Score:3)
Microsoft are just getting efficient. They have simply skipped "Embrace".
No, they didn't. For one thing, Git has been supported in TFS for four years now. And then there's this:
"Among them, we learned the Git server has to be smart. It has to pack the Git files in an optimal fashion so that it doesn’t have to send more to the client than absolutely necessary – think of it as optimizing locality of reference. So we made lots of enhancements to the Team Services/TFS Git server. We also discovered that Git has lots of scenarios where it touches stuff it really does
Re:Did they just turn git into svn? (Score:5, Insightful)
The whole point of git is that you have an identical copy on your machine. Why take away git's biggest advantage?
Because its biggest advantage is also one of its greatest inefficiencies, and frankly, on a large project, chances are you may not need it all. The whole point is you have an identical copy on your machine of what you're working on.
It's the hook to make your repositories break (Score:3, Insightful)
The whole point of git is that you have an identical copy on your machine. Why take away git's biggest advantage?
Because its biggest advantage is also one of its greatest inefficiencies, and frankly, on a large project, chances are you may not need it all. The whole point is you have an identical copy on your machine of what you're working on.
So buy a bigger disk. They're cheap.
Why did they do it? It's obvious: it's the bait on the hook to get you to break git and your open source projects (even CURRENT ones) that compete with them.
Re: (Score:2)
So buy a bigger disk. They're cheap.
Not if you want both the speed of an SSD and enough capacity for your project in a laptop that's practical to carry.
Re: (Score:2)
It would be a good idea to look for a laptop with many storage slots, e.g. at least two M.2 slots, at least one of which accepts both M.2 PCIe and M.2 SATA; either a 2.5" bay or one more M.2; heck, UFS memory cards might be big as well a couple of years from now (perhaps M.2-to-UFS adapters will be a thing).
Re: (Score:2)
Why did they do it? It's obvious: it's the bait on the hook to get you to break git and your open source projects (even CURRENT ones) that compete with them.
Sounds like a non-starter for distributed development to me. I imagine this is to make git work differently in a corporate environment where, for the average developer, if the master repo/server goes down it's not your problem. And perhaps for infosec reasons on proprietary code: knowing who made a complete copy of the source code. This seems more like Microsoft adapting to use open source tools instead of their own proprietary tools like TFS.
Moore's law and partitioning repositories. (Score:2)
So buy a bigger disk. They're cheap.
That's not the problem. It's the time to process all those files every time you run commands like checkout, status, diff, etc.
Are YOU really having speed issues now?
If not, don't expect to as your project grows, either. As long as the Moore's law variants apply and you don't add developers at an exponential rate, the machines will improve exponentially, which is faster than the repository grows. (Even if you DO add developers exponentially, the output per developer drops ...
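(For what it's worth, stock Git already has knobs for that "process all those files" cost before anyone reaches for a VFS; a sketch, assuming a reasonably recent Git:)

# cache the untracked-file scan between runs
git update-index --untracked-cache
git config core.untrackedCache true

# Git for Windows only: cache lstat() results as well
git config core.fscache true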
Re: (Score:2)
partition your repositories at project, application/subsystem, or API boundaries. Git works fine if you have, say, one repository for the compiler support / standard library or vendor's SDK, another for your project's application, maybe a third for your-stuff specific libraries shared among multiple projects.
Unless you need the guarantee of atomicity to ensure that a change that happens to break the ABI between "your project's application" and "your-stuff specific libraries shared among multiple projects" doesn't end up breaking anything else.
You glue them together in the makefile common inclusions.
I'm interested. Can you link to an example of these "makefile common inclusions"? And how well does it work when the "your-stuff specific libraries" include C or C++ inline functions [gnu.org], or when different projects build the "your-stuff specific libraries" with different compiler ...
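(In case a concrete sketch helps: one common pattern is a shared GNU make fragment that every project's Makefile includes. All names below are made up.)

# common.mk - settings shared by every project's Makefile
LIBROOT ?= $(HOME)/src/yourstuff
CPPFLAGS += -I$(LIBROOT)/include
LDFLAGS += -L$(LIBROOT)/lib
LDLIBS += -lyourstuff

# each project's Makefile then starts with:
# include ../common.mk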
Re: (Score:2)
Because its biggest advantage is also one of its greatest inefficiencies, and frankly, on a large project, chances are you may not need it all
In this case call it something different. Git is known for that.
Re: (Score:2)
Unlike BitKeeper's equivalent, git submodules are not transparent. If you're using submodules, then you constantly have to be aware of the submodule workflow. You also need to decide up front how to split your repo. About the only thing I miss from svn is being able to keep everything related to a project in one big repo, and either check out the whole thing or some small sub-project, depending on what I'm working on.
That's a good thing. SVN made svn:externals a little too transparent, and it foobar'd plenty of people that just didn't know how it worked. Making people aware of it is probably a good thing.
That said, I do find having to do the "git submodule" thing really annoying.
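The "thing" in question being roughly this, for anyone who hasn't had the pleasure (URL illustrative):

git clone --recursive https://example.com/superproject.git
# or, if you forgot the flag on the clone:
git submodule update --init --recursive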
Re: (Score:2)
When I use svn I have a copy of my branch on my local machine. I may not have every other branch or every part of the repo, but I have what I'm working on. I'm not sure what this is for other than companies that can't find a way to partition their version control between products.
Re: (Score:2)
Make a shallow clone, then. It will have everything you need to hack on the current code and to push it back.
Not having the history breaks any advanced git workflow, though. The reason git won over svn and such is bisect, rebases and so on; svn is hardly better than a stack of daily tarballs.
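For reference, the commands being discussed (URL illustrative):

git clone --depth 1 https://example.com/big-repo.git
# later, when you do want full history for bisect and friends:
git fetch --unshallow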
Re: (Score:2)
If you want an identical copy, just mirror the GVFS path to a non-GVFS path, and there's your local copy.
Re: (Score:3, Insightful)
The whole point of git is that you have an identical copy on your machine. Why take away git's biggest advantage?
"A Clone now takes only minutes instead of 12+ Hours!"
Ja, that's because you're NOT making a copy.
Re: (Score:3, Insightful)
No, the whole point of git is that every file version is immutable and referenced by a globally unique hash. This means that it doesn't matter where the actual data is located - until you need the actual data for some actual reason. This model has been copied by countless systems since git, because it is extremely robust and has multiple benefits, and none of those other systems expect the local user to download the entire database before he even begins work. Nonetheless, such systems can also support downloading ...
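You can see the content addressing directly from the command line; a quick sketch (the file name is arbitrary):

git hash-object README          # the hash any copy of this version will have
git cat-file -p <that-hash>     # get the content back by hash alone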
Re: (Score:2)
Forcing every developer in the same office to separately download a complete copy of the full history is inefficient. But then git does have a way to reference object files from another path.
For large (but probably not Windows large) git repos, you could add a "git alternate" reference to a network share for your ancient history. So long as you are careful in how you manage that folder, and never remove anything from it, this can work quite well.
Giving each team a low-latency, local mirror of this folder ...
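A sketch of the alternates trick (paths and URL illustrative; the shared store must never be pruned):

# let a fresh clone borrow objects from a shared local mirror
git clone --reference /mnt/gitcache/project.git https://example.com/project.git

# or wire it up by hand in an existing clone
echo /mnt/gitcache/project.git/objects >> project/.git/objects/info/alternates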
Re: Did they just turn git into svn? (Score:5, Interesting)
> Why take away git's biggest advantage?
Because "clone now takes a few minutes instead of 12+ hours, checkout takes 30 seconds instead of 2-3 hours, and status takes 4-5 seconds instead of 10 minutes."
That problem is not unique to Git. Jörg Sonnenberger [sonnenberger.org] tried importing the NetBSD repository into Fossil [sonnenberger.org], and "the rebuild step which (re)creates the internal meta data cache took 10h on a fast machine." There are ways to make Fossil skip the rebuild on clone, which results in a suboptimal DB, but it still takes hours to clone. NetBSD's project history goes back something like a quarter century; it's going to take time to pull and organize all that.
DVCSes are great when you can afford their associated costs (namely, the very advantages you refer to), but for very large repos, those costs can be very high.
Do you really need every single version going back a quarter century? And if you do, do you need it 5 minutes after the initial clone?
One idea that's come up on the Fossil mailing list is to do a shallow clone initially, then trickle the back history in over time. I'd like a DVCS that gave me the past 30 days of history at the tip of every open branch, then over the next day or so back-filled the rest.
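Recent Git can at least approximate that by hand (needs a fairly new client; URL illustrative):

git clone --shallow-since="30 days ago" https://example.com/netbsd-src.git
git fetch --deepen=1000     # later: pull in another 1000 commits of history
git fetch --unshallow       # or back-fill everything in one go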
Re: (Score:2)
> Fossil was meant to be a 'lite' DVCS
Fossil was meant to support the needs of SQLite, one of the most popular and actively-developed code bases in the world. If Fossil can meet its needs, chances are good that it can meet your project's needs, too. There are very few NetBSD-scale projects out there, compared to the number that are plenty fast under Fossil.
And yes, I'm aware that you could list hundreds of projects at that scale, but I believe I could find millions of software projects smaller than that. If ...
Re: (Score:2)
The whole point of git is that you have an identical copy on your machine. Why take away git's biggest advantage?
The issue is that it doesn't mesh well with how VS works, which was based on how Visual SourceSafe (VSS - Microsoft's version of CVS) worked: it did locks per file as it pulled each file from the repository when you opened it.
Honestly, that's really the only reason I can see for why MS would want this. It makes it fit back into that old, broken model of locking files and tracking changes. Perhaps it has some benefit for how they track who did what/when, but it's not really something that is broken in git.
Ah nostalgia (Score:3)
While a vfs sounds like a great idea, I think in theory it's only of use for very, very large repos. Even then I wonder if the exact same issues that made Clearcase suck would make it suck even with Git.
Re:Ah nostalgia (Score:5, Informative)
Then you had a piss-poor release engineer who didn't understand how to construct config specs based on a stable baseline, label & promote stable builds regularly, and use clearmake properly, or manage dependencies to allow you to do a clean, fast local build.
I love git, and I work with it daily, and the monorepo craze baffles the shit out of me, to be honest. But I used and supported ClearCase for 14 years at a large financial services company, and I can assure you that the problems you're complaining about are not limitations of the tool - they are limitations of your team's release engineers. ClearCase has many failings, but the issues you're describing simply reflect poor implementation and design choices.
It stemmed from fundamental concepts cribbed from Apollo's DSEE environment. HP's acquisition of Apollo prompted what would then become the ClearCase team to leave Apollo/HP and form Pure, then they combined with Atria to form PureAtria, then Rational acquired PureAtria, and then IBM acquired Rational -- so ClearCase was a thing long before it was IBM software, and the features you're griping about were extant long before the IBM acquisition. The IBM era mostly saw them continue to focus on jamming ClearCase into their "Application Lifecycle Management" toolset, Rational Team Concert, wrapping everything in a ghastly blue Eclipse RCP client, and making it more of a pain in the ass to use.
Dynamic views as you're talking about were not - and never were - intended for use across WANs; their Admin & Deploy guides specifically stated that they required a fast connection to a local server. If you wanted WAN connectivity, you either used RTC (Rational Team Concert) to pull web views, or you used snapshot views, or you ponied up for MultiSite licenses and set up a sync scheme so that each site could have a local copy on a VOB & View server they had a fast connection to.
Again - poor implementation by your release team. It's like complaining that a hammer makes a giant hole in the drywall when you put screws in with it - it doesn't mean there's a problem with the hammer, it means there's a problem with the operator. If you use the tool in a way it's not intended to be used, then don't be surprised when it does a shitty job.
Re:Ah nostalgia (Score:5, Insightful)
The fact you needed a release team and release engineers to manage a ClearCase implementation is why it's considered one of the worst systems out there, remembered with hatred by almost everyone who used it. A version control system should be easily set up by one admin in an hour or two, and then usable without reams of documentation by any of the engineers. ClearCase failed that.
Re: (Score:2)
Then you had a piss-poor release engineer who didn't understand how to construct config specs based on a stable baseline, label & promote stable builds regularly, and use clearmake properly, or manage dependencies to allow you to do a clean, fast local build.
Oh, they had plenty of release engineers, and that sort of demonstrates what bullshit ClearCase was. It was so slow that every site needed its own set of engineers, own set of servers, and own set of mirrors to replicate each repo. Something no sane source control system has ever required. Then they had to have scripts to periodically sync changes back and forth. Two teams at two sites had to sit around and wait for changes to appear, and of course view specs couldn't be shared, and occasionally syncs failed ...
Re: (Score:2)
That's what cron is for. And by the way, nothing magically makes my git changes appear at a remote site, either - somebody at that remote site has to pull them into their local copy.
And does every site need a team of engineers and expensive equipment and extra software licences to perform this feat? No. Clearcase did. It was an awful tool.
If git were used in its stead, there would have been one server somewhere that everyone from any site would push and pull from. It wouldn't matter where in the world it was located, because the performance would be fine.
Re: (Score:2)
Let me take two guesses as to why you might see a monolithic repository:
First, all applications with the potential to be shipped together may rely on common libraries, and the build process needs to know how to combine the libraries with the source code specific to each application. I'm under the impression that the logistics of this are similar when everything is in one repository.
Second, paid hosts of private Git repositories used to bill users per repository, not (say) per gigabyte of storage or data transfer ...
Re: (Score:3, Informative)
I had to use ClearCase as my source control system at one company I worked for. The idea was you set up a view spec (a bit like a branch), mapped a drive letter to it, and you never had to pull again because it would always reflect that branch. Your local changes went over the top, and when it was time to commit you could merge up and commit. In practice, what it meant was that the source code was constantly changing under your feet, and binaries were constantly stale or in a mystery state because you didn't know what they were compiled against. And because this was IBM software, it was unusably slow across WANs, memory hungry, and enjoyed triggering random blue screens.
While a vfs sounds like a great idea, I think in theory it's only of use for very, very large repos. Even then I wonder if the exact same issues that made Clearcase suck would make it suck even with Git.
To be fair to IBM, ClearCase had this behavior before the three mergers that made it part of IBM. (Pure + Atria -> PureAtria, PureAtria + Rational -> Rational, IBM + Rational -> IBM)
I actually liked the concept of "wink-in" where derived objects that came from the same source objects and build environment could just be pulled from someone else's build instead of rebuilt. But the system as a whole required a zippy network.
I don't hold out hope that a vfs on top of another scm solution would be even ...
Re: Ah nostalgia (Score:2)
Just do a shallow clone.
Ah, Microsoft (Score:2, Interesting)
"Hey, how can we do what GitHub does, only stupider?"
Author! Author! (Score:2)
Just curious what the author of Git has to say about this. He can point out the truth with absolute authority.
(Reinvented a square wheel? Solved a non-problem? Cured a symptom?)
Re: (Score:2)
Linus Torvalds, creator of git, recommends linux, exclusively, I think. [...] When git came out, how much crying there was over the absence of a gui. What horror, command line interface ONLY!
Because running on Windows vs. GNU/Linux is orthogonal to GUI vs. command line, "Where's the X11 front end?" is still a valid question.
Split Your Repo (Score:3)
Re: (Score:2)
Sometimes it just isn't all that simple. As an example, we have one product that comprises several Windows services as well as an ASP.Net front-end. Each of those services has a multitude of DLLs that are run-time configurable. As it is, we make an extended effort to share as much code as possible, which would cause issues if we were to break up the repo into several smaller repos. So, if we had several smaller repos, and there is a fix/enhancement to one of the shared/reused components, then you are prone to ...
Re: (Score:2)
So to link against the DLL you need a header file... correct? How do you get the latest header definition from one repo to another? Yes, it is easy enough to build against an older version of the DLL, but that is not the goal of the exercise here. Sorry, there is no easy way to accomplish just what I want, which is a valid desire: make a single change to a component and expect it to ripple through the whole build system _WITHOUT_ human interaction other than the check-in process of the one single file.
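For what it's worth, the "releasable library" scheme from upthread can be wired up with a pinned submodule; a sketch with made-up names:

git submodule add https://example.com/shared-lib.git third_party/shared-lib
(cd third_party/shared-lib && git checkout v1.4.2)
git add third_party/shared-lib
git commit -m "Pick up shared-lib v1.4.2"

The catch is exactly the complaint above: the version bump is a human step in each consuming repo, not an automatic ripple.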
Re: (Score:2)
Commits are atomic within a repository. If you split the repository into multiple repositories, commits to things in separate repositories become no longer atomic.
Re: (Score:2)
Every version control system provides atomic commits.
Except CVS, VSS, ... I think what you meant to say was that every new, sane version control system provides atomic commits.
Microsoft as sensible as ever ... (Score:2)
Lately they stole the name Neon from the KDE distribution, now they steal the name GVFS from GNOME. Who's next? Stealing something from the Cinnamon desktop? Or maybe Some eXtended Filemanager for Windows CE (XFCE)?
Re: (Score:2)
Don't forget that they name products after common terms: ".NET" and "SQL Server." They even conflict internally: they have two tools named ICE: the Image Configuration Editor [microsoft.com], which configures Windows embedded operating systems, and the Image Composite Editor [microsoft.com], which stitches together panoramic images.
Re: (Score:2)
Too large a company to keep an overview... and external names? Who cares...
Re: (Score:2)
And The Server Component Is...? (Score:2)
But what's being unsaid throughout this is whether this works with a standard Git server, or whether it only works with a special Microsoft-kluged server. While the former is vaguely interesting, the latter merits only a derisive snort.
Rinse repeat? (Score:2)
Polytron's tool chain supported partial local builds back in the '80s. We used Polymake and PVCS to build Comshare's EIS. If you changed just one C file, that was all that compiled on your system. Polymake basically had two paths it looked at for all dependencies, and their lib command had a nice rep ...
Did MS release GVFS under an Open Source License? (Score:2)
If MS released the GVFS under an Open Source License, then MAYBE their recent posturing re Open Source and Linux has some sincerity to it.
If they did not then it is probably more Embrace, Extend, Extinguish.
softcodeer
Re: (Score:1, Funny)
Might as well elect Donald Trump as president while you're at it.
Re: (Score:2)
MS? Doing a nice GUI? Bwahahahahahaha...
Re: (Score:2)
If they could ever fully move to the new GUI before making wholesale changes to the design language, maybe it could be nice.
Re:MS Linux ??? (Score:4, Funny)
Wait a second.
MS just invented an efficient way to check out the Linux kernel on Windows, so you can get the kernel sources, compile it, and then run Linux and ditch Windows?
That's great!!
Re: (Score:2)
MS just invented an efficient way to check out the Linux kernel on Windows
Unless you go into the Windows registry to make the underlying file system case sensitive, that probably won't work. The Linux kernel sources sometimes have files in the same directory that only differ by case, like:
net/netfilter/xt_hl.c
net/netfilter/xt_HL.c
include/uapi/linux/netfilter_ipv4/ipt_ecn.h
include/uapi/linux/netfilter_ipv4/ipt_ECN.h
It's not just a problem for the kernel, by the way. Having both "Makefile" and "makefile" in the same directory isn't unheard of. GNU make will then default to the lower-case makefile.
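(For reference, the registry switch alluded to above is a single value, but it is system-wide, needs admin rights and a reboot, and should be flipped with care; a sketch:)

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel" /v obcaseinsensitive /t REG_DWORD /d 0 /f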
Re: (Score:2)
This worked just fine with the Ubuntu for Windows 10 app.
Re: (Score:2)
This worked just fine with the Ubuntu for Windows 10 app.
No, it doesn't. Ubuntu under Windows 10 runs on WSL kernel emulation, not the Linux kernel proper.
Re:MS Linux ??? (Score:5, Informative)
That's really bad naming practice.
It's consistent naming for that project.
Any kernel configuration for netfilter with match support gets lower case names, and with target support it gets upper case names. In some cases there is support for both.
And the only real problem with this is ... Windows.
Re:MS Linux ??? (Score:4, Informative)
It may be consistent, but it is terrible.
Better would be:
xt_match_hl.c
xt_target_HL.c
Just because you can, doesn't mean you should.
Re: (Score:3)
It may be consistent, but it is terrible.
[...]
Just because you can, doesn't mean you should.
If you grew up with and are used to case sensitive file systems, and aren't aware of limitations in other systems because they've never been part of your work and life, why is this terrible?
The practice of Makefile + makefile is far from uncommon, with Makefile being the "production" one and makefile carrying local modifications.
If I remember correctly, one language used to have file names like Net::NIS and CRC::CCITT too, until porting was started and someone discovered that this would break in some other ...
Re: (Score:3)
Wait a second.
MS just invented an efficient way to check out the Linux kernel on Windows, so you can get the kernel sources, compile it, and then run Linux and ditch Windows?
That's great!!
Seeing as how the only purpose of IE/Edge is to download Chrome/Firefox, I guess they figured that was the next logical step...
Re: (Score:2)
Wait a second.
MS just invented an efficient way to check out the Linux kernel on Windows, so you can get the kernel sources, compile it, and then run Linux and ditch Windows?
That's great!!
And they use Android on Visual Studio and develop Xamarin with an Apache-style license for Linux, Android, and Mac OS X development... on VS 2015 Community Edition, and VS 2015 also uses Git too.
Yes, MS is getting with the times, since Windows is not what it once was. Amazing, isn't it, when competition is allowed again.
Re: (Score:2)
Yeah.
It would be better to use BSD or Hurd.
But let's face it, the world uses Linux.
And it's not developed by a Finnish kid any more:
https://lwn.net/Articles/65463... [lwn.net]
(None) 3.9%
(Unknown) 3.5%
The remaining 92.6% is big corps.
Re: (Score:3)
Would be a dream come true. Ditch the abomination (Windows) and do like Jobs did by putting a nice GUI on top of a "Unix".
Yes, it's fashionable to bash MS here. However, like IBM, MS got nicer after losing their monopoly.
Anyway, like any organization or company, they make great and shitty software. MS makes great office and development tools. Their operating systems and browsers are mediocre at best.
With GNU they make great operating systems and development tools but shitty office ...
Re:MS Linux (Score:4, Interesting)
You must have never used their enterprise Dynamics CRM and Dynamics NAV software.
If you can get it to run at all, half the shit is broken. Hell, the 2013 edition of CRM actually told you NOT to install the newest version of IE because it was "unsupported at this time." Yeah. IE (11?) didn't support CRM. Now I've got to explain to my clients why Windows Update completely broke their brand new system they paid thousands of dollars for.
Another great "feature" of CRM 2013 was a completely broken IMPORT system. So if you're trying to import anything other than mind-numbingly simple data like "addresses." You have to add stuff with timestamps, dates, and so on. You surely don't want ALL USER MESSAGES to lose their order and timestamps, right? TOO BAD. Even though CRM supports setting the timestamp, for certain record types the importer is completely broken and they never cared to fix it. So the "simple" solution? All you have to do is create a C# plugin, based on non-compiling code from an obscure blog. Oh wait, you can't just write a C# plugin. You have to use their HUGE SDK, their tools to "attach" the plugin to CRM and even that requires hours of reading manuals to figure out the right triggers. And if something goes wrong? ENJOY ZERO USEFUL ERROR MESSAGES. And yes, I turned on tracing (Which requires CHANGING THE REGISTRY in various places.) and debug mode.
Or how about SQL 2014/2015, which STILL doesn't properly support DPI scaling, the hallmark of Windows 10. If you use a high resolution with a small laptop screen, random dialog boxes will not only be shrunk, forcing you to squint to read them... no... that'd be too easy. Some of them are so broken that you can't physically view all of the contents of the dialog AND YOU CAN'T SCROLL TO SEE IT. The dialog dimensions are shrunk and the data is to the right of a window you can't resize!
THANKS MICROSOFT. I love fixing your shit at my job while having to explain to clients that Microsoft's "It Just Works (TM) if you stay within the MS ecosystem!" is all a bunch of bullshit and the "It works" trademark is actually paved with the blood of IT workers.
Microsoft could make great products. Too bad they never bother to finish any of them.
Re: (Score:2)
Dynamics NAV (it was originally called Navision) used to be rather nice to work with... until, err, Microsoft started making changes to it (around 2008-ish?). You could actually watch it turning to shit with each new version put out post-acquisition. I think I stopped bothering when I left the company I was working with, and went out of my way to tell subsequent employers to avoid the hell out of it.
Re: (Score:3)
Exactly; their OSes are mediocre.
I bought a 4K screen and good GOD, what a nightmare. I hate Apple A LOT, but give Apple kudos: no problems when Retina hit Mac OS X in 2011. It is freaking 2017, so who the hell uses 100 DPI anymore?! Really, a cheap-ass phone has a better screen than a $900 PC.
Re: (Score:3)
Eh, you bought a 4K monitor? Joke's on you. Sorry, but I think it's expected to be crappy, unless you're a dictatorship that obsoletes all hardware/software every few years (Apple) or have only legacy-free, DPI-independent GUIs and software (Android, web).
Well, here's something to blame MS for, and which might be the source of some scaling crappiness: they still don't allow font anti-aliasing without RGB ClearType? I just hate that. Especially when I just want to use Windows 7 or something on a CRT monitor, which ...
Re: (Score:2)
Why is the joke on me? TVs are 4K. Phones are 2K and 4K. Everything else that is cheaper is beyond 1K. I thought a freaking PC should be more advanced than a stupid TV or phone.
I don't get the fonts rant. No one cares about great fonts at 1K. Worse, people sit far away from TVs and use itty-bitty screens on their phones, yet sit close to a zoomed-in PC screen where you can see pixels. Come on, Microsoft.
Oh, and you wonder how bad it is and which version of Windows? I have the latest Windows 10, and the GDI is so ...
Re: (Score:2)
What everyone really wants: bring out a "Windows 11" that can run the Windows XP GUI, down to Luna theme support, give us the Quick Launch back, and don't require an i7 and an SSD just to run the desktop and updates. (The biggest task there is junking that Windows Update system and replacing it with one that instantly finds the updates.)
If we want to launch the UWP runtime and grid of squares? Let us run it when we want from the Quick Launch, desktop, or start menu.
Bring back the old file manager, just add tabs to it!
Keep ...
Re: (Score:2)
You mean, like Office Open XML?
Re: (Score:2)
With what did Office Open XML (OOXML) clash? The free office suite was using Open Document Format (ODF) at the time.
Re: (Score:2)
git is not a company.
I respectfully differ - Microsoft is, of course, git. The gittest, actually.
Re: (Score:2)
Are you talking about Gnome's Virtual File System, or something else?
https://en.wikipedia.org/wiki/GVFS
Re: (Score:2)
I'm pretty sure AC #53795299 is referring to the MTP backend of GNOME Virtual File System [debian.org].
Re: (Score:2)
"Linux KVM" as in "If you want to run Windows GUI apps and Linux GUI apps at once, buy a desktop PC with Windows and put it on a KVM with another desktop PC running Linux." Or do even low-end Intel CPUs support VT now?
Re: (Score:2)
You failed to answer the question. Because you know he's right.