
Microsoft's CoApp To Help OSS Development, Deployment

badpazzword writes "Microsoft employee Garrett Serack announces he has received the green light to work full time on CoApp, a .msi-based package management system aiming to bring a wholly native toolchain for OSS development and deployment to Windows. This will hopefully bring more open source software to Windows, which will bring OSS to more users, testers and developers. Serack is following the comments at Ars Technica, so he might also follow them here. The Launchpad project is already up."

Comments Filter:
  • by DoofusOfDeath ( 636671 ) on Thursday April 08, 2010 @08:14PM (#31784048)

    Ask me about CoApp, I'll tell ya everything ya wanna know.

    How do I know that MS won't file a software patent related to this work?

  • by vux984 ( 928602 ) on Thursday April 08, 2010 @08:16PM (#31784070)

    You do know that the work agreement that you signed during orientation stated that Microsoft owns any software that you produce on your own time, as long as Microsoft may compete against said software at some point in the future?

    Perhaps that was some of the red tape that needed to be cut. You can run things past management and get legal to sign off on something that amounts to an agreement between employee and employer that a given project belongs solely to the employee. I don't know about Microsoft specifically, but lots of companies are amenable to this sort of thing.

    Sometimes there are legitimate concerns that have to be resolved... often it's just a matter of jumping through the required hoops.

  • by nurb432 ( 527695 ) on Thursday April 08, 2010 @08:19PM (#31784100) Homepage Journal

    And more *Windows* users, more Windows licenses, more vendor lock-in, and fewer alternative OSes. Yeah, real nice of them to 'help' us out. No thanks.

  • by Anonymous Coward on Thursday April 08, 2010 @08:39PM (#31784258)

    Curiously, given Microsoft's recent wrist-slappings by the EU, fostering the development of "competing" products could help Microsoft rather than harm it:

    Take, for instance, the recent "browser choice" screen. If Microsoft had already been fostering a package downloader at that point, they would not have needed to do anything to comply with the EU: their OS would ship with IE by default and "offer" a nice little package handler for those "other browsers".

    If the EU were to press the point and try to stick MS with the stigma of not actually wanting any other browsers on their OS (by making users take a roundabout route to their browser of choice), MS could point the finger right back at how much capital they had invested in the alternative software ecosystem, and how they had leveraged their power to help bring FOSS and a package manager to their OS.

    In short, creating a package manager like this is a good way for MS to be more two-faced than ever.

    Not that I'm going to complain; ALL corporations are two-faced, and a well-supported package manager, together with better acceptance of the Win32 platform by the FOSS community (not just Windows; there are attempts at FOSS Win32 platforms), is a good thing all around.

    I just don't think MS is overly concerned that this will compete with their software ecosystem at this point; they seem more convinced that government regulators are the bigger threat.

  • by grcumb ( 781340 ) on Thursday April 08, 2010 @08:39PM (#31784264) Homepage Journal

    Ask me about CoApp, I'll tell ya everything ya wanna know.

    Garrett Serack CoApp Project Owner

    Okay, serious questions:

    Assuming that you've looked at APT and similar packaging tools, and given that you're still convinced that there's a 'Windows Way' (your term) to handle deployment that differs from Linux best practices, how do you plan to address:

    • Package Repositories - This is one of the main strengths of Debian and related distros. Do you think it's even possible to replicate this level of community control on Windows? I know you've mentioned decentralisation, but have you considered the implications of such an approach? What is the cost of failing to effect consistent, formalised management of package builds?
    • Dependency Management - This issue is largely done and dusted on Linux, but remains a dog's breakfast on Windows (albeit not as frustrating today as it was in the mid-90s). In the absence of centralised repositories and the Unix toolchain philosophy, how do you propose to cope better with dependencies?
    • File locations - How do you propose to manage the proper placement of libraries etc. when the conventions concerning where to put such files are not nearly as well defined on Windows? I'm suggesting here that you need cultural leverage rather than technical answers. You need to change perceptions, not toolkits.
    • Security - Do you think it's even possible to replicate one of the main strengths of Linux package repositories: the ability to curtail security risks such as malware and flawed code?
    • Scripting Interfaces - Say what you like about make and other command-line utilities, but as a busy sysadmin, I consider GUI package management a waste of my valuable time. If I'm going to deploy regular security updates, for example, I want to know that I can script every aspect of the operation. Even the tab-completion features in aptitude make it many times more efficient than a point-and-click interface. What is the potential for scripted deployment/management of packages under your system? Why?

    I guess it's clear by now that I'm suggesting that what Windows needs is not another new way to do things. Package management in Debian, for example, is vastly more advanced and sophisticated than anything on Windows, and yet you feel the need to do things the 'Windows Way'. Don't you think you'd be better off learning from others who have been dealing successfully with package management for over a decade now?

    These are all serious questions and I expect to be challenged by your replies. I applaud your courage in taking on this huge task. I also think that you're going to need to learn a lot more humility than you've demonstrated so far if you want to achieve something better than a new brand of anarchy in packaging.

  • by h4rr4r ( 612664 ) on Thursday April 08, 2010 @08:53PM (#31784404)

    How are you handling dependencies?
    Will this be the standard Windows arrangement, where every app carts around all its own libs, with the wasted space and outdated/insecure funland that entails?

  • by h4rr4r ( 612664 ) on Thursday April 08, 2010 @09:04PM (#31784500)

    Cygwin at least gives you a usable CLI environment on Windows. Who installs this type of server software and doesn't install Cygwin?
    A server without proper GNU tools is painful to administer.

  • by h4rr4r ( 612664 ) on Thursday April 08, 2010 @09:05PM (#31784510)

    All but the last one are fine. I have some Windows boxes I have to deal with, and I sure as hell do not want to be stuck using some GUI IDE just to build the latest $foobar.

  • by His name cannot be s ( 16831 ) on Thursday April 08, 2010 @09:08PM (#31784536) Journal

    If it does, so be it.

    I've spent the last couple of years at Microsoft working to make PHP better on Windows, and validating PHP apps including CMS systems like Drupal on Windows. Seems to me they want some competition.

  • by martin-boundary ( 547041 ) on Thursday April 08, 2010 @09:44PM (#31784792)
    Having read your blog post, I can see what you're trying to do, but as a Linux/Unix developer I have zero interest in jumping through Windows-specific hoops. *But* I do go to great lengths to follow POSIX standards and make sure that my autoconf tarballs are clean, and I don't expect this to change any time soon (or even not so soon).

    If your target audience is like me, then you're best off creating an automated conversion tool that can take a standard tarball and create an MSI package (or whatever) to your specifications with minimal human intervention. Ideally, this ought to extend seamlessly to the "make check" incantation, which is an important sanity check for cross platform development, since merely compiling the source successfully is no guarantee of correctness.

    Note that doesn't mean that you have to accept *nixish directory names etc, it just means that when such a tool sees a standard tarball construct, it knows how to convert it to something sensible for the Windows platform.

    As you pointed out yourself in the post, standard tarballs just work (mostly). You can gain a lot by reusing this property as a foundation for your project, rather than expecting people to adapt to your own design.

  • by Animaether ( 411575 ) on Thursday April 08, 2010 @10:07PM (#31784964) Journal

    My intent is to completely do away with the practice of everybody shipping every damn shared library.

    If you only succeed in getting windows folks to learn this lesson you should be made a saint.

    The major problem with this is that, as mentioned, Windows doesn't have a package manager, and Microsoft keeps telling developers that they cannot expect a user to have internet connectivity.

    So when you've compiled your application with Visual Studio 2008 SP1 with the ATL update installed (which means every user of your software will also need the Visual C++ 2008 SP1 ATL runtime redistributable package), you're left with scant few options.

    The most reasonable of which are:
    A. If you're distributing something boxed, to include the redistributable package on the media (CD/DVD/USB stick/whatever).

    B. If you're distributing something via downloads:
    B.a. Include it because - again - you're not supposed to assume the user will have connectivity.
    B.b. Don't include it, but detect whether the user has it installed and has internet access, and then offer to download it and install it (silently or otherwise).

    Of course for option B.b., Microsoft further seems to suggest that you do not link to -their- download pages (after all, the URLs could change, etc.) but instead host the binaries yourself.

    The only reason, then, that Windows developers tend to include or download shared libraries at runtime is simply that there -isn't- a package manager for Windows.

    So don't blame the developers - blame the lack of a package manager. Which I fully welcomed the last time a topic hinting at a package manager popped up on /.
    Unfortunately, it seems like these would be two rather separate projects?
    http://it.slashdot.org/story/10/03/24/189248/Microsoft-To-Distribute-Third-Party-Patches [slashdot.org]

  • by His name cannot be s ( 16831 ) on Thursday April 08, 2010 @10:20PM (#31785036) Journal

    I think you had no choice but to choose the BSD license instead of the GPL. Had you chosen the GPL, it is likely the project would have been immediately rejected by Microsoft.

    That's not true actually.

    I didn't tell anyone what license I was going to use until a few days ago, by which time they'd already signed the agreement.

    In addition to that: as a Microsoft employee, I've contributed code to GPL, LGPL, BSD, PHP and Apache licensed projects.

  • by shutdown -p now ( 807394 ) on Thursday April 08, 2010 @10:58PM (#31785256) Journal

    Visual C++ has had correct - i.e., standards-compliant - scoping for variables declared in a for-loop by default since VC++ 2003 (you can still have the old behavior explicitly enabled by a compiler switch).

    Before that, you could control it with a switch, though the default was non-compliant, and MFC headers wouldn't compile if you turned it on - which shouldn't really concern you if you're compiling portable code, right?

    In practice, this means that there was only one release of VC++ which was non-compliant by default - VC++2002. The one before it, VC6, was released in 1998, before the final ISO C++ specification came out, so it's kinda silly to hold it against it. If you recall the original story, the "wrong" behavior was actually part of the draft spec at some point - they've been going back and forth on it.

    Also, it is really a minor problem by itself, since you can trivially work around it by doing:

    #define for \
      if (false); else for

    or compiling with the equivalent -D compiler flag. This will ensure correct scoping, and will not affect anything otherwise (the compiler will, of course, optimize away the always-false branch).

    In contrast, g++ 2.95 (which was the stable version of g++ until mid-2001 - assuming you consider g++ 3.0 stable) didn't even have proper namespace support: it did parse namespace { ... } and using declarations correctly, but pretty much just ignored them and dumped all identifiers into the global namespace. That is something that is nowhere near as easy to work around.

  • by His name cannot be s ( 16831 ) on Thursday April 08, 2010 @11:14PM (#31785372) Journal

    I do have one question. Why, exactly, do you think that this sort of approach is likely to be easier than doing what Apple did and simply exposing a Posix API that is actually useful?

    Because, even if we could get a great POSIX experience on Windows, it leaves out Windows developers.

    One of my goals is to get Windows developers in the OSS game.

    On top of that, there is a hell of a lot of non-POSIX open source software on Windows that needs fixing too.

    Look at it this way: would you respect someone who told you the best way to get Firefox running on Linux was to use some sort of Windows emulation layer... like WINE? No, because Firefox *can* compile for Linux. Same thing with nearly all the open source I encounter. I want the OSS quality and experience on Windows to exceed that of commercial developers... it needs the most love.

    Like I tell people:
    Working as an open source software developer at Microsoft is like being a preacher in Vegas. I figure I'm in the single most important place in the universe that I can be.

  • by dudpixel ( 1429789 ) on Thursday April 08, 2010 @11:17PM (#31785388)

    How do you go about handling different versions of a library?

    Will we eventually see the day where Microsoft has a central location for shared libraries in Windows (writable only by "root") and also a decent package management system, you know, like apt/rpm?

    This isn't a flame, just pointing out some things that would make Windows fantastic for me. I really, really love the directory structure and package management of Linux, and the benefits they bring. If Microsoft could bring some of that goodness to Windows, I might be tempted to switch... no, really. Just think: it could reduce the "clutter" that inevitably builds up in a Windows system over time (often requiring the six-monthly reinstall), and updating your entire system would be possible from a single app. Sorry if this sounds like a troll - it really isn't intended to be.

  • Re:It won't.... (Score:3, Interesting)

    by shutdown -p now ( 807394 ) on Friday April 09, 2010 @12:17AM (#31785736) Journal

    Actually, this will be mighty handy for developers trying to use OSS libraries on Windows. Right now, it's a mess, if you've got more than a few, and they have mutual dependencies - you get all kinds of wonderful problems with precompiled binaries, such as having them compiled with different compilers (MinGW vs VC++), or with different compiler switches that break ABI compatibility, etc.

    And compiling from source is fun, because you have to deal with all the trivial things such as include and library paths yourself. Oh, and don't forget that a lot of OSS stuff has makefiles generated by autoconf, and many autoconf scripts just freak out on MinGW right away.

  • by styrotech ( 136124 ) on Friday April 09, 2010 @12:20AM (#31785752)

    As an admin that maintains both Linux and Windows systems, this sounds really cool. Hopefully the guys writing the Tomcat AJP connectors for IIS will use it (that stuff can be a nightmare).

    To me though the initial setup is never the main problem (except with AJP/IIS hehe), it's the ongoing maintenance and patching of 3rd party stuff that suffers the most on Windows.

    Sure, Windows Update / WSUS make all the MS stuff easy, but 3rd-party Windows apps are a nightmare to keep up to date network-wide. They all have their own separate update mechanisms that mostly require an admin to be logged on in order to work.

    I'd love to see Windows Update and WSUS allow 3rd-party repos (e.g. the equivalent of adding entries to /etc/apt/sources.list) so that practically everything could be patched via Windows Update / WSUS without admin intervention on each machine.

    I don't know if your work will end up tackling all that, or one day get incorporated by the existing patch mechanisms, but I can still dream :)

    Best of luck anyway.

  • by Anonymous Coward on Friday April 09, 2010 @01:20AM (#31786070)

    (Posting anonymously because I somehow managed to exceed 50-posts-per-day limit for my karma.)

    Correct me if I'm wrong, but why would "windows.h" have any problems with for-loop scoping either way? IIRC, the problem was strictly with MFC and ATL, for both of which there are much better options available, in any case.

    I may be wrong here - this problem is a really old one, and I only vaguely recall the last time I hit it. If I am (or if you really need MFC/ATL), a more general solution in this case - since you're already using the preprocessor - is to use that "#define for ..." trick for your code, but skip it for "windows.h". VC++ has a non-standard way of doing that in the form of the push_macro and pop_macro pragmas, so you can do something like this:

    #ifdef _WIN32
        #pragma push_macro("for")
        #undef for
        #include <windows.h>
        #pragma pop_macro("for")
    #endif

    portable-code;

    #ifdef _WIN32
        xxxxx
    #else
        yyyyy
    #endif

    more-portable-code;...

    Not very nice, but it's still not that much boilerplate, and it scales well. You can do even better by making your own header which just does #include "windows.h", wrapping it in push_macro/pop_macro, and then using that everywhere.

    I don't dispute that it's a problem, anyway. I do recall it being rather annoying back in the day, but then doing C++ back then was generally annoying, because standards compliance was lacking all over the place. I recall being similarly frustrated by e.g. the lack of std::vector::at() in the g++ standard library - God knows why it wasn't there. Or, getting back to VC6, advanced template magic such as partial template specialization was very much hit-or-miss. Oh, and no RVO, which really is a perf killer. And so on. I dare say that, against this background, the for-scope issue is really just a minor part of the overall bleak picture, with a relatively trivial workaround.

  • by Sun ( 104778 ) on Friday April 09, 2010 @04:07AM (#31786786) Homepage

    I have a plan for allowing any publisher to publish packages in the CoApp ecosystem, provided they meet two qualifications:
    - They must be able to host their repository meta-data on an SSL protected connection.
    - All packages must be digitally signed with a certificate that chains back to a commonly-accepted CA.

    That doesn't seem like a very good solution. The whole point of APT is that ANYONE can open a repository, including a digitally signed one.

    If, for whatever reason, you don't like PGP, that's fine. Go with X.509. Just don't force a SPECIFIC root CA - allow the package user to choose his CA (or CAs) of choice. That way, for example, a company can set up a local repository to push packages to its own employees.

    Same goes with where you host this. Your answer did not make it clear whether any server can be configured, or just MS's servers.

    Shachar

