Ask Slashdot: Should Developers Install Their Software Themselves?

Paul Carver writes "Should developers be responsible for installing the software they develop into production environments? What about System Test environments? I'm not a developer and I'm not all that familiar with Agile or DevOps, but it seems unhealthy to me to have software installs done by developers. I think that properly developed software should come complete with installation instructions that can be followed by someone other than the person who wrote the code. I'd like to hear opinions from developers. Do you prefer a workplace where you hand off packaged software to other teams to deploy, or do you prefer to personally install your software into System Test and then personally install it into production once the System Testers have certified it? For context, I'm talking about enterprise-grade, Internet-facing web services sold to end users as well as large companies, on either a credit-card or contractual billing basis, with service level agreements and 24x7 Operations support. I'm not talking about little one-(wo)man shops or free, Google-style, years-long beta services."
  • My answer is mostly 'no'. The only benefit I see is that developers will become more motivated to have a simpler and better installation process. And that's a pretty nifty benefit.

    But I've been on both sides of the fence. I've rarely been a developer who did anything less than thorough testing before declaring something 'done'. But I know that I'm an incredible rarity in that way. And on the devops side of things, the less ability developers have to push things, the more likely decent QA will get done before stuff ends up in production. But developers frequently also give you installation instructions that are unrepeatable special case installs with rollback instructions that make no sense.

    I think that one good way to balance this is to have a preliminary test environment into which developers are allowed to push things. They are given limited rights to this environment, basically just the ability to upload some software and run a deployment script. This encourages them to write functioning deployment scripts. But it prevents them from shoving things into production because it just 'has' to go out today and it's such a small, low-risk thing. Of course it'll work!
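
    The "limited rights" idea above can be sketched as a deploy wrapper that refuses any target other than the developers' sandbox. This is only an illustration: the environment names, the artifact name, and the wrapper itself are hypothetical, not something from the thread.

```shell
#!/bin/sh
# Sketch: a deploy gate that lets developers target only their sandbox
# environment. Environment and artifact names are illustrative.
deploy() {
    env="$1"
    artifact="$2"
    case "$env" in
        sandbox)
            echo "deploying $artifact to $env"
            # a real script would unpack the artifact, run migrations,
            # restart the service, and record the release
            ;;
        *)
            echo "refusing target '$env': developers may only deploy to sandbox" >&2
            return 1
            ;;
    esac
}

deploy sandbox app-1.2.3.tar.gz
```

    Because the only path into the sandbox is the script itself, developers end up debugging the deployment script as a side effect of using it.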

  • by Bigby ( 659157 ) on Monday September 24, 2012 @11:16AM (#41437255)

    This question was not directed at consumer applications. It is directed at enterprise applications. InstallShield just won't cut it.

    Developers should not install it. Nor should they help install it. If the Configuration Management team cannot do it themselves, then they need to send it back to the developer for better packaging or instructions.

    This is to the developers' benefit. When new environments are set up, they shouldn't have to contact the developer to deploy the application to those new environments.

    And as always, titles with questions are typically answered with a "no".

  • My perspective (Score:5, Insightful)

    by Sparticus789 ( 2625955 ) on Monday September 24, 2012 @11:17AM (#41437277) Journal

    As a systems administrator, nothing frustrates me more than when a developer sends me an e-mail that says "install this".

    First, they do not always say what the software is supposed to do, so I cannot prepare for any security requirements. I am rarely told if it needs a port opened; I have to check the security logs to see if the software is trying to communicate through the network.

    Second, while I may have the ability to fix their software, I prefer not to mess with their code or configuration. Since I may not know what their software is supposed to do, I may get it running but not know whether it is operating properly.

    Third, if you are asking me to install alpha or beta versions on a live system, it's usually a bad idea. I have no problem installing it on a test server or a VM, but I hate putting it on a production box.

  • Title says it all - giving a developer access means they can deploy undocumented "hacks" and "quick fixes", usually intending to document and normalise them later on.

    Forcing them to hand off installation and maintenance to a second team means documentation is enforced, standards are enforced, and quick fixes are better vetted.

    For the record, I wear both hats in different situations for different clients - as a developer I don't care about the production environment, and I like it that way. I care about the bugs the production environment uncovers, so the UAT environment should be identical, but even then the developer shouldn't have to care about it - that's the systems admin team's job.

  • by SQLGuru ( 980662 ) on Monday September 24, 2012 @11:21AM (#41437365) Journal

    If it's enterprise class, you have a team responsible for the stability of the servers (support) that is not the same as the developers. Sometimes you even have a team responsible for just deployments (depends on the size of the org). The developers should have access to install in DEV, but not in TEST. This allows you to test your deployment process (and backout process) before you touch your PROD environment.
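
    One common way to make the deploy-plus-backout pairing rehearsable, sketched here with a release directory and a "current" symlink; all paths and version numbers are made up for illustration:

```shell
#!/bin/sh
# Sketch: every deploy records the release it replaced, so a matching
# backout can restore it. Rehearse both in TEST before touching PROD.
deploy() {
    release="$1"
    mkdir -p "$release"
    # remember what we are replacing, so backout knows where to return
    [ -L current ] && readlink current > .previous_release
    ln -sfn "$release" current
    echo "now serving $(readlink current)"
}

backout() {
    if [ ! -f .previous_release ]; then
        echo "nothing to back out to" >&2
        return 1
    fi
    ln -sfn "$(cat .previous_release)" current
    echo "rolled back to $(readlink current)"
}

deploy releases/1.2.2
deploy releases/1.2.3
```

    The point is not the symlink trick itself but that the backout path gets exercised as often as the deploy path.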

    Even if not financial in nature, one of my former employers lumped it all under SOX compliance.

  • by nick_danger ( 150058 ) on Monday September 24, 2012 @11:24AM (#41437405)
    There was a time in my career when I'd have said yes, the developers make the most sense, as they are the only ones who really understand the process. But now I know that's exactly the problem with having developers do the installs. For a production system you need a well-defined process that produces repeatable results. The only way to ensure that is to have a separation of duties, whether it's an administrator acting as intelligent hands for a human-readable script, or simply kicking off a developer-provided, machine-readable script.
  • No. (Score:4, Insightful)

    by dremspider ( 562073 ) on Monday September 24, 2012 @11:26AM (#41437447)
    When the developers leave and there is no documentation and the thing blows up, no one will know how it works. Handing the product and the documentation off to someone else provides a final check to ensure that the documentation doesn't suck. Developers tend to know their product intimately and are therefore likely to leave out steps in the documentation, because they know how to do it anyway. I have seen this a number of times. When they leave, it takes reverse engineering to figure out what was done. I am a big proponent of documentation. Here is how I think it should be done:

    1. Development happens where developers are able to test using a test environment.
    2. Developers hand off everything to the System Admin (SA), who installs it on a test environment as well. If issues are found, the SA works with the developers to solve them, corrects the documentation, and proceeds to step 3.
    3. Install in production.
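
    One small thing that helps step 2 go smoothly is shipping a preflight check with the handoff, so missing prerequisites surface before the install begins rather than halfway through. A minimal sketch; the prerequisite names checked here are just examples, and a real list would come from the developer's documentation:

```shell
#!/bin/sh
# Sketch: a preflight check the SA runs before following the install doc.
# It reports every missing prerequisite command instead of dying on the
# first one.
preflight() {
    status=0
    for cmd in "$@"; do
        if ! command -v "$cmd" >/dev/null 2>&1; then
            echo "missing prerequisite: $cmd" >&2
            status=1
        fi
    done
    return $status
}

preflight sh tar && echo "prerequisites satisfied"
```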

    The only issue with this is that steps 1 and 2 can sometimes become filled with accusations. SAs think the product sucks, and developers think the SAs are idiots who need everything spelled out for them. It becomes a lot worse when the developers are contracted out (which is common). This needs to be avoided; both parties should see themselves as working together to create a better product.
  • by leuk_he ( 194174 ) on Monday September 24, 2012 @11:28AM (#41437477) Homepage Journal

    Creating a flawless installer costs time, and time means money. If the software is only installed twice (on the test system and in production), a flawless installer is very expensive. If it is installed many times (more than 5 systems), automating the installation will pay for itself.

    Generally it is good practice to keep the developers out of the production environment.

    However there must be exceptions.
    -Emergency fixes.
    -If there is no good team of maintainers who can actually look at the installation logs and understand them, the developer might be the better option. A good maintainer who only blindly runs a script is not the best option.
    -Uptime might be more important than the principle.

  • by Zontar_Thing_From_Ve ( 949321 ) on Monday September 24, 2012 @11:29AM (#41437505)
    I work for a (mostly) US-based Fortune 500 company that, for various reasons, I would prefer not to name. Short story - if you are a big enough company that external auditors come to visit you (we are) and your Dev people install code, even in test environments, you may fail an audit in the USA. Trust me, it is bad for business to fail this kind of audit. I'm in a system admin type job and a bit isolated from the workings of our Dev group, but it's my understanding that we run Agile or some variation of it, and we absolutely do not allow Devs to ever install code in test or production environments.
  • by Rhaban ( 987410 ) on Monday September 24, 2012 @11:29AM (#41437509)

    Developers should not install it. Nor should they help install it. If the Configuration Management team cannot do it themselves, then they need to send it back to the developer for better packaging or instructions.

    As a (web) developer, I strongly agree.
    I'll just add that the Configuration Management team should have some knowledge about the software and the environment they manage.

    I've often seen software come back because somebody didn't have a clue what their job was. ("prerequisite: apache 2.x" should be enough for anyone: I don't have time to write a doc about how to install standard software, especially when I don't know the target server configuration)

  • by bluefoxlucid ( 723572 ) on Monday September 24, 2012 @11:29AM (#41437513) Homepage Journal
    If the developers don't dogfood it, they won't understand wtf the problem is. Make them install that shit themselves, make them watch the end user install it, make them fix it when the end user fucks it up because they don't know the formula for the secret sauce. Get that secret sauce shit out of there and give me something straightforward.
  • by Todd Knarr ( 15451 ) on Monday September 24, 2012 @11:32AM (#41437571) Homepage

    You should have at least 3 environments beyond the personal ones the developers use to develop and unit-test code: development common, QA and production. Development common should be where dev does integration tests, including installation. If developers are responsible for creating the installation tools, they should be doing the installations there so they can debug those tools. If someone else is responsible for writing them, the devs should be working with them to make sure the tools do what the software needs to install correctly. You can't get the installation tools right if you don't test and debug them, and when they interact with the software the developers are the best ones to figure out what's wrong and what's needed to fix it.

    Developers should not be doing the installation into the QA environment. They should be handing the installation tools over to QA and letting them run them according to the deployment instructions. That's the only way to confirm the instructions were really complete and that everything works per the documentation: by putting it in the hands of people who didn't write it and letting them deploy it. That way, when it comes time to deploy in production, you've got some assurance that the deployment will work because it has worked before.

    Now, if things do go wrong you need dev involvement. They wrote the software and the tools, and they'll often recognize exactly what's going wrong where Ops and QA won't. They're also the ones most likely to be able to give you a quick fix that gets production up and running without having to back out and try another day. If you've already invested the time bringing the systems down and deploying the new version, it makes no sense to revert to the old version and waste more downtime tomorrow re-doing the deployment, if the only problem is that a config-file path contains the version number in QA but not in production and a quick edit of the newly deployed config file will clearly fix the problem.

  • by rthille ( 8526 ) <web-slashdot AT rangat DOT org> on Monday September 24, 2012 @11:34AM (#41437595) Homepage Journal

    Of course the devs need to install it into their test environment. And QA needs to install it into their test environment, but Ops needs to install it on the production servers.

  • by JASegler ( 2913 ) <[moc.liamg] [ta] [relgesaj]> on Monday September 24, 2012 @11:49AM (#41437807)

    I've been in companies that practiced it both ways.

    Company A) Developers can never, ever access production, no matter the reason. The end result in that situation was bugs that couldn't be reproduced on the desktop or in the QA environment. The problems went on for months until I had the lucky break of a developer moving into the system admin role for the production environment. When he looked things over, he discovered the previous admin had not configured things in production properly, to the point of lying about it when I had previously sent a checklist of things to verify. If I had had access to the systems, the problem would have been resolved in a few days rather than months.

    Company B) Developers own the software and hardware from end to end. In my current company we have to package the software up into a deployment system and deploy it that way. However, we do have full access to all the systems. Can/do we do hacks and quick fixes? Yes, if the situation warrants it. But in the end it has to get rolled into the official distribution to be correct. Can it be abused? Yes. But that is why the culture of the company becomes very important. In the end you either trust your developers to do the right thing or you don't. If the company can't trust the developers to have ownership of their code and systems... well, at least for me, I would say I'm working at the wrong company.

    FYI, I enjoy working at company B far more than I ever did at company A. Given a choice I will never go back to an environment where developers don't have access to production.

  • by landoltjp ( 676315 ) on Monday September 24, 2012 @11:52AM (#41437873)

    I'm a build and configuration manager (and a LONG long time software developer), and I completely agree.

    If developers want to employ new (as in industry-wide, or new as in company-wide) technology, they are free to install, configure, test, and prove out the functionality and feasibility of the software on their own machine. But that should be the limit of their reach.

    Any shared / integrated environment, from QA on up to Pre-Prod and Prod, should be off-limits to developers' sticky fingers. If an installation and/or configuration problem arises with the application AND a developer is needed for support, then they should absolutely be called in, but in a CONSULTATIVE role only. I.e., their fingers never touch a mouse or keyboard.

    It's common (if not expected) for developers to build an intimate working relationship with the technology they're using. In the world of "familiarity breeds contempt", it means that a developer can overlook something that seems "obvious" to them, but not to everyone else. How many times has a developer been "called in" to fix an installation problem, and the fix wasn't documented or proved out? (Not to say that integrators are saints in this area, but they damn well SHOULD be.) Or a developer has access to a test region, and just "hops onto the machine" to tweak a parameter that caused a test suite to fail?

    QA (and QC testers) need to count on the stability of a machine, its known state. If a test is failing 100% of the time, it should keep failing. If it's passing 100% of the time, it should keep passing. If the parameters of an integration machine change without the knowledge of the QA team (knowledge that lets changes be scheduled in and the testing suites be updated), the validity of test runs is nullified and testing costs go through the roof, adding to the pressure to "skip the tests" and ship / deploy. Ick.

    The instructions / script for installing a package on a machine should be EASILY understood by an installer who is skilled in the practice of software installation, and no more (i.e., not written for a senior engineer, not written for the janitor). Enough information to have it properly installed and configured, some basic troubleshooting, and a clear escalation path should issues not get resolved. Skip the 65 pages of configurable parameters if all the installer needs to alter are 12 parameters on the target machines, but don't skip ANY of those 12. If one is missed, find out why. If 10 extra are there, see which ones are needed for the different regions and skip the rest.
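
    The "exactly those 12 parameters" idea can be sketched as a parameter file handed over with the package: every value the installer must set, each with a one-line reason, and nothing else. All the names and values below are hypothetical, purely for illustration:

```shell
# install.conf -- the only file the installer edits per environment.
# Every name below MUST be set; nothing else in the package needs changing.
APP_PORT=8443              # listener port; firewall must allow it inbound
DB_HOST=db01.example.com   # per-region database server
DB_NAME=billing            # schema the release scripts expect
LOG_DIR=/var/log/myapp     # must exist and be writable by the service account
# ...one line per required parameter, each commented with why it exists
```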

    My personal line in the sand is the Developer integration area; that stays with the code monkeys. It's important to be able to test out package installs, and this is the type of machine upon which to do it (which is not to say there is a single DIT - have multiple, including one just to test out package installs if need be). QA regions and beyond are under tight control. I work in banks a lot, so Pre-prod and Prod are under a metaphorical armed guard.

    Once the installation and config documentation is tested by the developers, the docs get thrown over the wall to the integration team (optimally, QA should be involved in a doc review to make sure that what's in the doc is what is required, no more and no less). For a Waterfall / SDLC methodology, this documentation review and handover is one of the gating steps. For an Agile / Scrum / XP methodology, this can be considered a single story, where the success condition is a working installation and usable, tested documentation.

    The key is not to go bat-crap crazy on it, but to ensure repeatability and workability. It would be GREAT if the install could be automated (or run unattended, or have little or no intermediate steps requiring human intervention) so as to reduce integration errors, but that is dependent on the requirements of those managing the QA regions and above.
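
    "Unattended" mostly boils down to: read every answer from configuration and fail fast instead of prompting. A minimal sketch of that property, with a hypothetical config file name and parameter:

```shell
#!/bin/sh
# Sketch: an unattended installer never prompts; it reads its answers from
# a config file and aborts loudly when something is missing.
unattended_install() {
    conf="$1"
    if [ ! -f "$conf" ]; then
        echo "config file '$conf' not found; aborting instead of prompting" >&2
        return 1
    fi
    . "./$conf"
    echo "installing with APP_PORT=${APP_PORT}"
}

printf 'APP_PORT=8443\n' > demo.conf
unattended_install demo.conf
```

    Because there is no interactive path, the same invocation behaves identically whether a person or a scheduler runs it, which is what makes the install repeatable.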

  • by ccguy ( 1116865 ) on Monday September 24, 2012 @12:18PM (#41438377) Homepage

    If Dev and Prod aren't identical, then their Configuration Management team has failed.

    They usually aren't hardware-wise, for budget reasons; they hardly ever are configuration- and environment-wise (considering other running programs as part of the environment); and they definitely aren't identical user-base-wise.

    While I understand devs shouldn't modify anything in prod, denying them access to the system to see in real time what the fuck is going on when something doesn't work is just asking for the solution to take 10x as long, with frustration, finger-pointing, and longer hours (usually the developers', and often unpaid).

    Seriously, there's nothing wrong in telling the developer "come down with me, I'll start the damn thing in front of you, you can check a bit of the system so you go back with a clue".

  • by miltonw ( 892065 ) on Monday September 24, 2012 @12:23PM (#41438469)

    Sorry, no. I couldn't disagree more. I've worked in places like that, where developers were unable to get close to the production servers; things wouldn't work in production (but worked fine in dev), and we were unable to do anything except send builds with more and more debug info, working late nights to get things done, with a client more and more pissed each day.

    Then it turned out that, contrary to what they said, dev and prod weren't identical, in a number of important things such as library versions.

    If the install team is unable to install it, then for fuck's sake *get a developer to see the installation process for himself* so he can come back just once with all the data he needs.

    Then your process is screwed up. The solution isn't to open production to developers but to get your process fixed. Part of the process must be ensuring that the production environment is regularly and consistently copied over to development to ensure the problem you describe can't happen. That's the solution.

    I have no problem with giving developers read-access so they can look at the production environment to "gather data" but developers must not be able to ever install or change anything in production. That's just good sense.

  • by Imagix ( 695350 ) on Monday September 24, 2012 @12:30PM (#41438597)
    You didn't understand the beginning of my post. Having the developer watch is not the same as having the developer do the install. Read-only vs. read-write. I'm OK with dev being able to read-only. They can't "infect" production with their assumptions if they can't change anything. If they find something that needs adjusting then they should be adjusting their own environment to replicate the problem (or simulate it if necessary), construct the fix, then communicate the fix to those who should be doing changes to Test and Prod. If Dev and Prod aren't the same for whatever reasons, then the powers-that-be have to understand the costs that are incurred by not having them the same. You save the $5k on using cheaper hardware in Dev, but cost them $50k in downtime because that difference causes a bug to be exposed in Prod. (Picking numbers out of thin air). And yep, sometimes that $5k savings does end up really being $5k in savings as the difference in hardware had no impact on the environment. It's something that needs to be considered. (And yes, I'm very firmly on the development side of the fence, and my software gets installed into large production environments that I will never see, and in some cases am not legally allowed to see. Something about foreign nationals not allowed to touch or see any hardware that controls satellites...)
  • by Sir_Sri ( 199544 ) on Monday September 24, 2012 @12:56PM (#41439043)

    depends on the software....

    In terms of, for example, the web database mentioned in the question, you're into a lot of installing an SQL server (that's generally separate from the developer team, but it's a 30-minute job if you know what you're doing), and then building the database. Once you've built the database you don't necessarily have an "installer"; you just send them an image of the whole installation (or the whole physical machine).

    If you're a company that specializes in making billing software databases then sure, you might have an install script and setup parameters that your installer guys can execute.

    There's sort of a hierarchy of techie people. Not everyone at a higher level can do the low-level jobs, but one person at a higher level might be able to do, in 20 minutes, lower-level work that would otherwise take weeks. This isn't a "consumer" product; in enterprise you might send out a "linux guy" (who doesn't know Linux scripting; that's an advanced linux guy) to do a Linux install, but he can be given detailed, line-by-line instructions to follow from developers. In enterprise, "you" as the development firm can be responsible for getting it installed and set up. You can have co-op students who are basically high school grads following instructions for installations, or you can have a partially automated lab with software engineers running deployment tools to hundreds of machines at a time; it just depends on how big an outfit you are and the sort of customers you have.

  • by miltonw ( 892065 ) on Monday September 24, 2012 @01:16PM (#41439381)

    I have often worked as a contractor for major corporations, and I have worked with contractors as a system administrator. I was not accusing you of anything but I was criticizing the environment you were forced to operate in. If the development environment gets so out of sync with production that installs succeed in development and fail in production, the solution is not to allow developers to change production but to correct the environment.

    If production doesn't match development (and/or test) then the whole process is broken! Development is invalid and testing is invalid. Both development and test must duplicate production or absolutely nothing done in development or tested in test can be trusted.

    As a developer, years ago, I actually modified a running production environment and I was very lucky that nothing went wrong. I understand that it happens and, in broken environments, sometimes it must be done, but that doesn't make it safe or right.

  • by miltonw ( 892065 ) on Monday September 24, 2012 @03:54PM (#41441789)
    Too bad you haven't had any experience in the real world. How can someone trust tests done in a different environment than production? I especially like your testing success criteria: "if your software doesn't suck ass then it won't break". LOL!

    That an application "doesn't break" is only the very first test in a long list of criteria. Many applications that "don't break" will still corrupt data or cause other existing applications to fail.

    Development done in a different environment than production is a waste of time, and testing in a different environment is useless. Good luck with your method.
