
Ask Slashdot: Taming a Wild, One-Man Codebase?

New submitter tavi.g writes "Working for an ISP, along with my main job (networking) I get to create some useful code (Bash and Python) that's running on various internal machines. Among them: glue scripts, Cisco interaction / automation tools, backup tools, alerting tools, IP-to-Serial OOB stuff, even a couple of web applications (LAMPython and CherryPy). Code has piled up — maybe over 20,000 lines — and I need a way to reliably work on it and deploy it. So far I've used headers at the beginning of the scripts, but now I'm migrating the code over to Bazaar with TracBzr, because it seems best for my situation. My question for the Slashdot community is: in the case of a single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production? This is relevant because, lacking a test environment, I got used to immediate feedback from the scripts, since they were in production, and now a versioning system would mean going through proper deployment/rollback in order to get real feedback."
  • first thought: (Score:5, Interesting)

    by Tastecicles ( 1153671 ) on Thursday September 20, 2012 @02:31PM (#41402669)

    rectify the testbed lack.

    'cos there's nothing more likely to cause immediate termination of your employment than a bit of rogue code taking down the bread and butter of the business.

    Test it first.

    • by Anonymous Coward

      The scripts are irrelevant if not run on the real environment; the test environment would have to be a clone of the production environments. Good luck with that with the described environment! He could test each piece of the scripts in testing - which he probably does - but that only gets you so far and tells you that there are no typos.

      • [...]the test environment would have to be a clone of the production environments. Good luck with that with the described environment![...]

        There is stuff like Puppet [wikipedia.org] (for declaratively deploying "services") and Vagrant [wikipedia.org] to provision Virtualbox guests.

        Downsides:

        • It's only really efficient when your production environment can be provisioned with Vagrant/Puppet as well and no manual work is done on these guests. The way the question is formulated, I suppose that is not the situation.
        • VirtualBox is only really usable for desktop use. I would love something similar and as simple for KVM.
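
        If you do go that route, the loop is small. A rough sketch of the throwaway-VM cycle with Vagrant (the box name, box URL and script name are just placeholders; the URL is the Ubuntu base box the Vagrant docs pointed at around this time):

            vagrant box add precise32 http://files.vagrantup.com/precise32.box   # one-time download of a base box
            vagrant init precise32                                               # writes a Vagrantfile in the current directory
            vagrant up                                                           # boot the VirtualBox guest
            vagrant ssh -c 'sudo bash /vagrant/backup-rotate.sh'                 # the project directory is shared into the guest as /vagrant
            vagrant destroy -f                                                   # throw the guest away when done
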
      • Re:first thought: (Score:5, Insightful)

        by luis_a_espinal ( 1810296 ) on Thursday September 20, 2012 @05:28PM (#41404869)

        The scripts are irrelevant if not run on the real environment,

        Well, that's an oxymoron. Any program, large or small, is irrelevant if it never runs on the intended target platform. That's no excuse for not having a test server, however feeble compared to production it might be.

        the test environment would have to be a clone of the production environments.

        A clone does not have to be equivalent in terms of hardware or data. A good example is a test db box for testing your SQL scripts. Such a box can have the exact same software, OS and patches, and with equivalent database configuration and schemas, but on lower-cost hardware and with a fraction of the data. As long as a test bench can provide a reasonable, objective measure of confidence in your code, that is all you need. You do not need an absolute guarantee (as there is never one anyway.)

        Good luck with that with the described environment!

        Yeah, because the task is so hard, he might as well give up, right, right, right? Let's do the paralysis-by-analysis chicken dance, shall we?

        He could test each piece of the scripts in testing - which he probably does - but that only gets you so far

        Which is better than nothing, and it is always better to carry out tests, however small they might be, on a test/sacrificial box than on production. It's not rocket science, man.

        and tells you that there are no typos.

        No. It can also tell you that it will not do something bad, like deleting all records in a table, or initiating a shutdown, or filling up the /tmp partition. Better to detect such things on a mickey mouse test box than on the real thing. It might not catch bugs that are triggered by the actual characteristics present in a production environment, but it will most likely catch bugs (annoying or fatal) that are not dependent on such characteristics.

        Ideal? No. Better than nothing? Hell yeah.
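
        To make the "fraction of the data" part concrete: assuming MySQL, and with database, host and table names invented, a smoke-test box can be loaded with something as crude as this (credentials left out):

            mysqladmin -h testbox create prod_db                                       # empty database on the test box
            mysqldump --no-data prod_db | mysql -h testbox prod_db                     # same schema, same indexes, no rows
            mysqldump --where="1=1 LIMIT 50000" prod_db orders customers | mysql -h testbox prod_db   # a slice of the data
            # a blind LIMIT can break foreign-key consistency; fine for a smoke test, not for serious integration tests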

        • a test db box for testing your SQL scripts [...] can have the exact same software, OS and patches, and with equivalent database configuration and schemas, but on lower-cost hardware and with a fraction of the data.

          I too maintain a test environment, but I've run into two problems with creating a useful "fraction of the data": First, testing code on a fraction has led to misconceptions about scalability to a far larger data set. Second, how would a substantial fraction of data representative of the real data be created if the real data contains people's shipping addresses or other PII?

          • "how would a substantial fraction data representative of real data be created if the real data contains people's shipping addresses or other PII?"

            Do you really have to ask? You either shuffle the relationships between fields or scramble the fields themselves (a quick shell sketch follows the exhibits):

            Exhibit A:
            * John Doe | Lexington Av.
            * Betty Lamarr | Main St.
            becomes
            * John Doe | Main St.
            * Betty Lamarr | Lexington Av.

            Exhibit B:
            * John Doe | Lexington Av.
            becomes
            * Nhjo Ode | Aevtginon Lx.
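
            Exhibit A is a one-liner-ish job in shell, for what it's worth (assuming a pipe-delimited dump; the file name is invented):

                cut -d'|' -f1 customers.txt        > names.tmp        # keep the names in their original order
                cut -d'|' -f2 customers.txt | shuf > streets.tmp      # shuffle the street column
                paste -d'|' names.tmp streets.tmp  > customers_scrambled.txt
                rm -f names.tmp streets.tmp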

          • Re:Scalability (Score:5, Informative)

            by theshowmecanuck ( 703852 ) on Thursday September 20, 2012 @07:24PM (#41405983) Journal

            testing code on a fraction has led to misconceptions about scalability to a far larger data set

            This is real. The solution is to manage expectations. If people know that the tests just show functionality and not scalability, and that scalability testing is required (when warranted), you should be good. More importantly if the decision makers know this, you are good.

            if the real data contains people's shipping addresses or other PII?

            Scrub the data. Addresses are not personal information, though; the fact that a specific person lives at one might be. Open a phone book (if you can find one nowadays): they have reams of addresses as well as phone numbers tied to real people. This is public knowledge. Personal information involves things more like name, age, finances, medical records, etc.

            For the stuff that is real personal information, randomizing names to create fake people tied to real addresses is not hard at all (real addresses are often necessary when systems tie into others where shipping or location are requirements). You can take real information, put it in a can, and scramble it to make fake people. I think testers should be proficient enough to be able to generate this kind of data.

            As to one other comment made by the OP:

            and now a versioning system would mean going through proper deployment/rollback in order to get real feedback.

            Versioning systems do no such thing if you don't use them that way. If you want a "proper deployment and rollback cycle" you can do that. Or not. But at least you'll be able to go back in time to find the code that actually worked if you need to. No coder should work without the safety net of version control. Whether it be CVS, SVN, GIT, it matters less what it is than whether you have one or not. Pick one and use it.
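
            Picking one really is the whole of day one. For example with git, and a made-up directory and file name:

                cd /opt/scripts
                git init && git add . && git commit -m "baseline: what is in production today"
                # later, after an edit goes wrong:
                git diff                    # what changed since the last known-good commit
                git checkout -- backup.sh   # throw the bad edit away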

        • "A clone does not have to be equivalent in terms of hardware or data."

          Ahhh, developers...

          Do you know what's left to the systems guys when you have finished (hopefully thoroughly enough) testing your code? The hardware and data (and the integration).

          So, where do you think the hardest problems systems teams have to confront will come from?

          If you thought hardware and data (and integration), you hit the mark.

          But they usually don't have the luxury of using "hey, it works on my desktop" as an excuse.

          • "A clone does not have to be equivalent in terms of hardware or data."

            Ahhh, developers...

            I've been a systems and network administrator as well (in addition to being a software engineer and developer.) I know how shit looks and works on both sides of the fence.

            Do you know what's left to the systems guys when you have finished (hopefully thoroughly enough) testing your code?

            Non sequitur (to the question at hand.) Of course in a real, well-planned environment you will have data and hardware (not just the boxes, but the network, switches, proxies and firewalls) equal or equivalent to production as much as possible, with segregated dev, test, UAT and pre-production environments.

            However, the guy is a one-man shop

        • The scripts are irrelevant if not run on the real environment,

          Well, that's an oxymoron

          It does not mean what you think it means.

          Cold fire is an oxymoron, or dry water, or dumb genius. An oxymoron is an inherently contradictory combination of terms.

        • A good example is a test db box for testing your SQL scripts. Such a box can have the exact same software, OS and patches, and with equivalent database configuration and schemas, but on lower-cost hardware and with a fraction of the data.

          I don't want to pick nits, because Luis is giving out a lot of very valid information and observations here. I just want to take it one step further.

          Mirror the production environment DB with an identical amount of data. The data doesn't have to match row-for-row. But

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Yes, by all means test. Then, deploy your tests into production by mistake, like on Wall Street or something, LOL. Seriously though, testing is good and I unit test right from the start; but there are no silver bullets.

      • Re:first thought: (Score:5, Insightful)

        by Stiletto ( 12066 ) on Thursday September 20, 2012 @03:36PM (#41403547)

        It's not a silver bullet, but lack of a test environment is sure to eventually cause disaster. It's by far the biggest problem mentioned above, even more of a problem than lack of version control.

        • by rvw ( 755107 )

          It's not a silver bullet, but lack of a test environment is sure to eventually cause disaster. It's by far the biggest problem mentioned above, even more of a problem than lack of version control.

          I would start with a versioning system. That's a lot easier to get working. You could get that working in one day. And it doesn't need a test environment. Yes it should, but it's not a requirement. You can use the trunk as the production codebase. The big advantage is that you can rollback easily. You can even code on the server itself, and then update the codebase from there. No, not the wisest thing to do, but it's possible and probably a lot wiser than coding on the server without versioning. And use com
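
          That trunk-as-production arrangement is only a couple of commands; the repo URL and revision numbers below are invented:

              svn checkout http://svn.example.lan/repos/scripts/trunk /opt/scripts    # the working copy *is* the install
              svn update /opt/scripts                                                 # deploy = update
              svn update -r 41 /opt/scripts                                           # roll the whole tree back to a known-good revision
              cd /opt/scripts && svn merge -c -42 . && svn commit -m "back out r42"   # or undo one bad change while keeping history moving forward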

      • by arth1 ( 260657 )

        testing is good and I unit test right from the start

        Out of curiosity, what tools do you use to unit test bash scripts?

        • by slim ( 1652 )

          Googled, found shUnit.

          I don't know how good it is, but it exists.

          In practice, I wouldn't nowadays use shell to write anything complex enough to need unit testing.
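
          For the scripts that are small enough to stay shell, a shUnit2 test is about this much ceremony (the script, helper function and expected value are all made up, and the script under test is assumed to only define functions when sourced):

              #!/bin/sh
              # test_rotate.sh
              . ./rotate.sh    # script under test; provides a rotate_name() helper

              test_rotate_name_appends_date() {
                result=$(rotate_name backup.tar.gz 2012-09-20)
                assertEquals 'backup-2012-09-20.tar.gz' "$result"
              }

              . ./shunit2      # sourcing shunit2 last runs every test* function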

    • There's also the issue of professional pride: If I were in his situation and I got hit by a bus (see the Six Feet Under pilot), I would want someone else to be able to pick up where I left off. I would also want this replacement to compliment the quality of my work. You need a test environment to do this.
      • It's unlikely that an inheriting developer is ever going to compliment the quality of your code. If he did, he'd never get the green light to trash all your work and start again from scratch.
    • Re:first thought: (Score:5, Interesting)

      by ILongForDarkness ( 1134931 ) on Thursday September 20, 2012 @02:57PM (#41403053)

      "Rectify the testbed lack" - spoken like Yoda, that is. I agree you need a testbed. Heck, run a few VMs on a workstation. If you can't build a VM to test something, it shouldn't be deployed IMHO.

    • Proclaim yourself the most interesting coder thinkgeek [thinkgeek.com] style.

      I don't often test my code, but when I do, I do it in production.

    • by jekewa ( 751500 )

      How does that meme go? Something like this, I think:

      I don't always test my code, but when I do, I test in production.

    • I would agree depending on the size of the domain and the time available.

      Dropping tools and working on a test suite that should realistically contain 10,000 tests to get adequate coverage is not going to happen with a 1 person team for example. Especially if the test harness is difficult to work with.

      My advice would be:

      Good luck and god be with you...

  • by MetalliQaZ ( 539913 ) on Thursday September 20, 2012 @02:32PM (#41402683)

    I don't understand how code versioning has to be coupled with deployment? You have no test environment, as you said... so just make releases and deploy them manually. Since you are going straight to production, you had better be there in person to roll it back if you screwed up. Right? So, SVN should be all you need...

    • Re: (Score:3, Insightful)

      by dgatwood ( 11270 )

      Git is a cleaner model in a lot of ways. In particular, the fact that you have a local copy of the entire repository makes it easier to roll back mistakes you make while editing the code. This isn't always important, but if you decide you're going to do a significant rewrite of some piece of code (and in particular, if you are ever remote while you're doing so), it helps a lot.

      • by inKubus ( 199753 )

        Here's what I did, pre-git:

        Create an svn repo, e.g. svn.company.lan/systems
        Create the structure ./trunk, ./branches, ./tags
        Create a directory for each hostname, e.g. ./trunk/sql1, ./trunk/web1, ./trunk/web2, etc.
        Then you can svn import configuration directories on the host into the repo (path first, then URL), e.g. svn import /etc http://svn.company.lan/systems/trunk/sql1/etc -m "initial import"
        Then check it out: svn co http://svn.company.lan/systems/trunk/sql1/etc /etc
        From that point forward, if you make changes locally you can svn ci, OR you can make them externally (i.e. in a test environment)

        • by arth1 ( 260657 )

          I do something similar with svn, but the main problem with this is that Subversion doesn't preserve Unix ownerships, permissions, ACLs or attributes.
          A secondary problem is the .svn directories - some directories are parsed automatically by various systems, and all files and folders in them are acted on. In those cases the version control system needs to be external to the directory structure.

          • "but the main problem with this is that Subversion doesn't preserve Unix ownerships, permissions, acls or attributes.
            A secondary problem is the .svn directories"

            The main problem is that you've never been a C programmer.

            Or else you'd have ingrained the notion that any source needs to be configured, made and installed, like, *always*, while sometimes one of the stages can be just a 'noop'.

            Permissions, ACLs and attributes are a matter of a 'make' stage; getting rid of the .svn dir is a matter of an install stage.
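
            In other words, something like this hypothetical install stage (file names invented; the URL follows the layout from the comment above):

                #!/bin/sh
                set -e
                svn export http://svn.company.lan/systems/trunk/sql1/etc /tmp/etc.stage          # export leaves no .svn dirs behind
                install -o root -g root -m 0644 /tmp/etc.stage/rsyslog.conf  /etc/rsyslog.conf   # ownership and mode applied here,
                install -o root -g root -m 0600 /tmp/etc.stage/ipsec.secrets /etc/ipsec.secrets  # not stored in the repository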

            • by arth1 ( 260657 )

              The main problem is that you've never been a C programmer.

              *Raised eyebrow*
              If anything, you're the newbie here, talking about "configure, make, make install". Writing a system that breaks in five years because compilers and headers move on while your tool doesn't is as stupid now as it was in the days of xmkmf. Same shit, different wrapping. Sure, macro languages and expert wrappers make things easy to write, but time consuming and incredibly hard to troubleshoot compared to an open system that doesn't depend on compilers and headers not moving on.

              Different sys

      • Git is a cleaner model in a lot of ways. In particular, the fact that you have a local copy of the entire repository makes it easier to roll back mistakes you make while editing the code.

        What does that mean? It's trivial to either check out or export any part of the repository from SVN. What does git bring to the table that I don't already have with SVN?

        • What does git bring to the table that I don't already have with SVN?

          A lot (as do bzr, mercurial or any other distributed versioning system). A quick sketch follows the list.

          • You don't need a central server (but you can have one).
          • You don't need to have a network available to check in changes.
          • You don't need to have a network available to roll back or switch to another branch. E.g. you could edit /etc/init.d/networking break stuff and roll back...
          • It is really fast - it is mostly local stuff.
          • ...
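
          Quick sketch of the offline points, using /etc as the worked example from the list (this is more or less what etckeeper automates):

              cd /etc && git init && git add . && git commit -m "baseline"   # no server, no network involved
              vi init.d/networking                                           # ...break something...
              git diff                                                       # see exactly what changed
              git checkout -- init.d/networking                              # instant local rollback, still no network
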
          • Hmm. I prefer the central server, actually, because that's only one place I need to back up. I'll look into git in the future though, there are plenty of problems with the Collabnet Subversion Edge/TortoiseSVN system that we're using that cause us to want to look into alternatives. For example, occasionally there will be a problem during commit where it will lock a file, stopping me from updating or committing further, but when I try to unlock it it just says that nothing is locked. The merge tool somet

            • The point is that, while in most professional environments you do want some central place to store the history of your code, you are not dependent on it.

              The central server might be offline, the network might be down, you're on holiday at a tropical island without internet, the server crashed and will never be restored, the company you work for goes bankrupt - but you can still access the history, check out older versions, check in new stuff. And if/when the revered central server is finally available aga

            • by jgrahn ( 181062 )

              Hmm. I prefer the central server, actually, because that's only one place I need to back up. I'll look into git in the future though

              Huh? There is only one place to back up with Git too! And in fact the usual way to do backups is to 'git clone' that one and 'git fetch' from it regularly.

              The only complication is that in the other repositories you must remember to 'git commit' *and* 'git push', or you'll just add versions locally; they don't reach the central repo.
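
              In cron-job terms, with host names and paths invented:

                  git clone --mirror ssh://devbox/home/tavi/scripts.git /backup/scripts.git   # once, on the backup host
                  cd /backup/scripts.git && git fetch --all --prune                           # from cron, e.g. hourly
                  git commit -am "tweak alert thresholds" && git push                         # the habit to build on the workstation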

          • "You don't need a central server (but you can have one)."

            You *do* need a central server. Systems administration is not a matter of a bazaar but of a cathedral.

            "You don't need to have a network available to check in changes."

            You *do* need to have a network available, or else your changes will hit neither your staging nor your production environments.

            "You don't need to have a network available to roll back or switch to another branch. E.g. you could edit /etc/init.d/networking break stuff and roll back..."

            Yo

            • Oh boy. Language semantics :)

              You apparently need a central server in your particular situation, according to your perception. But using Git does not make a central server necessary (nice for tiny projects, experiments, initial stages of a project); however, the main point is that even when you do have a central server you can be offline: you're not depending directly on it, most operations are local, and changes are available locally.

              And since you do not need that central server, you can start u

          • So when your workstation crashes it takes out both your working code/directories and the repository? Very convenient. Much simpler to crater that way. I like a different machine to keep code on. One that gets backed up regularly.
      • Bazaar does the same, or at least can.
    • by Skewray ( 896393 )

      I don't understand how code versioning has to be coupled with deployment? You have no test environment, as you said... so just make releases and deploy them manually. Since you are going straight to production, you had better be there in person to roll it back if you screwed up. Right? So, SVN should be all you need...

      I used to, as a single programmer, use SVN, but I found it nothing but a burden. It left files all over the place, and was really not convenient when no interlocking with another programmer is needed. Now I just make a tarball of everything at obvious breakpoints and store it away.

      • by jgrahn ( 181062 )

        I don't understand how code versioning has to be coupled with deployment? You have no test environment, as you said... so just make releases and deploy them manually. Since you are going straight to production, you had better be there in person to roll it back if you screwed up. Right? So, SVN should be all you need...

        I used to, as a single programmer, use SVN, but I found it nothing but a burden. It left files all over the place, and was really not convenient when no interlocking with another programmer is needed. Now I just make a tarball of everything at obvious breakpoints and store it away.

        As a single programmer, I find version control invaluable, and wouldn't tolerate such a working environment. I haven't used SVN so I cannot comment on it, but both RCS and CVS have been universally available since the 1980s, and don't "leave files all over the place". I quickly learned to ignore the RCS/ and CVS/ subdirectories.

        A modern version control tool like Git lets the single programmer name and describe her work progress; check what she did yesterday; review changes before committing them; branch f

  • by Antipater ( 2053064 ) on Thursday September 20, 2012 @02:34PM (#41402723)
    Given the situation you describe, it won't be long before the whole system falls into corruption. Your only hope is to save two lines from every script on a USB stick, then flood the rest.
    • by psmears ( 629712 )

      Your only hope is to save two lines from every script on a USB stick, then flood the rest.

      Is that what's known as an ark-hive?

  • Simple answer (Score:5, Insightful)

    by girlintraining ( 1395911 ) on Thursday September 20, 2012 @02:35PM (#41402725)

    My question for the Slashdot community is: in the case of a single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production?

    The simple answer is, "Whatever works best for you." You're the only developer for these projects. Unless your manager is giving you direction on a specific process or requirements, it's your ball game. You know how you work best -- pick your tools accordingly.

    • by tool462 ( 677306 )

      Pretty much. This is a very hard thing to answer in general terms. With only one developer, over-engineering the system can be very costly. You'll spend more time maintaining the dev/test/release environment than the actual code itself. But at the same time, some tools and scripts can be absolutely critical to the business and a bug could be disastrous enough that it warrants all the overhead of a more formal dev environment.

      What you do is going to depend a lot on the exact details, and may not even be c

  • A few things (Score:5, Informative)

    by jlechem ( 613317 ) on Thursday September 20, 2012 @02:36PM (#41402763) Homepage Journal

    1. Buy or get a machine to host SVN for version control. I work on my wife's company website and some basic management tools. SVN has saved my bacon multiple times when I thought I had lost some code.

    2. Get a pre-production server and test your code! Sounds like you're living in the wild west and that shit flies until something goes horribly wrong and you're the guy who gets blamed.

    • Re:A few things (Score:5, Insightful)

      by jellomizer ( 103300 ) on Thursday September 20, 2012 @02:40PM (#41402811)

      If you can't get the hardware, try to virtualize a test environment with something like VMware or VirtualBox.
      At least you have something to play in before you put it out in the open.

  • No real change (Score:5, Informative)

    by chthon ( 580889 ) on Thursday September 20, 2012 @02:37PM (#41402775) Journal

    You can still change everything in place. Then you can run the script and get feedback. When it works, you commit. When it doesn't, you remove the problem, check and commit.

    Or you can make your changes, review them and commit them, then do a run. When you have a problem, you commit again.

    Using a versioning system does not mean you need extra formality. You can still work the way you used to, but now you have an extra safety measure due to the versioning system.

    Using trac is a way to better organise your problems. The main thing I can say about using trac effectively is that you always need to have a browser window open on it, and when you have an idea, or notice something, or have a problem, enter it immediately. Afterwards, take your time to look at new and open problems, classify them and process them.

    • And a few days after you put the changes in production and nothing has burned, make it a tag.

      Better yet, make it a tag before putting changes in production (TAGbeta) and a few days later (TAGrelease). Tags are cheap.
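
      With the submitter's Bazaar that is literally this (tag names invented):

          bzr tag cisco-backup-1.4-beta        # before the push to production
          # ...a few quiet days later...
          bzr tag cisco-backup-1.4             # the known-good point
          bzr revert -r tag:cisco-backup-1.4   # and the fast way back to it in a hurry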

  • by turbidostato ( 878842 ) on Thursday September 20, 2012 @02:37PM (#41402777)

    You say that "now a versioning system would mean going through proper deployment/rollback in order to get real feedback."

    But then, no, it wouldn't.

    Storing your code in a versioning system doesn't mean anything but that: you store your code in a versioning system, nothing more, nothing less.

    I'm starting to be an old fart, so you can believe me when I tell you I've already been in your position.

    Back then I used CVS, and it didn't change my deployment procedures in the slightest; it only meant that I had all those scripts in a single convenient place, and that I could look back through the history when I found a regression or wanted to see how I had done something in the past.

    The most naive approach is that you just keep working the way you are doing now, except that when you are confident in a script or set of scripts you check them in for posterity. You mainly develop on your own desktop and you push your scripts to the servers with an rsync-based script. A step beyond this, you use a CM tool (say, puppet), so instead of pushing to the servers you push to the puppetmaster and then run a `puppet agent --test` on the servers: that way configuration becomes code and therefore gains repeatability.

    You can elaborate on this almost endlessly, but the basic idea is just the same: SCM is SCM is SCM; nothing more, nothing less.
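
    The naive push script really is about four lines; host names and paths here are made up, and the last line assumes the module is already on the puppetmaster:

        #!/bin/sh
        set -e
        for host in gw1 gw2 backup1; do
            rsync -avz --delete ~/work/scripts/ "$host":/opt/scripts/
        done
        # or, the CM variant: pull the new catalog instead of pushing files
        ssh gw1 'puppet agent --test'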

    • by turbidostato ( 878842 ) on Thursday September 20, 2012 @02:47PM (#41402909)

      Oh, by the way, you really should listen to those that tell you *need* some development environment.

      Again, I've already been there, so I know your pain: even for the silliest development the developers will have their development environment, but for us systems people it's expected that everything just fits into place on the first try, no second chances. Of course, the next heavy refurbishment will be near impossible, because while being a good professional allows for more or less "clean" kaizen-style development, anything a bit dangerous is an almost impossibility because of the lack of test environments (with luck, the next "heavy test window" will be in three or four years when all the servers are decommissioned and new ones come in place), but that's the way it is, take it or leave it.

      The good news is that, while not a panacea, virtualization, even at desktop level (you surely need to have a look at vagrant[1]), allows for a lot of testing that was impossible in the age of "real iron only".

      [1] http://www.vagrantup.com/ [vagrantup.com]

    • by SQLGuru ( 980662 ) on Thursday September 20, 2012 @03:14PM (#41403267) Homepage Journal

      Another benefit of a versioning system is that you don't have to keep large chunks of commented out code. If it needs to go, delete it. It's in the history if you need to go back to it. This alone will clean up most of the spaghetti that a one-coder shop faces.

  • by Maximum Prophet ( 716608 ) on Thursday September 20, 2012 @02:46PM (#41402895)
    Quick! Rename all the files f1, f2, f3 etc, rename all the variables i1, i2, i3, etc and remove all whitespace.

    Keep a translation sheet on you at all times. Suddenly, you're irreplaceable.

    (:-) for the humor impaired. This is actually a riff on a joke from WKRP, when an engineer said he was replacing all the color-coded wiring with black wires for job security. (B.t.w. the engineer was played by one of the writers of the show)
    • Score:5 Funny (No mod points today, sorry. It looks like I have to comment to get mod points.)
    • irreplaceable == unpromotable.

      Granting it's a one man shop, not likely to be much in the way of upward mobility anyhow.

      • Unpromotable is not necessarily a bad thing, i.e. The Peter Principle http://en.wikipedia.org/wiki/Peter_principle [wikipedia.org]

        Ideally, you want to find your niche in the world where you can be most happy. Getting more responsibility than optimal is a bad thing, leading to stress and early death. Check out long term prospects of lottery winners to see what too much money will do to you.
        A steady income stream, with no chance of promotion may or may not be exactly what someone needs to be most happy.
        • Assuming you're fine with unpromotable, how are you with 'must be on call 24/365'?

          I've lived it. The one thing worse for your stress than taking a promotion you don't want is having 'them' put the worst air thief in the organization into the position, to punish you for not taking the promotion. Now you have to manage the manager and the team without any authority, just as self-preservation.

            Assuming you're fine with unpromotable, how are you with 'must be on call 24/365'?

            I've lived it. The one thing worse for your stress than taking a promotion you don't want is having 'them' put the worst air thief in the organization into the position, to punish you for not taking the promotion. Now you have to manage the manager and the team without any authority, just as self-preservation.

            That's when you take a lateral, a job that you can do, but pays the same, either within the same company, or a new more exciting employer.

            Going from a bad job to a higher position that you can't do, isn't healthy. But, if you are close to retirement, and can stick it out, the promotion might get you there quicker.

            Basically, when faced with "Damned if you do", "Damned if you don't", choose the option that pays the most. However, if and only if you are comfortable, think long and hard about a job that

            • I'm long gone from there. They still call my simulation results database loader 'the firehose', largely because they can't match its performance with newer technology. It must suck that the fastest database for results analysis is MS Access in 2012.

    • I wasn't familiar with the WKRP schtick, but I actually worked for a DOD subcontractor and saw a guy wire an entire harpoon missile controller using nothing but blue wire. It was for the test environment, of course, not for combat use. Hundreds of individual wires, all pale blue... most of them would be printed circuits in the real controllers.

      That is one of the experiences (building out the Internet was another) that convinced me there is no such thing as a non-trivial test environment. You cannot simul

    • What is "WKRP"?
  • by Anonymous Coward on Thursday September 20, 2012 @02:46PM (#41402903)

    Most of you who have seen this will have read it in the Jargon File. It's relevant. The short answer is "you don't":

    The Story of Mel, a Real Programmer

    This was posted to USENET by its author, Ed Nather (utastro!nather), on May 21, 1983.

    A recent article devoted to the *macho* side of programming made the bald and unvarnished statement:

    Real Programmers write in FORTRAN.

    Maybe they do now,
    in this decadent era of
    Lite beer, hand calculators, and "user-friendly" software
    but back in the Good Old Days,
    when the term "software" sounded funny
    and Real Computers were made out of drums and vacuum tubes,
    Real Programmers wrote in machine code.
    Not FORTRAN. Not RATFOR. Not, even, assembly language.
    Machine Code.
    Raw, unadorned, inscrutable hexadecimal numbers.
    Directly.

    Lest a whole new generation of programmers
    grow up in ignorance of this glorious past,
    I feel duty-bound to describe,
    as best I can through the generation gap,
    how a Real Programmer wrote code.
    I'll call him Mel,
    because that was his name.

    I first met Mel when I went to work for Royal McBee Computer Corp.,
    a now-defunct subsidiary of the typewriter company.
    The firm manufactured the LGP-30,
    a small, cheap (by the standards of the day)
    drum-memory computer,
    and had just started to manufacture
    the RPC-4000, a much-improved,
    bigger, better, faster --- drum-memory computer.
    Cores cost too much,
    and weren't here to stay, anyway.
    (That's why you haven't heard of the company,
    or the computer.)

    I had been hired to write a FORTRAN compiler
    for this new marvel and Mel was my guide to its wonders.
    Mel didn't approve of compilers.

    "If a program can't rewrite its own code",
    he asked, "what good is it?"

    Mel had written,
    in hexadecimal,
    the most popular computer program the company owned.
    It ran on the LGP-30
    and played blackjack with potential customers
    at computer shows.
    Its effect was always dramatic.
    The LGP-30 booth was packed at every show,
    and the IBM salesmen stood around
    talking to each other.
    Whether or not this actually sold computers
    was a question we never discussed.

    Mel's job was to re-write

  • Create a git repository on 'production' and then a fork on your development machine. (Or a fork on a test machine would be better really, which you then fork to development)

    Do your development, check in, and then pull to test and execute there; if all goes well, pull to prod and execute there.
  • by MrSenile ( 759314 ) on Thursday September 20, 2012 @02:51PM (#41402953)
    Before it gets out of hand, I'd look to set up four things.

    1. Set up a proper split environment. Even if you don't have the hardware for it, set it up in such a way that when the hardware becomes available, you can move it appropriately. That being, a standard dev -> qa -> stress -> prod infrastructure.
    2. Set up a good revision control. I've started to really enjoy using GIT for this, as there's other software like gitolite that can give you fine-grained access control to your repositories. However, feel free to use subversion or any other well contained revision control platform.
    3. Set up a good method for deployment. My suggestion? Try puppet. It's free, and it's powerful, and if you get it configured, adding new systems to it is exceedingly easy to do.
    4. Packaging for your deployment. If you are installing a bunch of software (scripts, job control, etc.), package it and give it a revision; then it's easy to upgrade systems to the 'new package' or revert to the 'previous package' instead of having to manually copy files around or (re)edit them. (A quick sketch follows the list.)
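
    A very lightweight take on point 4, assuming the scripts already live in git (package name and version invented):

        git tag v1.7
        git archive --format=tar --prefix=opstools-1.7/ v1.7 | gzip > opstools-1.7.tar.gz
        # upgrade = unpack the new tarball and flip a symlink; revert = flip it back
        tar -xzf opstools-1.7.tar.gz -C /opt && ln -sfn /opt/opstools-1.7 /opt/opstools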

    Hope that helps.
  • Yeah, that's interesting actually; I just ran into this myself. We're putting a project together, and when something breaks I end up doing small fixes and losing the changes across deployments (we only have 3 active), so it's very small. But I feel your pain. I'm not totally convinced that a full SVN system is necessary, but once you break down the problems it likely is. Given your closed infrastructure you may want to consider adding some phone-home features to your scripts, something intelligent enough to auto
  • I know there are plenty of open source tools out there, but I still prefer Perforce. Also, recently (as of February) Perforce opened up its 2-user license to 20 users/20 workspaces! This is fantastic news!

    Check in your mainline (or migrate) to perforce under /depot/mainline
    Integrate to a non existent branch /depot/testing/VERSION, and check that in.
    Integrate /depot/testing/VERSION to a non existent branch /depot/release/VERSION, and check that in.

    Now with P4V, moving changesets from mainline to testing is a

  • Documentation (Score:3, Informative)

    by Hjalmar ( 7270 ) on Thursday September 20, 2012 @02:57PM (#41403051)

    Yes, set up a test environment. And implement some kind of versioning system, even if it's just "cp current_code old_code". You should always be able to fall back if you have a botched deployment.

    But one of the best things you can do is to start writing documentation. I like to write my documentation assuming it will be my replacement reading it, and so I try to include everything. Justify every unusual implementation detail, explain why each task was done the way it was. List bugs, and any code you had to write to work around them. The best part of documenting your project will be that as you work through it, you'll find things that no longer make sense and make them better.

  • If you work on a single server, install RCS. You only need to
    learn ci & co to start.

    If you work on many boxes you need a network-friendly tool.
    The obvious ones are git and mercurial (CVS too).

    Simple cp works too.

    More important may be version tags and date/time hints in the scripts.
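
    The single-server RCS loop really is just two commands plus a diff (the script name is invented):

        mkdir RCS                                   # keeps the ,v history files out of the way
        ci -u alert.sh                              # check in, keep a read-only working copy
        co -l alert.sh                              # lock it for the next edit
        rcsdiff alert.sh                            # what changed since the last check-in?
        ci -u -m"widen the ping timeout" alert.sh   # check the edit in with a message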

  • by blackcoot ( 124938 ) on Thursday September 20, 2012 @03:02PM (#41403105)

    A great deal of the version wrangling you are facing is best done with a tool like Git.

    The bigger problem (development discipline) is much harder to fix.

  • You want something to track changes, deploy changes, and test software. Bazaar will track your changes.

    Chef is open source infrastructure management. The central server maintains a searchable database of your nodes and all of the scripts (recipes) that run on them. The nodes query this database and run the scripts that they are supposed to. This is similar to your environment now. You can also check your chef-repo into scm. This allows you to mess around with production and only commit back into scm when

    • Definitely Jenkins for code pushes. Not only can you decide how to push the code (even build and deploy through RPM), you can also use the Jenkins interface to manage testing and QA as well. Builds can be distributed through virtual machines, and automation can be tied into something like chef or puppet. That includes cleaning and restarting virtual host images during testing, automating deployment milestones, etc.

      Also, the other benefit of using Jenkins is that you can manage future contributors through

  • I keep it in a Mercurial repository and use symlinks into the repository to deploy it. I also make free use of Mercurial's subrepo feature for tools that others wrote that are not yet found as packages on the Linux distributions I use.

    Yes, there is still a testing issue. For most of this code it's not a big deal because I'm the only user. I test it as I write it with a few simple hand tests and then it's good to go.

    If I were doing this for something where the code mattered to other people I would just add unit tests for various subsections as made sense. I would also start sectioning off the tools and making them into separate repositories of their own. I'd also make much sparer use of the sub-repo feature and instead have deployment scripts that handled making sure the correct version was in place.

    You still need test environments though for integration testing. And as the code grows, ad-hoc test environments stop being very practical. You should dedicate a VM or two (or even a machine or two) to replicating miniature versions of the real-world setups the code is expected to work in.

    Lastly, it's never too early to start using source control on your code. 98% of my code is under source control, even most stuff I think is 'throwaway' or ad-hoc.

    I would also strongly recommend Mercurial (or git (if you must)) over Bazaar. It's faster, and the mental model those two tools encourage is a much more accurate representation of what they're really doing. Bazaar lets you pretend that branching is still a big deal and takes some effort to resolve. It lets you continue to think in the model of centralized source systems even though it's not. You will be doing yourself a huge favor in productivity (yes, even for a single developer) to not use it and go for something that doesn't let you pretend anymore. Of those tools, I think Mercurial has a far more carefully thought out and better set of commands and options than git does.
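
    For what it's worth, the symlinks-into-the-repository arrangement from the first paragraph looks roughly like this (paths and names invented):

        hg clone ssh://devbox//srv/hg/tools /opt/tools-repo                   # one clone per machine
        ln -s /opt/tools-repo/bin/check-cisco.py /usr/local/bin/check-cisco   # deploy = symlink into the clone
        cd /opt/tools-repo && hg pull -u                                      # updating a box is then a one-liner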

  • and now a versioning system would mean going through proper deployment/rollback in order to get real feedback.

    not true. using a versioning system does not necessitate testing. just to be clear, testing is always necessary, and not enforced by any versioning system. you can use svn or git or cvs to keep versions of your files so when you do your testing on the production environment (shame on you) you won't have a stack of the same files with extensions like .bak, .bak.bak, .old, .delete, .undo, etc. sitting on your server.

    test because it's the right thing, the proper thing, to do. not because you think some tec

  • Keep the files per project in whatever production directory you want and start a Git repository in it. Version numbers are irrelevant and only a nuisance; you now have every version of every file, with any (commit) comment you want! Then add scripted backup (such as FTP) to a central location, of course, to recover from disasters if your production files get damaged.

    Add version number if you start rolling out to multiple sites.

    It's possible to exchange files between git repositories, or merge back changes made i

  • by slim ( 1652 ) <john@hartnupBLUE.net minus berry> on Thursday September 20, 2012 @03:31PM (#41403477) Homepage

    Forget that you're a lone programmer. Set up a proper environment anyway.

    This is going to seem like hard work, but once you've done the upfront effort, it will pay dividends.

    Do *everything* that you'd do if you were a team. There are plenty of books / web sites on the subject.

    Pick a version control system -- since you're starting from scratch, Git or Mercurial. Get your code into it.
    Pick a continuous build system -- Jenkins is popular and free.
    Write one unit test, and make Jenkins run it as part of the build process.
    Decide on some sort of repository for your build artefacts.
    Establish an integration testing box, and have your CI system deploy to that every build. Ideally use something like Puppet for this, and also use Puppet on your production machines.
    Write one integration test, and make Jenkins run it after deployment.

    You can dedicate a server to all of this, several servers, run it all on your laptop or in VMs; it really doesn't matter. But think ahead so that you can move it to dedicated machines later if you need to.

    Lots of work, but now you have a nice, confidence inspiring build / code management system.

    Once that's going, you can decide how to fix your lack of tests. One approach is to take a few weeks just writing tests. Another is to write tests as the need arises -- for new code as you write it; to demonstrate bugs before you fix them. Or somewhere in between.

    Python isn't my area, but there is probably an ecosystem of pythonesque tools for a lot of this stuff. pyUnit, code coverage tools, etc.

    You will have problems unit testing, since you won't have designed the code for testability. The choice is, live with fewer tests than might otherwise be possible, or refactor your design into something more unit testable. (IOC is unit testing's best friend)

  • by the eric conspiracy ( 20178 ) on Thursday September 20, 2012 @03:39PM (#41403585)

    Just get one of the inexpensive commercial subscriptions for GitHub. This solves all sorts of issues: remote backup, a robust version control system, issue tracking, etc.

  • Take it out back and shoot it. If it's rabid, there is no cure.
  • Use Jenkins [jenkins-ci.org] for deployment. You can automate the entire process. For example, imagine automatically deploying after checking in a revision that contains the word "***DEPLOY***" in the commit comment.
    • "Use Jenkins for deployment. You can automate the entire process. For example, imagine automatically deploying after checking in a revision that contains the word "***DEPLOY***" in the commit comment."

      Now imagine how the hell you reach the confidence point where you can tag "***DEPLOY***" on a commit, and you will see why a CI tool (and I mean the CI *tool*, not a CI strategy) is of almost no value in the systems administration field (which is mostly what we are talking about here) but to push to production in

  • by HornWumpus ( 783565 ) on Thursday September 20, 2012 @03:52PM (#41403767)

    You need to fire this cowboy. He doesn't think he needs to test his scripts.

    I know he seems irreplaceable. That should be a big red flag.

  • Well ... (Score:5, Insightful)

    by gstoddart ( 321705 ) on Thursday September 20, 2012 @04:21PM (#41404073) Homepage

    My question for the Slashdot community is: in the case of a single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production?

    If I'm the people who run the company, I start firing people. If I'm the developer, I run like hell before anybody realizes what a complete mess I've made.

    No versioning, no test environment, live changes in production ... these are warning signs of something which has been cobbled together, and which continues working by sheer dumb luck.

    I had a developer once who edited a live production environment without telling anybody and broke it even worse -- he very quickly found himself with no access to the machines and being told that we no longer trusted him with a production environment.

    Having worked in highly regulated industries where the stakes are really high, I've had it drilled into me that you simply have no room whatsoever to do this kind of thing in such an ad hoc way.

    Glad you're starting to use something. But the risk to your employer of all of your stuff tanking and becoming something you can't recover is just too great. From the sounds of it, if you get abducted by aliens or hit by a bus, your company would come to a screeching halt.

    • Re:Well ... (Score:4, Insightful)

      by turbidostato ( 878842 ) on Thursday September 20, 2012 @07:57PM (#41406287)

      "If I'm the people who run the company, I start firing people."

      Unless, of course, and as is usually the case, the one running that small company is the one who set the policy to start with.

      "If I'm the developer, I run like hell before anybody realizes what a complete mess I've made."

      Unless, of course, and as is usually the case, the guy is a professional and understands the trade-offs, and so (more or less) does the boss, who thinks the resulting mess is the most cost-effective way to run his business (and, up to a point, it usually is).

  • I commit everything to SVN, then use Jenkins to manage updates. Once you create the Jenkins job, all you have to do in the future is run it, and you can string jobs together so that if the change needs to be pushed to a number of servers it is still one click.
  • My personal programming hero, D. Richard Hipp, works with a very small team on SQLite (which you may have heard of). He uses his own, home-grown SCM called fossil [fossil-scm.org]. It probably doesn't scale to a zillion contributors but, like all of Hipp's work that I'm aware of, it's super clean and easy to use. Sounds pretty great for your use case.

    And, as other people on this thread have already said: your habit of throwing stuff into production without testing it is similar to playing Russian Roulette with your co
  • Because I never, ever want to rely on anything you build this way. You are headed for a disaster, unless you 1) set up a test environment, and 2) use a revision control system.

    Really, anything less than that is just a complete waste of everyone's time.

  • I've done a little bit of environment taming in my day.

    Everybody's already told you the "right" things to do. They're all right. Thing is, you need to get there somehow, and you're looking for a path from here to there. At least, I think that's what you're asking.

    You already have bazaar. Good tool. Don't worry about bzr versus cvs versus hg right now. You picked something. Run with it.

    I suggest a quick shell script that replaces your editor with "edit; check-in; offer to push". Create another quick script (
