Programming Security IT Technology

Secure Programmer: Keep an Eye on Inputs 157

An anonymous reader writes "This article discusses various ways data gets into your program, emphasizing how to deal appropriately with them; you might not even know about them all! It first discusses how to design your program to limit the ways data can get into your program, and how your design influences what is an input. It then discusses various input channels and what to do about them, including environment variables, files, file descriptors, the command line, the graphical user interface (GUI), network data, and miscellaneous inputs."
This discussion has been archived. No new comments can be posted.
  • Windows & Belkin (Score:1, Interesting)

    by ForestGrump ( 644805 )
    Doesn't Windows throw random crap around?
    This unpredictable event seems to screw with programs when RAM is low.

    Also, the Microsoft security holes we've seen over the past year (2003) inspire no confidence in M$ security.

    Lastly, Belkin routers are no good for security. After all, they hijack your HTTP requests and redirect you somewhere you didn't want to go!

    Grump.
    • by AuMatar ( 183847 ) on Tuesday December 30, 2003 @04:17PM (#7838250)
      There are no controls on Windows inputs. Any process can send any message to any other process. Talk about insecure.

      You could probably majorly screw up a program by sending it random message numbers. It'd react as if you were sending random menu and other commands. Hmm, that sounds like a fun prank to play...
      • Re:Windows & Belkin (Score:3, Informative)

        by AtrN ( 87501 )
        See this paper [wisc.edu]. I remember reading the original document (in CACM [acm.org], IIRC) and was pleased to see it updated, showing just how far "forward" we've come.
      • by Anonymous Coward
        There are no controls on Windows inputs. Any process can send any message to any other process.

        Well, not quite. There are ways of isolating programs, but it's very rarely useful. (In fact, I've never done it, but I know it's possible.)

        But why bother with all that when you can just install a system-wide hook? It's quite easy to actually inject code into another process. Once you've got that you can muck with data or intercept system calls to your heart's delight.

        What it comes down to is that if you d
      • Re:Windows & Belkin (Score:3, Informative)

        by kasperd ( 592156 )
        Any process can send any message to any other process. Talk about insecure.

        According to http://security.tombom.co.uk/shatter.html [tombom.co.uk] it is much worse than that. Not only can anyone send such a message, but the messages can even force the receiver to execute arbitrary code.
  • by Anonymous Coward
    You'd be wise to add Cross Site Scripting attacks to your list of things to protect against.
    • by borkus ( 179118 ) on Tuesday December 30, 2003 @04:40PM (#7838442) Homepage
      A big issue for many web programmers is failing to realize that the forms and web interfaces you provide the user aren't the only way to interact with your application. A lot of them pay attention to JavaScript validation and maxlength attributes rather than checking the data on the server.

      New developers working on applications open to the internet often aren't used to developing in an environment where programmers who don't work for their employer can access their app. All it takes is one dishonest person who knows slightly more than you to hack your app.

      • by mcrbids ( 148650 ) on Tuesday December 30, 2003 @05:04PM (#7838755) Journal
        forms and web interfaces that you provide the user aren't the only way to interact with your application.

        So true, so true. For example (in PHP)

        <?
        if ($login == 'Admin' && $pass == '19ak129')
            $secure = true;
        if ($secure)
        { // do something very important.
        }
        ?>

        In many cases this script's security could be bypassed by adding "&secure=true" at the end of the URL!

        I prefer to generate or define a set of values that are acceptable and check with in_array().

        EG:

        <?
        $acceptable = array('a', 'b', 'na');
        if (!in_array($_REQUEST['check'], $acceptable))
            die('Sorry. Input in field "check" is invalid');
        ?>

        Or by using a regex. Assume that the input must be a number:

        <?
        $match = "/[0-9]+/";
        // Strip the digits out; anything left over means non-numeric input
        if (preg_replace($match, '', $_REQUEST['number']))
            die('You must put in a number');
        if (strlen($_REQUEST['number']) > 5)
            die('Number you have entered is out of range');
        ?>

        You can oftentimes functionalize these so that it's as simple as:

        <?
        if ($error = Valid_Integer($_REQUEST['number']))
            die($error);
        ?>

        Simple methods that can greatly enhance security!
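
        (A minimal sketch of what such a helper might look like; the name Valid_Integer and its convention of returning an error string, or '' on success, are just this example's, not a built-in function:)

        <?
        // Hypothetical helper: returns an error message on bad input,
        // or an empty string (false-ish) when the value is acceptable.
        function Valid_Integer($value, $maxlen = 5)
        {
            if ($value == '' || preg_replace('/[0-9]+/', '', $value))
                return 'You must put in a number';
            if (strlen($value) > $maxlen)
                return 'Number you have entered is out of range';
            return '';
        }
        ?>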
        • Yeah, web applications used to be a mess, and bloody complex for doing rather basic tasks. Fortunately, most platforms are getting better, and more conservative, with age. For example, your PHP URL trick wouldn't work in a recent default installation of PHP, where "register_globals" (the automagic system that makes all variables from HTTP POST, URLs, cookies, and sessions the 'same') is "Off".

          I guess the moral of the story is that the web is young, and web platforms are even younger. With any luck, many of th
          • Magical!? Yes. It's really easy in fact. Simply do NOT use direct user input within an SQL statement. That seems really restrictive but it's not - it simply requires that you push back CHOICES to the user when creating your form (e.g. a drop-down listing "all", "sara", "john")... then you use the submitted values (validated to be only numbers) to back-fill your SQL statement. If you are really feeling risky, then at the very least make sure that every character you receive is [A-Za-z0-9 ], length verify it to make sure it matches the leng
            • What about addslashes()?

              Since addslashes() is only truly useful when magic_quotes are off, I wrap it all in a function that checks the status of magic_quotes. Defined as something like:

              dbprep($string, $length=-1);

              So that if $length>0 the input cannot be longer than $length.

              I do the same for database output, e.g. dbout().

              Combine addslashes with a few others, such as htmlentities(), and perhaps a regex or something to check for [a-zA-Z0-9], and you have *very* powerful input validation.

              Am I missing s
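
              (A rough sketch of what the dbprep() described above might look like; the magic_quotes handling and the truncate-to-$length behaviour are guesses at the poster's intent:)

              <?
              function dbprep($string, $length = -1)
              {
                  // If magic_quotes_gpc already escaped the value, undo it
                  // first so we never produce double-escaped strings.
                  if (get_magic_quotes_gpc())
                      $string = stripslashes($string);
                  // Enforce the optional maximum length.
                  if ($length > 0 && strlen($string) > $length)
                      $string = substr($string, 0, $length);
                  return addslashes($string);
              }
              ?>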
          • by Admiral Burrito ( 11807 ) on Tuesday December 30, 2003 @09:04PM (#7841256)
            BTW, anyone know of some magical code to block SQL injection vulnerabilities?

            Use placeholders. PEAR DB supports them, as do other database abstraction layers. As long as you _always_ use placeholders you will be safe against SQL injection.

            If you can't depend on PEAR DB (or similar) to be installed / at the correct version, you could quickly build yourself a function that takes a variable number of arguments: a SQL statement containing '%s' (for strings) or %d (for numerics) followed by potentially hostile arguments. Run each of the arguments through mysql_escape_string (or equivalent for your DB) then build your SQL statement using sprintf. Note: I haven't tested that approach; use with caution.
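
            (For illustration, a sketch of both approaches; the PEAR DB calls use its documented placeholder style, while safe_query() is just a hypothetical name for the hand-rolled fallback, and the DSN is a placeholder. Error handling omitted.)

            <?
            // With PEAR DB: placeholders keep the data out of the SQL text.
            require_once 'DB.php';
            $db = DB::connect('mysql://user:pass@localhost/shop');
            $res = $db->query('SELECT * FROM customers_table WHERE lname = ? AND email = ?',
                              array($lname, $email));
            $row = $res->fetchRow();

            // Fallback: sprintf-style helper that escapes every argument.
            function safe_query($format)
            {
                $args = func_get_args();
                array_shift($args);                     // drop the format string
                $args = array_map('mysql_escape_string', $args);
                return vsprintf($format, $args);
            }
            $sql = safe_query("SELECT * FROM customers_table WHERE lname='%s' AND email='%s'",
                              $lname, $email);
            ?>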

        • The option to register global variables is off by default since PHP 4, and for a very good reason: it's potentially insecure, as you demonstrated, and it also creates very sloppy code. Ever tried to debug a program you haven't written that uses those? You get variables that have never been declared being used, and it can be a pain to determine where exactly they come from. From an included file? $_POST? $_GET? Then you can get conflicts and it becomes a horrible mess - I can't understand why register_global

        • You're nuts if you use auto-global variables in PHP like that.

          Use $_GET,$_POST,$_COOKIE,$_SESSION, etc. They're your friends. No more URL-hacking fun.
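
          (In other words, read the values explicitly instead of trusting register_globals; a trivial sketch reusing the earlier example's names:)

          <?
          $login = isset($_POST['login']) ? $_POST['login'] : '';
          $pass  = isset($_POST['pass'])  ? $_POST['pass']  : '';
          $secure = false;               // can no longer be set from the URL
          if ($login == 'Admin' && $pass == '19ak129')
              $secure = true;
          ?>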
        • Not to ignore the rest of your comment, but wouldn't initially setting $secure to a null value solve that problem?

          <?
          unset($secure);
          if ($login == 'Admin' && $pass == '19ak129')
              $secure = true;
          if ($secure)
          { // do something very important.
          }
          ?>
      • I've seen this sort of thing at work all the time. Applications which rely on security through obscurity - I point out to the developers that the factors that they consider secret are in fact easy to find out and the usual response is that nobody else would think of that.

        In a controlled network environment, in which the cost of tampering with the data is relatively small, the most important requirement is traceability - if somebody breaks in, you just fire them. On the other hand, if an internal hacker can
  • by Brahmastra ( 685988 ) on Tuesday December 30, 2003 @04:12PM (#7838190)
    I believe code reviews with a large enough group of people are extremely useful. Yeah, it takes time and you get some irritating comments from a few people about how there is a space here or a comma there, but when multiple eyes look at it, someone always catches something you didn't. A few hours of extra pain on the programmers' side can prevent pain for millions in the form of Blaster-style worms, etc.
  • by billstewart ( 78916 ) on Tuesday December 30, 2003 @04:13PM (#7838192) Journal
    One of the first lessons we learned in CS100 was to always validate the input, assuming that it might be bogus or actively malicious. I've been appalled over the last 25 years at the number of products, developers, and companies that don't understand that. Most of the internet security problems we've seen have been from inadequate handling of input data, typically the buffer overruns that are so easy to program in C if you're not paying attention.

    The article's worth reading, and really does justify its "Level: Intermediate" label. Unlike when I was learning to program, there are lots of sources of input beyond your deck of punch cards (:-), and the author does a good job of explaining many of them, such as evil things that environment variables and file descriptors can be used for.

    • While I agree with you from a theoretical standpoint, I'm sure you are aware why things like this get overlooked in day-to-day corporate IT programming tasks.
      e.g. Manager says: write a UI that accepts a username & account, and then spits out user transactions. During the design phase, you invariably make the code hack-able so it's easy to test, i.e. I could put in "*" for the account and it would spit out transactions of ALL users, regardless of the username. This is a useful backdoor, especially in development tim
      • Maybe the testing methodology you cite isn't so useful then, if you have to change your code when you're done testing. Backdoors are only bad if you put them in in the first place. Test First Design [xprogramming.com] might be a better approach than "code an insecure backdoor as a test".
    • And the second thing I learned was to do it once and do it right. This means that input should happen in one place, the function should make sure the input will fit in the allocated space and contain only the proper data, and it may even take an argument for the maximum size the calling function expects.

      What amazes me is that people try to optimize their code by carefully minimizing their input function. It is input. Input is slow. Go somewhere else to optimize. Create a good input function and leave it

      • >> I know people bitch about how screwed up C is, but with the proper debug tools I had few problems.

        I'd say you had few problems because unlike so many in this business, you actually fathom the concept of validation, and know how to use C without letting it walk all over you. I don't know why that's difficult but apparently it is...
      • C isn't screwed up. It's a really great language, nice and transparent, does exactly what you tell it with no surprises. You can shoot yourself in the foot [noncorporeal.com] if you want.

        The problem is that too many people aren't sufficiently careful, including the people who wrote the gets() I/O subroutine, so their Internet implementations typically resemble large quantities of foot-sized bright-colored bull's-eye signs marked "YOURS ARE HERE" and large numbers of guns and bullets of various sizes distributed to lots of

    • The last time I checked, coders in corporations got promoted for churning out the largest number of apps with the largest number of features in the shortest amount of time. The people who promote them are legitimate users of the application, so they don't think to try to break it and see what happens. The security break-ins occur years after the original coder is gone.

      Besides - leaving some small holes in your code guarantees future work - which is hard to come by in this day and age.
  • A truly "secure" program would have no inputs, but that program would be useless.

    Not necessarily. What about a program which calculates pi or runs some kind of simulation? The 'input' is in the form of constants compiled into the executable. Technically there is no input, but the program is hardly useless.
    • > What about a program which calculates pi or runs some kind of simulation?

      Ok, so maybe he failed to specify that such a program would be *relatively* useless.

      What good is a program that calculates pi if you cannot specify how many digits?

      What good is a simulation if you cannot specify which parameters to use, and how long you want to run it?

      Answer: not very good
        • A simulation can be perfectly useful even though the data and length of operation have been hard-coded into it. Another useful type of program without input is system monitors and watchdogs -- perhaps specifically for security purposes. If a language makes it quick and easy to create simulations (e.g. the 'R' language) with hard-coded parameters, then the desire to reuse a program with different parameters may be lessened.
          • "system monitors and watchdogs" - how are those programs going to watch something without data being input in some form?
            • I don't think that every system call should be considered an input in this case. The kernel is not merely another process (though microkernels blur this considerably), but an integral and originating part of every process. A monitor can be very useful without ever getting input from another process, only accessing the kernel. In particular, stating 'no input = secure' implies that context, since the kernel is nearly always assumed secure.
      • What good is a program that calculates pi if you cannot specify how many digits?

        What good is a simulation if you cannot specify which parameters to use, and how long you want to run it?


        That information can be contained inside a constant and compiled into the final executable.
        • > That information can be contained inside a constant and compiled into the final executable.

          Can, yes, can. But they wouldn't be very useful.
          • That information can be contained inside a constant and compiled into the final executable.

            Can, yes, can. But they wouldn't be very useful.

            Not very useful? Here are examples which require no input and which I think are useful:

            - a cpu meter (no input, just system calls).
            - an aquarium simulation screen saver.
            - one-off applications which produce a static output from hard-coded input.
            - complex mathematical calculations
            - true, false, bg, fg, ps, top, logout commands/utilities in Linux
            - a clock display app
            • by sir99 ( 517110 )
              Your examples don't take user input, but most of them do take input of a different sort. The point of the article was that input can come from unexpected sources like environment variables, and that an attacker can sometimes subvert these inputs. The cpu meter, bg, fg, ps, top, logout, and clock programs all take input, in the form of system and library calls. Some of them also read input from configuration files.
              • The point of the article was that input can come from unexpected sources like environment variables, and that an attacker can sometimes subvert these inputs.

                Yes, I completely agree. But the article also says that programs which do not take any input are useless. It is *my* point to refute this claim.

                The cpu meter, bg, fg, ps, top, logout, and clock programs all take input, in the form of system and library calls.

                What do you propose? That software developers don't even trust the system calls of the
                • I just wanted to point out that you seemed to be using a more narrow definition of input than the article, which slants things a little differently.

                  What do you propose? That software developers don't even trust the system calls of the OS they are running on?

                  You can certainly trust most system calls, but one of the things the article mentioned was an attacker closing the std{in,out,err} file descriptors before execve(2)'ing your program. So on some systems, when you open(2) a file, you'd get the same han

                  • You can certainly trust most system calls

                    What if I'm writing a program to display a work under copyright or trade secret restriction? How can I be sure that the system calls haven't been patched with the cooperation of the owner of the machine to leak the work to a third party?

                    • You can certainly trust most system calls

                      What if I'm writing a program to display a work under copyright or trade secret restriction? How can I be sure that the system calls haven't been patched with the cooperation of the owner of the machine to leak the work to a third party?

                      You can't be sure that someone hasn't tampered with the machine. I don't think this is in the realm of what normal applications should have to worry about.

                      Nonetheless, if you do have trade secrets and you are worried about them
  • perl -T says it all (Score:3, Interesting)

    by DrJimbo ( 594231 ) * on Tuesday December 30, 2003 @04:14PM (#7838212)
    The Perl language has built-in "taint-checking" enabled via the -T command line switch which causes Perl to automatically keep track of all information that possibly came from a user input and not allow any of it to do anything harmful (basically end up on a command line or in a file name).
    • by Carnildo ( 712617 ) on Tuesday December 30, 2003 @04:20PM (#7838277) Homepage Journal
      The Perl language has built-in "taint-checking" enabled via the -T command line switch which causes Perl to automatically keep track of all information that possibly came from a user input and not allow any of it to do anything harmful (basically end up on a command line or in a file name).

      There are other harmful things that data can wind up doing that Perl can't check for. Things like being used as SQL queries, or the classic "pass the price as a CGI parameter" mistake. Taint checking is more useful as a reminder that you need to validate input than as a way of keeping potentially bad input isolated.
      • Things like being used as SQL queries...

        In conjunction with the -T switch, you can also use the DBI placeholder methodology, which takes care of SQL-injection vulnerabilities:

        # isolating relevant Perl code
        @row_data = $database_handle->selectrow_array( 'SELECT * FROM customers_table WHERE lname=? AND email=? LIMIT 1', undef, $input1, $input2 );
        $statement_handle = $database_handle->prepare( 'INSERT INTO customers_table (lname,email) VALUES (?,?)' );
        $statement_handle->execute( $input1, $inp

      • There are other harmful things that data can wind up doing that Perl can't check for.

        This isn't just a Perl problem, it's a problem with any language.

        Things like being used as SQL queries

        I don't see why people get so bunged up about SQL injection attacks. The main reason they're effective is that so few developers know the right way to move data between their programs and SQL queries. There's a very simple solution to this problem that can be used in Perl and just about every language I've ever s

    • Not wishing to start a flame war, but for PHP users: turn on safe mode [php.net]. That blocks exec() and similar "dangerous" functions. If needed you can turn them back on in <Directory> statements in the Apache config.

      Good time to mention magic_quotes_gpc [php.net] and register globals [php.net] as well.

      Of course none of these are a replacement for good programming practices in the first place. magic_quotes can get annoying if you do filter input properly, as it's easy to end up with double-escaped strings (e.g. \\\'test\\\' instead
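
      (You can also check those settings from inside a script; a small sketch, assuming you would rather bail out than run under a risky configuration:)

      <?
      // Refuse to run with register_globals on, and undo magic_quotes_gpc
      // escaping up front so strings don't get escaped twice later.
      if (ini_get('register_globals'))
          die('This script requires register_globals = Off');
      if (get_magic_quotes_gpc())
          $_REQUEST = array_map('stripslashes', $_REQUEST);   // flat values only
      ?>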
      • PHP = bad architecture. Many of the popular PHPisms are just bad design and security misfeatures waiting to happen.

        The fact that features like addslashes, magic quotes and register globals exist shows how little thought goes into PHP's design.

        Your input filters should be designed to filter data so that your _program_ can cope with it, NOT so that other programs can cope with it.

        After your program processes the input, you use the necessary _output_ filters for each different output so that other programs can c
        • What part of "Not wishing to start a flame war" was it that went rocketing over your head? All of it, along with the rest of the post, apparently.

          DB quoting/filtering should be left to the Database API.

          What, like using separate commands for different [php.net] databases [php.net]? You can look the rest up yourself; I'm bored of reading the PHP manual for other people.

          Register globals is there to allow backwards compatibility. Everyone, especially php.net, shouts from the hills about how insecure it is and how it shouldn't be
          • "What, like using separate commands for different databases?"

            That's not what I'd call a database API. That's pretty silly and error prone. Take a leaf from JDBC or Perl DBI, etc, or come up with something better, not worse.

            PHP should at least have a DB API that supports bind vars or placeholders. Or some other API so that data is automatically escaped the way it should be, and the risk of data being treated as commands is low/nil and it is easier to audit and find uncompliant DB related code. If the PHP dev
            • There is the PEAR DB module, but I see your point. A "PDBC" or similar would be a good move.

              Personally I use a tiny SQL class that can be changed to allow a different DB to be used. It seems a tidier solution since the class is only about 20 lines.

              Can't argue that register globals was a bad idea in the first place, but allowing users to still run older software that needs it is a good idea. Shame they keep breaking other major functions in minor version increments though, really; kinda makes keeping the stupid fun
              • PEAR DB looks decent enough. Actually it looks rather like Perl DBI. Getting more people to standardize on PEAR DB and deprecating mysql_escape_string, addslashes, etc. would be a good idea.

                Breaking major functions in minor version increments is another sign of how seriously the PHP devs treat PHP. Not very reassuring - especially if you need to update PHP for, say, a security issue (we had to do a few of those). Only a few people will have tests for their entire code, and even then you never really know and have to g
    • The Perl language has built-in "taint-checking" enabled via the -T command line switch which causes Perl to automatically keep track of all information that possibly came from a user input and not allow any of it to do anything harmful (basically end up on a command line or in a file name).

      Taint checking isn't perfect, may have bugs in its implementation, and can't cover all possible cases. Taint checking is a wonderful tool, but it should only be one layer in a multi-layered defense. Don't view it as

  • by Dr. Bent ( 533421 ) <ben&int,com> on Tuesday December 30, 2003 @04:22PM (#7838293) Homepage
    It is a widely accepted engineering maxim that systems should be designed so that it is difficult to use them improperly. This is why (for example) a 110 volt plug will not fit in a 220 volt outlet. Developers who are concerned about the quality of the software they make would do well to follow this rule, and not just for security reasons. You should verify input data as early and as rigorously as possible, wherever you can. Take advantage of things like XML validation and text box constraints to make it hard for users to enter bad data. And always follow the Fail-Fast principle: if something goes wrong, complain! Loudly! Don't let the user continue working if something has gone wrong. It's better to crash than to produce an erroneous result.

    Just a little advice from a developer who's made enough mistakes to know better.
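
    (A tiny fail-fast sketch in the spirit of the PHP examples elsewhere in this discussion; the helper name, field names and patterns are made up:)

    <?
    // Fail fast: validate as early as possible and stop loudly on bad data
    // rather than limping on and producing a wrong result.
    function require_match($value, $pattern, $what)
    {
        if (!preg_match($pattern, $value))
            die("Invalid $what - aborting.");
        return $value;
    }
    $qty  = require_match(isset($_POST['qty'])  ? $_POST['qty']  : '', '/^[0-9]{1,5}$/', 'quantity');
    $date = require_match(isset($_POST['date']) ? $_POST['date'] : '', '/^\d{4}-\d{2}-\d{2}$/', 'date');
    ?>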
    • "This is why (for example) a 110 volt plug will not fit in a 220 volt outlet"

      You do not have any children, now do you? ;-)

      Take advantage of things like... text box constraints to make it hard for users to enter bad data

      That's good and necessary advice, but it's not sufficient, depending on your environment. If you're programming for the web, then you absolutely cannot rely on such things. Of course you should always set such constraints in the HTML where possible, but you *still* have to validate the inputs fully in your code.

      In case the reason why isn't obvious, it's because URLs are very easy to hand craft. There's no way
      • I'm not an experienced software engineer, but I think it would be good practice to first give the HTML pages a "development" mode where no input validation is done at that stage. Even better, all input should be entered from text boxes, so that you can give your program arbitrary input without handwriting URLs much. Front-end checking (such as client-side javascript) can be added at the same time or after the core engine is debugged, but it is still better to preserve the "development" mode in the code (f
        • That's pretty much the way we work - we never rely on the client having javascript enabled, as that could potentially prevent some people from using the site. We don't have a specific "development mode", but on any normal project the back-end code and HTML development are done by different people.

          Basically, we have interface developers who take the Photoshop images that the designers/art directors produce, and use them as a blueprint to create the HTML pages. They also produce any javascript that we do use
        • That's a good point. I have seen developers mistake JavaScript for sufficient input validation. The proper use of JavaScript validation is simply to give a legitimate user a proper error message quickly, without actually needing to perform a transaction with the server that will fail. The server must still re-validate the input.

      • Well, the problem is that those JavaScript checks happen on the client side and can be overridden by those smart enough to know what they are doing.

        So you must always assume, even if you used JavaScript, that there may be a problem with the input data. One must always check the input on the server side, because one never knows what an evil computer-using criminal might send in.

        Too many use JavaScript for validation even though it's useless for it. One always has to do validation on the server end any
    • by Rich0 ( 548339 ) on Wednesday December 31, 2003 @01:31PM (#7846303) Homepage
      Keep in mind that the 110 and 220 plugs are designed to defeat accidental mixups. Computer input validation is generally designed to do the same. Hardening software against an attack is more analogous to giving your engineer the task of designing a plug and outlet such that it is physically impossible to plug anything but that one particular plug into the outlet, with the understanding that somebody with a good knowledge of engineering will try to defeat the design.

      Software is required to do a lot more than any physical security measure in existence. Your webserver could come under attack by any electronic measure that a host of trained software engineers in another country can conceive of. Chances are that the most a bank vault is designed to handle is a dozen guys with small arms, rudimentary safe-cracking gear, and some small explosives. If the US Army showed up with an M1 tank and 1000 tons of C4, the safe wouldn't last long. However, such a large-scale intrusion is unlikely to escape the watch of the police for long. On the other hand, a remote attack against a webserver can run for months without much being done about the attackers if they're in a rogue nation.

  • And why should anyone be surprised, in this age of the "I read a book on VB last week and now I'm a software engineer!" environment?
    I am not surprised that simple things like this are rehashed over and over. This is more suited to the group of programmers who will sort data based on string comparisons instead of learning how to use a real algorithm to do it, or keep writing static forms instead of learning how to use a loop with a DB backend - because they don't understand true programming c
  • What is so interesting about this article?

    I just wrote a document about secure programming and I found dozens of better articles about the exact same things like: here [slashdot.org]
  • by Eryq ( 313869 ) on Tuesday December 30, 2003 @04:30PM (#7838359) Homepage
    Perl programmers interested in writing secure scripts should *definitely* know about the -T (taint checking flag).

    From the FAQ:

    As we've seen, one of the most frequent security problems in CGI scripts is inadvertently passing unchecked user variables to the shell. Perl provides a "taint" checking mechanism that prevents you from doing this. Any variable that is set using data from outside the program (including data from the environment, from standard input, and from the command line) is considered tainted and cannot be used to affect anything else outside your program. The taint can spread. If you use a tainted variable to set the value of another variable, the second variable also becomes tainted. Tainted variables cannot be used in eval(), system(), exec() or piped open() calls. If you try to do so, Perl exits with a warning message. Perl will also exit if you attempt to call an external program without explicitly setting the PATH environment variable.
    • Perl's taint checking was (at my last check) pretty easy to get around, allowing users to bypass the taint checks without much effort. Also, Perl 5.6.0 doesn't do taint checks (even with -T) on lists passed to exec() and system().
      • by Eryq ( 313869 )
        IIRC, the goal of -T is not to prevent you from *ever* using tainted data; it was to prevent accidents: running shell scripts with insecure $PATH variables, etc.

        Untainting a variable by extracting a subpattern usually means "I know what I'm doing: I promise that I'll extract a safe substring from this". (Whether the developer *actually* knows what they're doing is, sadly, not detectable by Perl).

        (As for -T not affecting a list passed to exec()/system(): that does seem odd, but maybe there's some Larger P
      • The purpose of taint checking is more as a debugging tool than an absolute check on illegitimate data. This fits with the general Perl view that the function of the language is to assist the programmer rather than to constrain him. Thus if you turn on taint checking, Perl will stop you from doing things directly with tainted data, but it lets you "launder" the data by running it through a regexp. This isn't a perfect solution, but anything radically better would be all but impossible to code.

  • by Anonymous Coward
    Michael dropped the soap again. Be gentle.
  • Yeah, you can talk about inputs to programs and how miscellaneous and unwanted data gets in there, but watch for buffer overruns, because that's what can really kill your program.
  • Is news to others. Many "programmers" out there write code that does not do any error checking or catching, and the result is all the crapware that we see today. We were all warned in our programming classes about memory leaks and buffer overflows, but they are still very prevalent in today's software. Perhaps we should all look harder at our code before selling it off as a final product.
  • Dividing (Score:3, Interesting)

    by chrootstrap ( 699364 ) <(moc.oohay) (ta) (partstoorhc)> on Tuesday December 30, 2003 @04:38PM (#7838421) Homepage
    The recommendations on dividing the program into insecure and secure binaries to handle setuid access in GUIs can very properly be extrapolated to non-graphical programs. This is a very good strategy for allowing relatively wild programs access to important facilities, and can involve many types of IPC, including memory-mapped files (with proper protection) and sockets. To really secure a client program that needs access to criticals, put it in a chroot jail and have it communicate with an outside process through (e.g.) a socket. Separating programs into safe and unsafe sections and applying different security techniques to each is far more effective, imo, than trying to secure a single, large application. It can also provide many other benefits of encapsulation, etc. The security onus shifts to handling client requests in the secure section, which is usually much easier to do.
  • by Anonymous Coward
    Java [cgisecurity.com]
    XML [cgisecurity.com]
    .NET [cgisecurity.com]
  • How about when the application is on the web? JavaScript? Server roundtrips?
    • How about when the application is on the web?

      When you're developing web applications, you've got to deal with the input in a few ways to be sure you can trust it:

      Do the user a favor and validate the input on their end using javascript.

      Repeat the validation on the server. Yes, that's right, repeat it. They could have javascript turned off, or could be deliberately bypassing the validation.

      When validating input, don't simply check that you're getting what you expect. Also confirm that you aren't getting
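
      (A sketch of the server-side repeat, in PHP like the earlier examples; the field name and limits are made up:)

      <?
      // Repeat on the server whatever the client-side JavaScript checked;
      // the browser-side check is only a courtesy and is easily bypassed.
      $age = isset($_POST['age']) ? $_POST['age'] : '';
      if (!preg_match('/^[0-9]{1,3}$/', $age))
          die('Invalid age');
      ?>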
  • Very good writer! (Score:1, Insightful)

    by Anonymous Coward
    Another excellent article by David... oddly enough, I was reading his Program Library HOWTO (http://www.dwheeler.com/program-library/) just the other day to learn about dynamically loaded libraries in Linux.
  • Watch the USER! (Score:3, Interesting)

    by anubi ( 640541 ) on Tuesday December 30, 2003 @05:32PM (#7839110) Journal
    As long as we have no control over what the user tries to install and run in his machine, we are always going to be vulnerable to trojans.

    The proliferation of proprietary formats we are seeing that all do basically the same thing, like sending sound files over the net or viewing video clips, is encouraging mass downloads of programs from third-party providers. These programs may well do what they said they would do, but with all this DMCA crap going on, it's getting harder and harder to see if they are doing a little extra that wasn't in the bargain, like doing zombie work on the side to assist in little capers the originating author needs to pull off.

    What firewall or systems programming can stop a deliberately malicious program installed by an ignorant user? Say the program "demands" access to the internet for "verification/auto-update"; then you have to set the firewall to allow this program access to the net. Now what happens? It's like giving your car keys to a valet parking agent. You just have to trust that he's only going to do what he says he will do. To add insult to injury, consider that you generally have signed away any recourse you have when you click that "I agree" button confirming you have read and understood the EULA.

    What irritates me so much about these "plug-ins", "macros", and "scripts" is that they are indeed executable. Nothing says the malicious person coding these things is gonna follow the rules. He is free to code some real nasties in assembler if he so desires. The state of music file distribution I find really disturbing. We have an MP3 format which is generally well understood, yet it seems everybody jumping on the bandwagon wants to use proprietary formats which are not generally understood, leaving us all open to the risks resulting from ignorance.

    As a public, we aren't helping much. We agree to any damn thing they print in the EULA. As a public, we should INSIST that if we are to be kept ignorant by law of how something works, and that something does something malicious, then its maker should have full responsibility for the problems it generated.

    Basically I am proposing a trade. If you want the protection of law to keep the public ignorant, then you waive indemnity.

    We have a patent system and copyright system in place. Both were implemented on the concept that the work was to be in the open. Why isn't an encrypted work treated as a "trade secret" and denied protection by copyright or patent? Basically, any encrypted work would be considered a "trade secret", not in the open, hence not eligible for protection by the patent or copyright system at all. But to make this happen, it's gonna take the will of a lot of people to pressure the legislators to enact it. Pressure as in "if you do not do this, start polishing your resume."

  • by merlyn ( 9918 ) on Tuesday December 30, 2003 @05:34PM (#7839155) Homepage Journal
    I wrote a similar article recently for SysAdmin magazine [stonehenge.com], although the focus is more about Perl.
    • I often find that the best way to talk about security practices is by illustrating general points with specific counterexamples. (Actually, this is a helpful technique in general - see, for example, the book "Counterexamples in Topology")

      For example, when talking about being aware of \0 characters, you could mention something like this: a friend of mine once wrote a jukebox-like web application that allowed people to queue requests on his machine. There was a certain input parameter that was restricted t
  • by pcause ( 209643 ) on Tuesday December 30, 2003 @05:49PM (#7839405)
    The Kernighan & Plauger book "Elements of Programming Style" dated 1979 talked extensively about the need to validate all inputs to subroutines and from the user. This is *not* new, it is just that few programmers have the discipline to follow the rules.

    The issue is making *no* assumptions about anything. The programmer *thinks* the file will be written by another piece of code that a team member is writing. But that program has a bug, or three years from now other programs are creating the file and don't know about some verbal discussion about the field data. It takes great diligence and paranoia, and management that allows you the time in the schedule to do this.
    • Yes, I am a believer in defensive programming, but I am not sure that defensive programming is the golden hammer. Verity Stob made a remark about taking a sick program and filling it with try-catch blocks to try to recover from every possible error condition -- I believe she called it "nailing a corpse on a tree" or some such thing. And her other remark was "the only place we seem to get exceptions is in destructors, so what's the point?" That had me on the ground in tears of laughter because destructors
  • by MellowTigger ( 633958 ) on Tuesday December 30, 2003 @06:31PM (#7839922) Homepage

    The article is interesting, and they are right to point out the many dangers of relying on environment variables. Where I work (unidentified to protect the incompetent), programmers are not allowed access to the Unix command line. Instead, all user exits are trapped, and programmers are forced to navigate through a homegrown menu system.

    This menu system relies on an environment variable ${WHATCANIDO} to store a list of permissions available to that user. Of course, I changed my .profile to add my own extension to the permission list. I even nicely dated, initialed, and described my change. ;)

    export WHATCANIDO=world_domination:$WHATCANIDO # 2000/10/31 tw Too easy

    So now when I get frustrated with the absurdity of this arrangement, I just echo the environment variable to remind myself why I'm right and they're wrong.

    > echo $WHATCANIDO
    world_domination: [deleted]

  • by G4from128k ( 686170 ) on Tuesday December 30, 2003 @06:40PM (#7840029)
    Somewhere along the line every application must trust something. At the very least, BIOS settings and environment variables that are owned by deeper layers of the OS must be trusted because they are inaccessible or indecipherable at the application layer. Reaching too far would break encapsulation and create brittle dependencies. An application can only check the variables and direct inputs that it has access to.

    I don't argue against validating inputs. Certainly all of the direct inputs to an application should be assumed to be untrustworthy unless a secure checksum validates that the inputs are identical to some previously validated inputs. Checking the inputs (or environment variables) of immediately adjacent processes is probably also warranted (as a redundant "brother's keeper" policy).

    The real problem comes if the OS has faulty validation methods. (And I won't get into the necessity of trusting the hardware, or bugs such as those that plagued the early Intel 586.00001 processors.) If I check the validity of a user, filename, or geographically localized data format (e.g., a date), then my application is dependent on the quality of the OS's validator (and a lack of intervening malware).
  • Almost everything in this article only applies because of hacker languages like C and C++, which Linux and FreeBSD use for virtually everything. It is so easy to forget to double-check bounds, input format, pointers, and all the other usual suspects. It's bizarre how programmers will use these error-prone languages for marginal performance gains just because their ego and haxor status is on the line. Sure, the kernel and drivers need to be in C. Sure, a Java VM needs to be in C. Sure, C++ is a good lan

    • by BattleTroll ( 561035 ) <battletroll2002@yahoo.com> on Wednesday December 31, 2003 @12:27PM (#7845762)
      "But almost nothing else should be written in C/C++"

      What world are you living in? Blaming poor technique on the tool used is moronic. There are ample examples of poorly written, poorly secured Java code that invalidate all of the premises in this rant. I've seen hard-coded passwords baked into Java source that were visible through a 'strings' call. Someone forgets to obfuscate his or her classes, and the entire structure of the program is available through a decompiler. Sure, the JVM protects one from buffer overruns and the like, but don't for one minute think that programming in Java prevents stupid errors from exposing you to vulnerabilities.

      Not to mention there are areas where Java is not the silver bullet you describe. If you need precise control over your memory allocation, Java is not the tool to use. If your application requires precise timing, Java is not the tool to use. Need control over the placement of allocated memory? Writing your own transport layer? Need hooks into the kernel?

      The prime directive still holds true - use the correct tool for the job at hand. Follow the lemmings of "this tool is the only one you need" at your peril.

      • But a dangerous tool can still be called a dangerous tool.

        An important difference between writing in C or some other unsafe language, and writing in Java/Perl or some other safer language, is that a simple mistake in the former tends to let the attacker run arbitrary machine code of his own choice, whereas the latter tends not to.

        Of course there will be "attacker runs arbitrary SQL code/etc" but these are at least at a higher level and can usually be handled with safe interfaces/APIs. Whereas C is full o

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight
