Security Software

IEEE Guides Software Architects Toward Secure Design

msm1267 writes: The IEEE's Center for Secure Design debuted its first report this week, guidance for software architects called "Avoiding the Top 10 Software Security Design Flaws." Developing guidance for architects rather than developers was a conscious choice, made to steer the conversation around software security away from exclusively finding bugs and toward the design-level failures that lead to exploitable vulnerabilities. The document spells out the ten common design flaws in a straightforward manner, pairing each with a lengthy explanation of the inherent weaknesses in that area and how software designers and architects should take the potential pitfalls into consideration.
Comments:
  • by drkstr1 ( 2072368 ) on Friday August 29, 2014 @03:20PM (#47786469)

    Here it is for anyone who didn't bother to RTFA

    1. Earn or Give, but Never Assume, Trust
    2. Use an Authentication Mechanism that Cannot be Bypassed or Tampered With
    3. Authorize after You Authenticate
    4. Strictly Separate Data and Control Instructions, and Never Process Control Instructions Received from Untrusted Sources
    5. Define an Approach that Ensures all Data are Explicitly Validated
    6. Use Cryptography Correctly
    7. Identify Sensitive Data and How They Should Be Handled
    8. Always Consider the Users
    9. Understand How Integrating External Components Changes Your Attack Surface
    10. Be Flexible When Considering Future Changes to Objects and Actors

    • Number 5 is the most important. It is about defending against bad input. When an object (some collection of functions and mutable state) has a method invoked, its preconditions must be met, including validation of the message and of the current state. A lot of code has no well-defined interfaces (global state). Some code has state isolated behind functions, but no documented (let alone enforced) preconditions. The recommendation implies a common practice in strongly typed languages: stop using raw ints and strings.
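That last point (replacing raw primitives with validated value types) can be sketched in Python; the `Email` class and its regex are my own illustration, not from the comment or the IEEE report:

```python
# Minimal sketch: a validated value type instead of a raw string.
# Construction enforces the precondition, so every holder of an Email
# instance can rely on it being well-formed.
import re

class Email:
    _PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def __init__(self, raw: str):
        if not self._PATTERN.match(raw):
            raise ValueError(f"invalid email: {raw!r}")
        self.value = raw

def send_welcome(address: Email) -> str:
    # No re-validation needed: the type itself is the documented,
    # enforced precondition.
    return f"queued mail to {address.value}"
```

Any function that accepts `Email` instead of `str` gets the precondition enforced at the type boundary rather than scattered through the call graph.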
      • by gweihir ( 88907 )

        Sorry, but no. For example, one of the most important threats these days in the banking industry is data leakage. No amount of input data validation is going to help one bit there. These aspects are all critical. Mess up one, and all is lost. That is what makes software security so difficult: You have to master the whole problem space before you can produce good solutions. Incidentally, there are rules "11: Always consider the business case" and "12: Do a conclusive risk and exposure-analysis and rate and d

        • By what technical means do you prevent data leakage, though? You need to specify what the system (and its users) will NOT do. Defending against bad input (and bad state transitions) is the foundation for everything else, because otherwise there is no technical means of enforcing any other security property. The attacker's game is to reach and exploit states that are undefined or explicitly forbidden. Think of the Heartbleed bug as an example of a failure in 5 mooting 6: bad input causes a web server to leak the contents of memory, including private keys.
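The Heartbleed pattern mentioned above, trusting a caller-supplied length field, can be simulated in a few lines; the buffer layout and function names here are hypothetical, for illustration only:

```python
# Simulated process memory: the request payload happens to sit next to secrets.
SECRET = b"-----BEGIN PRIVATE KEY-----"

def handle_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    memory = payload + SECRET
    # Flaw: the attacker-controlled length is never checked against the payload.
    return memory[:claimed_len]

def handle_heartbeat_validated(payload: bytes, claimed_len: int) -> bytes:
    # Fix (design flaw #5): explicitly validate the input before using it.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload size")
    return payload[:claimed_len]
```

An over-long `claimed_len` makes the first version echo adjacent secret bytes; no amount of correct cryptography (#6) helps once that has happened.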
  • I read the featured article, and I see ways that publishers could misuse some of the recommendations as excuses for profit-grabbing practices that plenty of Slashdot users would detest.

    For example, some organizations will claim a real business need to store intellectual property or other sensitive material on the client. The first consideration is to confirm that sensitive material really does need to be stored on the client.

    Video game publishers might take this as an excuse to shift to OnLive-style remote video gaming, where the game runs entirely on the server, and the client just sends keypresses and mouse movements and receives video and audio.

    watermark IP

    I'm not sure how binary code and assets for a proprietary computer program could be watermarked without needing to separately digitally sign each copy.

    Authentication via a cookie stored on a browser client may be sufficient for some resources; stronger forms of authentication (e.g., a two-factor method) should be used for more sensitive functions, such as resetting a password.

    For small web sites that don't store financial or health information, I don't see how this can be made affordable. Hardware two-factor typically incurs the cost of shipping a token to each client. Even if you as a developer can assume that the end user already has a mobile phone and pays for service, there's still a cost for you to send text messages and a cost for your users to receive them, especially in the United States market, where not all plans include unlimited incoming texts.

    a system that has an authentication mechanism, but allows a user to access the service by navigating directly to an “obscure” URL (such as a URL that is not directly linked to in a user interface, or that is simply otherwise “unknown” because a developer has not widely published it) within the service without also requiring an authentication credential, is vulnerable to authentication bypass.

    How is disclosure of such a URL any different from disclosure of a password? One could achieve the same objective by changing the URL periodically.
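Treating the URL itself as a credential only works if the secret part is actually unguessable and rotated, exactly like a password. A minimal sketch (the base URL is a placeholder):

```python
import secrets

def make_capability_url(base: str = "https://example.com/reset/") -> str:
    # 32 random bytes (~256 bits of entropy) from a CSPRNG: as hard to
    # guess as a strong password, and just as much in need of expiry
    # and rotation after use.
    return base + secrets.token_urlsafe(32)
```

The difference from an "obscure" URL a developer simply never published is that this one is generated, has measurable entropy, and is expected to expire.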

    For example, memory access permissions can be used to mark memory that contains only data as non-executable and to mark memory where code is stored as executable, but immutable, at runtime.

    This is W^X. But to what extent is it advisable to take this principle as far as iOS takes it, where an application can never flip a page from writable to executable? This policy blocks applications from implementing any sort of JIT compilation, which can limit the runtime performance of a domain-specific language.

    Key management mistakes are common, and include hard-coding keys into software (often observed in embedded devices and application software)

    What's the practical alternative to hard-coding a key without needing to separately digitally sign each copy of a program?
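One common answer to the question above is to keep the key out of the binary entirely and inject it at deploy time (environment variable, secrets manager, HSM), so every copy of the program stays byte-identical. A sketch, with a hypothetical variable name:

```python
import binascii
import os

def load_api_key(env_var: str = "APP_API_KEY") -> bytes:
    # The key is injected at deploy time, so no per-copy signing is
    # needed and the key never appears in the shipped artifact.
    raw = os.environ.get(env_var)
    if raw is None:
        raise RuntimeError(
            f"{env_var} not set; refusing to fall back to a baked-in key")
    return binascii.unhexlify(raw)
```

This sidesteps the per-copy problem for server-side software; for software running on hardware the attacker owns, a key in memory is reachable regardless of where it was loaded from.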

    Default configurations that are “open” (that is, default configurations that allow access to the system or data while the system is being configured or on the first run) assume that the first user is sophisticated enough to understand that other protections must be in place while the system is configured. Assumptions about the sophistication or security knowledge of users are bound to be incorrect some percentage of the time.

    If the owner of a machine isn't sophisticated enough to administer it, who is? The owner of a computing platform might use this as an excuse to implement a walled garden.
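The alternative the report implies is a secure-by-default first run: instead of an "open" configuration window, generate a per-install credential and show it once during setup. A sketch (the function name is mine):

```python
import secrets
import string

def first_run_credentials() -> tuple:
    # Instead of a well-known default ("admin"/"admin"), generate a
    # random per-install password and display it once during setup.
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    return ("admin", password)
```

This requires no sophistication from the first user and no walled garden: the unsophisticated owner is protected by the default, not by the platform vendor.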

    On the other hand, it might be preferable not to give the user a choice at all; for example, if a default secure choice does not have any material disadvantage over any other; if the choice is in a domain that the user is unlikely to be abl

    • "How is disclosure of such a URL any different from disclosure of a password? One could achieve the same objective by changing the URL periodically." I believe the article is saying that you don't just blindly allow the use of URLs without verifying that the caller is within an authenticated session. This has nothing to do with changing passwords.

      "Google tried this with Android by listing all of an application's permissions up front at application installation time. The result was that some end users ended
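The parent's point, that every handler must verify the session rather than rely on URL obscurity, can be sketched with a decorator; the session store and names here are hypothetical:

```python
# Sketch: every handler checks the session, so an "obscure" URL is
# never an authentication bypass.
SESSIONS = {"sess-123": "alice"}  # hypothetical session store

def require_session(handler):
    def wrapped(session_id, *args, **kwargs):
        user = SESSIONS.get(session_id)
        if user is None:
            raise PermissionError("no authenticated session")
        return handler(user, *args, **kwargs)
    return wrapped

@require_session
def secret_report(user):
    # Reached only through the decorator, never directly via its URL.
    return f"report for {user}"
```

The check lives in one place and applies to every route, published or not, which is the design-level fix the report is after.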

      • I believe the article is saying that you don't just blindly allow the use of URLs without verifying that the caller is within an authenticated session. This has nothing to do with changing passwords.

        A newly installed web application has to create a first authenticated session that lets the founder set his own password (or set his own e-mail address in order to recover his password) and grant himself founder privileges. The URL of this first session is effectively a password (or more properly a substitute for a password), though I'll grant that it should be disabled through other means most of the time.

        But if you don't want any app to do anything, why do you have a device capable of running apps?

        I see at least two problems.

        The first is that Android's permissions are far too coarse-grained. SD
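The bootstrap URL idea a few comments up (a first-run credential that is disabled after use) can be sketched as a one-time token; the class and the timeout are my own illustration:

```python
import secrets
import time

class SetupToken:
    # One-time bootstrap credential embedded in the first-run URL;
    # invalidated after a single use or after a timeout.
    def __init__(self, ttl: int = 600):
        self.value = secrets.token_urlsafe(32)
        self.expires = time.time() + ttl
        self.used = False

    def redeem(self, presented: str) -> bool:
        ok = (not self.used
              and time.time() < self.expires
              and secrets.compare_digest(presented, self.value))
        if ok:
            self.used = True
        return ok
```

That makes the "URL as first password" explicit: high entropy, constant-time comparison, and a built-in reason it stops working once the founder account exists.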

  • While the brochure referenced is nice, anybody who needs it has no business building anything security-critical. It takes a lot of experience and insight to apply the described things in practice in a way that is reliable, efficient, and secure, and that respects business aspects and the user. Personally, I have more than 20 years of experience with software security and crypto, and looking back, I think I became a competent user, designer, and architect only after 10 years on this path. The problem here is

    • I'd say security failure is partly due to incentive alignment failure for developers.

      Bad security design is a problem that's going to bite, but usually a little later, after version 1 is out the door and everyone's paid.

      Not meeting the pretty much arbitrary and insanely optimistic delivery schedule is going to bite developers right now.

      Corners will be cut, even if some of the developers know what SHOULD be done.

      In general, almost every architectural aspect of software, including security, (well-factoredness

      • by gweihir ( 88907 )

        While that certainly plays a role, it is a minor one. It does stand in the way of solving things, but if you do not have developers who can do secure software engineering competently (and that is the normal case), then giving them too little time and money to do it does not matter. The other thing is that people who actually understand software security are much less likely to declare something finished or secure than those with only a superficial understanding of things. Software
