Security

Graphing Randomness in TCP Initial Sequence Numbers

Saint Aardvark writes "This is neat: Graphic visualization of how random TCP Initial Sequence Numbers really are for different OSs. It's a great way of seeing how secure a TCP stack really is. Cisco IOS is great; OS9, OpenVMS and IRIX aren't. Posted to the ever-lovin' BugTraq mailing list." This is a follow-up to the previous report.
  • amazing (Score:5, Funny)

    by Phosphor3k ( 542747 ) on Wednesday September 11, 2002 @08:11AM (#4236078)
    He must be running a server with no tcp stack. heh.
  • by Tinfoil ( 109794 ) on Wednesday September 11, 2002 @08:18AM (#4236109) Homepage Journal
    I propose a new flag in the standard TCP/IP packet. We shall call this the Slashdot Flag. The general purpose of this flag is to state whether or not the server's bandwidth can handle the load a Slashdot posting imposes. If the flag is set false, Slashcode will automatically generate numerous, random, 'this page has been slashdotted' posts requesting a link to a mirror.

    That being said, the page *is* finally loading up so I'm going to go look at some pictures now.
  • by Quixote ( 154172 ) on Wednesday September 11, 2002 @08:20AM (#4236126) Homepage Journal
    The story's barely out on /. and it's already slashdotted.

    The /. story submission page should have a checkbox: "Please mirror the contents of this page (including graphics, which Google doesn't cache) before posting the story".

  • Original report (Score:5, Informative)

    by Caine ( 784 ) on Wednesday September 11, 2002 @08:22AM (#4236132)
    Original report here:

    http://razor.bindview.com/publish/papers/tcpseq.html [bindview.com]
  • that Linux is apparently beneath their contempt. Do they know something we don't know?

    (To those tempted to reply that "they know it's secure", I'd like to point out that assumed security without testing is exactly what keeps getting MS in trouble)

    • You will find the original report here [bindview.com], and you might like to check out the linux section. Credit to a previous poster for that link, however.
    • ... Linux is apparently beneath their contempt. Do they know something we don't know?

      From section 3 of the linked article:

      "Several systems, such as Linux, use the same, satisfactory ISN generator as the one used a year ago, and because of that, are
      not covered here in any more detail.
      "
      • satisfactory ISN generator

        That is, if you consider Linux's attack feasibility of 0.05% satisfactory, compared with OpenBSD's 0.00%, for example.

  • How about GNU/Hurd (I can't see if it's in the graph because of the /. effect)? Last time I installed it (approximately six months ago) there was no random generation device...
  • by Nosher ( 574322 ) <simon@nosher.net> on Wednesday September 11, 2002 @08:36AM (#4236182) Homepage
    Let's face it: current computers and humans are both as bad as each other at randomness. The fact that computers have to "calculate" randomness is a bad sign in itself, and the humans who program these computers are almost utterly incapable of perceiving true randomness anyway. I'm waiting for the day when the national lottery comes up 1,2,3,4,5 with a bonus ball of 6. Society will crumble, public enquiries will be called for and conspiracy theorists will have something to bang on about for years. I think that, barring the sudden development of Quantum x86 chips (at which point randomness becomes "real" and encryption becomes pretty much unbreakable [theregister.co.uk]), the only real solution for decent randomness must surely be TCP/IP seeding based on Lava Lamps [sciencenews.org].
    • Let's face it: current computers and humans are both as bad as each other at randomness. The fact that computers have to "calculate" randomness is a bad sign in itself, and the humans who program these computers are almost utterly incapable of perceiving true randomness anyway.

      Unless, of course, they're mathematicians, in which case they have a host of very powerful techniques for getting quite good evaluations of randomness, and a wide selection of sophisticated algorithms for producing really good pseudo-random sequences.

      In summary, you are both overstating the problem and ignoring the vast body of experience built up for dealing with it.

      You can also buy true random number generator cards off the shelf if you *really* can't live with a software solution. But be warned, these are susceptible to external influences (which bias them) and tend to be quite slow compared to PRNG techniques (even good PRNGs).
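
      One of the simpler evaluation techniques alluded to above is a chi-square test of symbol frequencies. A minimal C sketch, for illustration only: it reads bytes from stdin and compares their distribution against a uniform one, and for genuinely uniform data the statistic should come out near 255 (the degrees of freedom for 256 bins).

      /* Minimal sketch: chi-square test of byte frequencies against a
       * uniform distribution.  Reads bytes from stdin; for uniform data
       * the statistic should hover around 255. */
      #include <stdio.h>

      int main(void)
      {
          unsigned long count[256] = {0};
          unsigned long total = 0;
          int c;

          while ((c = getchar()) != EOF) {
              count[c]++;
              total++;
          }
          if (total == 0)
              return 1;

          double expected = (double)total / 256.0;
          double chi2 = 0.0;
          for (int i = 0; i < 256; i++) {
              double d = (double)count[i] - expected;
              chi2 += d * d / expected;
          }
          printf("%lu bytes, chi-square = %.2f (expect ~255 for uniform data)\n",
                 total, chi2);
          return 0;
      }

      Piping /dev/urandom through it versus, say, a log file makes the difference obvious; of course, passing a frequency test says nothing about predictability, which is the crux of the ISN problem.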
    • by thomasj ( 36355 ) on Wednesday September 11, 2002 @09:40AM (#4236602) Homepage
      Let's face it: current computers and humans are both as bad as each other at randomness. The fact that computers have to "calculate" randomness is a bad sign in itself [...]
      The funny thing is that it is really easy to construct a hardware randomness device. A zener diode generates a lot of white noise just below its breakdown point, so a circuit like this will do the trick:
      12V
      |
      R1
      |
      +-Z-/
      |
      R2
      |
      +-C1-/
      |
      C2
      |
      +-R3-/
      |
      SchmittTrigger-/
      |
      Out
      For some reasonable values of the resistors and capacitors this would give a constant flow of ones and zeros that comes right out of thin air (funnily enough, literally speaking) with more entropy than we will ever need.

      Cost: less than one dollar.

      • Absolutely. I'm sure there are numerous other ways of utilising the properties of "hardware" to generate something far more random than a programming algorithm could ever achieve. And this is the paradox: why, when it is so straightforward (and cheap) to get true randomness from the unstable, analogue properties of simple electronic devices, do they not feature more commonly as a basic mobo component (whither the random number generator DIMM module?), in the way that, for example, there's *always* a system clock (or at least a timer) available? Instead, more effort has been invested in trying to emulate randomness with increasingly complex software-based algorithms that can never be really random, precisely because they are programs.
        • Well, there are sources for this stuff on many mobos. The randomness is confined to the last couple of bits, so you'd have to take several of these together to get anything useful.

          The sources I'm referring to are the CPU and ambient temperature sensors and the fan RPM sensors. Now, once a system has been running for a while these numbers will tend to stabilize around a specific value for a given system and configuration (your fan speed and CPU temp shouldn't be fluctuating wildly!), but the last couple of bits ought to fluctuate some. (Depending on specific hardware and the driver that reads it.)
        • Intel's Random Number Generator [intel.com] is ranked quite high in a Google search for "hardware random number generator". They even have a FAQ [intel.com].

          The link to the Trusted Computing Platform Alliance on the left is ominous. When Intel speaks of "building block" do they have Palladium in mind?

      • The main problem is that this may not be as random as you might think. Many of these "random" fluctuations are actually fairly non-random, relating to electromagnetic fields around the circuit. So what may seem random one moment can become very non-random the next as the conditions around the circuit change. That being said, these kinds of circuits could possibly serve as seeds for a random number generator. However, I'm unsure whether it would be better to have a regular, dependable seed device such as a clock, or to have a semi-random, unreliable device such as the circuit you have proposed.
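
        One standard trick for cleaning up a biased hardware bit stream (not mentioned by the poster, so treat this as an aside) is von Neumann debiasing: take bits in pairs, output 0 for a 01 pair, 1 for a 10 pair, and discard 00 and 11. It removes a constant bias but not correlation between successive bits, so it doesn't rescue a source whose "randomness" is really tracking the surrounding EM environment. A rough C sketch, assuming the raw bits simply arrive as bytes on stdin:

        /* Rough sketch of von Neumann debiasing: consume raw bits in pairs,
         * keep 01 -> 0 and 10 -> 1, discard 00 and 11.  Removes constant
         * bias as long as successive bits are independent.  Raw bits are
         * read as bytes from stdin; debiased bits are packed into bytes on
         * stdout. */
        #include <stdio.h>

        int main(void)
        {
            int c, out = 0, nout = 0;
            int prev = -1;              /* first bit of the current pair, or -1 */

            while ((c = getchar()) != EOF) {
                for (int i = 7; i >= 0; i--) {
                    int bit = (c >> i) & 1;
                    if (prev < 0) {
                        prev = bit;              /* first half of a pair */
                    } else {
                        if (prev != bit) {       /* 01 -> 0, 10 -> 1 */
                            out = (out << 1) | prev;
                            if (++nout == 8) {
                                putchar(out);
                                out = nout = 0;
                            }
                        }
                        prev = -1;               /* pair consumed either way */
                    }
                }
            }
            return 0;
        }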
        • An interesting question is: is there such a thing as true randomness? How would you define it?

          Most people consider throwing a die and reading the number on top a random value. Is it? In theory, if you could measure the mass, stability, material etc. of the die, the force with which it was thrown, the properties of the table it is going to land on, and the circumstances (such as wind etc.), you could calculate the result in advance. Similarly, if you could observe these electromagnetic fields around this circuit, you could predict the output. It only requires good sensors and processing power.

          Now, next time go and measure the lottery balls and do some calculations before you select your numbers :-)
          • An interesting question is: is there such a thing as true randomness? How would you define it?

            Read up on "entropy" for an introduction.

            A sequence is random if, no matter what you do, you can't predict it with more than the randomly-expected accuracy.

            Most people consider throwing a die and reading the number on top a random value. Is it? In theory, if you could measure the mass, stability, material etc. of the die, the force with which it was thrown, the properties of the table [...]

            Some processes are truly random. Various quantum effects (as far as we can tell). Things like thermal noise in circuits (which is what most electronic random-noise generators amplify). You correctly point out that it's hard to cancel all non-random inputs to a system that measures a truly random variable, but you can get very close (and have a value that approaches truly random probabilities within known tolerances).

            In practice, even (good) pseudo-random sequence generators are close enough for most practical purposes.
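
            "Read up on entropy" can be made concrete with a few lines of code. The sketch below estimates the Shannon entropy of a byte stream in bits per byte (8.0 is the ceiling for uniformly random bytes). It only looks at symbol frequencies, so a perfectly predictable but well-distributed sequence will still score high; treat it as a sanity check, not a proof of randomness.

            /* Sketch: estimate Shannon entropy of stdin in bits per byte.
             * Uniformly random data approaches 8.0; English text is usually
             * well below 5.  Frequency-based only, so it cannot detect
             * sequences that are predictable but evenly distributed. */
            #include <stdio.h>
            #include <math.h>

            int main(void)
            {
                unsigned long count[256] = {0};
                unsigned long total = 0;
                int c;

                while ((c = getchar()) != EOF) {
                    count[c]++;
                    total++;
                }
                if (total == 0)
                    return 1;

                double h = 0.0;
                for (int i = 0; i < 256; i++) {
                    if (count[i] == 0)
                        continue;
                    double p = (double)count[i] / (double)total;
                    h -= p * log2(p);
                }
                printf("%lu bytes, ~%.3f bits of entropy per byte\n", total, h);
                return 0;
            }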
    • There is a problem with using true random numbers for ISNs: the new ISN for a TCP connection (srcip/dstip/srcport/dstport) should not be in the range of the window (?) of an earlier instance of the same connection quadruple. Why? If one of the endpoints gets rebooted and loses state while the connection is open to the other end, then it is important that the other end is able to recognize that the new SYN packet is a truly new connection (and so the old connection should be destroyed). Otherwise, the new SYN looks like a duplicate of the original SYN which has spent a long time wandering around the network. This is the reason that the ISN calculation was defined to use a clock in the original RFC.

      You may think that having a duplicate quadruple is unlikely, but that isn't true. The most common quadruples are: your IP, a local port just a bit bigger than 1024, your HTTP proxy server's IP, port 80.

      Using a random local port also helps, though I don't know of systems that do that for TCP.

    • Let's face it: current computers and humans are both as bad as each other at randomness.

      Actually, computers can be quite good at randomness. You know about Linux's /dev/random, right? It basically uses a very precise clock to measure the elapsed time between system interrupts, and uses the least significant bits. Since these interrupts are generated by events external to the computer (mouse movement, network events, etc.), the distribution is truly random.

      I'm waiting for the day when the national lottery comes up 1,2,3,4,5 with a bonus ball of 6.

      Why would that number combination be a problem? It's just as likely to occur as any other number set. In fact, if you are trying to pick a winning number, this would be a wise choice, since you are less likely to have to share the jackpot with someone else should you win (because most people believe that an obvious pattern like that is less likely to occur, and will avoid picking such sets).
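
      For reference, the /dev/random interface mentioned above is just a character device, so sampling it takes only a few lines of C. This is Linux-specific, and note that /dev/random can block until the kernel has gathered enough interrupt-timing entropy, which is exactly the headless-server concern raised in the reply below (/dev/urandom is the non-blocking variant):

      /* Tiny sketch: read 16 bytes from the kernel entropy pool via
       * /dev/random and print them in hex.  /dev/random may block until
       * the kernel has gathered enough interrupt-timing entropy. */
      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>

      int main(void)
      {
          unsigned char buf[16];
          int fd = open("/dev/random", O_RDONLY);
          if (fd < 0) {
              perror("open /dev/random");
              return 1;
          }

          ssize_t got = 0;
          while (got < (ssize_t)sizeof buf) {
              ssize_t n = read(fd, buf + got, sizeof buf - got);
              if (n <= 0) {
                  perror("read");
                  close(fd);
                  return 1;
              }
              got += n;
          }
          close(fd);

          for (int i = 0; i < (int)sizeof buf; i++)
              printf("%02x", buf[i]);
          printf("\n");
          return 0;
      }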
      • Since these interrupts are generated by events external to the computer (mouse movement, network events, etc.), the distribution is truly random.

        Actually there was some discussion on the kernel mailing list recently about this. On, say, a rackmounted server you don't have mouse and keyboard interrupts; the only source of entropy is the timing of network events -- which in theory can be controlled by an outside entity (some other machine on the network). This leads to a theoretical non-randomness of /dev/random if an attacker is carefully controlling the timing of network packets.

        There was a patch offered for this, but the side effects (taking much longer to build the entropy pool), weighed against a practical assessment of the risk, were deemed not worth it.
    • I'm waiting for the day when the national lottery comes up 1,2,3,4,5 with a bonus ball of 6.

      Well, since the odds are only 1 in a million (literally) that it will ever happen, I wouldn't hold my breath.
      • ...while the odds for any other combination being picked are just the same...

        The German lottery works with 6 picks from 49 possible, and the odds against any particular combination being drawn are roughly 1 in 14 million (49 choose 6 = 13,983,816).
    • I'm waiting for the day when the national lottery comes up 1,2,3,4,5 with a bonus ball of 6. Society will crumble, public enquiries will be called for and conspiracy theorists will have something to bang on about for years

      Maybe that's because the national lottery draws six balls plus the bonus ball....
      Still, it would stop anyone winning the jackpot.
    • Actually, getting true randomness isn't necessary. All you need is unpredictability and unrepeatability.

  • Is it Microware's OS-9 [radisys.com], or Apple Mac OS 9?
  • Lessons in RNG (Score:2, Insightful)

    by Anonymous Coward
    Posting anonymously because I'm not a whore.

    Given that the server is slashdotted, here are a few facts about pseudo-random number generators:

    Linear Congruential Generators are infamous for certain weaknesses, most notably that n-tuples fall "mainly on the planes": they lie on hyperplanes in higher dimensional space, depending on the additive and multiplicative parameters chosen.

    This doesn't make them any worse for cryptographic purposes, though, because even if you choose parameters that avoid the worst of it, once the generator parameters are determined and a seed is found, the sequence is completely deterministic.

    But, all is not lost. Modern generators often use shuffling techniques, where you keep track of a few dozen numbers at a time, and then pick one number to determine which of the pool to select, and a second number to replace that selected number. Even a poor LCG, when accompanied by such a shuffling technique, can perform well. Well, not a really poor one -- IIRC RANDU had problems that shuffling would not fix. I believe the GNU lrand48 and friends use this shuffling technique, as does CMUCL. I suppose this can be even better if you populate the initial pool of numbers from outside the pseudo-random sequence, so that a potential attacker has almost no shot at figuring out what your seeds are, but to scientists who aren't worried about cryptographic purposes, that is counter-productive. I believe that there are some generators that have been proven 'non-invertible' -- you cannot go backwards in the sequence except by performing a brute-force search. Whether or not TCP geeks use these is beyond my knowledge.

    But, all is still not safe. You have to be careful about how you change your random number into a usable number. Often people use the high-order bits (e.g., they multiply by some number and then round off). This can be a mistake (of course depending on what your generator really is, and what your purposes are).
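
    A rough sketch of the shuffling technique described above (usually credited to Bays and Durham), wrapped around a small 32-bit LCG. The constants are the familiar "quick and dirty" pair from Numerical Recipes, chosen purely for illustration; the point is the table mechanics, not the particular generator:

    /* Sketch: a 32-bit linear congruential generator wrapped in a
     * Bays-Durham shuffle.  A table of recent outputs is kept; each call
     * uses the previous output to pick a table slot, returns what was in
     * it, and refills the slot with a fresh LCG value.  This scrambles
     * the serial correlations of the bare LCG but is still NOT suitable
     * for cryptographic use (e.g. ISN generation). */
    #include <stdio.h>
    #include <stdint.h>

    #define TABLE_SIZE 32

    static uint32_t lcg_state;
    static uint32_t table[TABLE_SIZE];
    static uint32_t last_output;

    /* Bare LCG: constants from Numerical Recipes' "quick and dirty" generator. */
    static uint32_t lcg_next(void)
    {
        lcg_state = lcg_state * 1664525u + 1013904223u;
        return lcg_state;
    }

    void shuffled_seed(uint32_t seed)
    {
        lcg_state = seed;
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = lcg_next();
        last_output = lcg_next();
    }

    uint32_t shuffled_next(void)
    {
        int slot = last_output % TABLE_SIZE;  /* previous output picks the slot */
        last_output = table[slot];            /* hand out what was stored there */
        table[slot] = lcg_next();             /* refill the slot from the LCG   */
        return last_output;
    }

    int main(void)
    {
        shuffled_seed(12345u);
        for (int i = 0; i < 8; i++)
            printf("%08x\n", shuffled_next());
        return 0;
    }

    Even shuffled, this stays deterministic once the parameters and table contents are known, which is the poster's point about cryptographic use.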



    • Given that the server is slashdotted, here are a few facts about pseudo-random number generators:


      Interesting, but off-topic.

      The TCP standard forbids using random numbers as the initial sequence number. If you use random numbers, you cannot guarantee that the sequence numbers for one (dest_ip, dest_port, source_ip, source_port) tuple are monotonically increasing.
      That monotonic increase, which should be faster than the network transfer rate, is needed to reduce the probability of data corruption from stale packets.

      The solution is one-way hash functions, as described in RFC 1948 [faqs.org]
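
      RFC 1948's scheme amounts to ISN = M + F(source address, source port, destination address, destination port, secret), where M is the old 4-microsecond clock from RFC 793 and F is a cryptographic hash (the RFC suggests MD5). A toy C sketch of that shape, with a simple non-cryptographic mix standing in for the real hash, so it shows the structure only and is not usable as-is:

      /* Toy sketch of RFC 1948-style ISN generation:
       *     ISN = M + F(src_ip, src_port, dst_ip, dst_port, secret)
       * where M is the classic 4-microsecond clock and F is a one-way hash.
       * A real stack would use MD5 (as RFC 1948 suggests) or another
       * cryptographic hash; the FNV-1a mix below is just a stand-in. */
      #include <stdio.h>
      #include <stdint.h>
      #include <sys/time.h>

      /* Stand-in for a cryptographic hash (FNV-1a).  NOT secure. */
      static uint32_t mix(const void *data, size_t len, uint32_t h)
      {
          const unsigned char *p = data;
          for (size_t i = 0; i < len; i++) {
              h ^= p[i];
              h *= 16777619u;
          }
          return h;
      }

      uint32_t choose_isn(uint32_t src_ip, uint16_t src_port,
                          uint32_t dst_ip, uint16_t dst_port,
                          const unsigned char *secret, size_t secret_len)
      {
          /* M: a counter that ticks every 4 microseconds (RFC 793). */
          struct timeval tv;
          gettimeofday(&tv, NULL);
          uint32_t m = (uint32_t)tv.tv_sec * 250000u + (uint32_t)tv.tv_usec / 4u;

          /* F: per-connection offset from a keyed one-way function. */
          uint32_t h = 2166136261u;                /* FNV offset basis */
          h = mix(&src_ip, sizeof src_ip, h);
          h = mix(&src_port, sizeof src_port, h);
          h = mix(&dst_ip, sizeof dst_ip, h);
          h = mix(&dst_port, sizeof dst_port, h);
          h = mix(secret, secret_len, h);

          return m + h;
      }

      int main(void)
      {
          const unsigned char secret[] = "per-boot random secret";
          uint32_t isn = choose_isn(0x0a000001u, 12345, 0xc0a80001u, 80,
                                    secret, sizeof secret - 1);
          printf("ISN: %08x\n", isn);
          return 0;
      }

      Because the hashed offset is fixed for a given address/port quadruple, each connection's sequence numbers still advance monotonically with the clock (satisfying the stale-segment argument above), while an outsider cannot predict the offset used for someone else's connections.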
  • When I saw it in the Bugtraq mailing list.

    Extremely interesting. I'm probably just uninformed, but this has been one of the first examples I've seen where a 3D rendering has been used to express data in a way that makes any sense to me (I am mathematically challenged).
  • I got through fairly easily, but just in case it gets worse, here's [homelinux.net] a mirror.

    It's just a 133MHz NetBSD box on a home ADSL line, but I figured the more the merrier.

  • Why test NextStep? Because he still uses it? It has not been and will not be upgraded, going on about 5 years now, unless you count Mac OS X as the upgrade (which it is).
    • It is always worth testing some bizarro platforms if only to show how much / little progress has been made since they were more common.

      One of the problems I have with the standard 3D card benchmarks is that they progress too quickly. My VoodooBanshee scored pretty well when it was bought, and I still use it in my 3rd machine, but I have no way of seeing how well it performs against the current crop because the benchmark tools are annual releases, and the scoring changes so much.

      It would be good if these had a popular old system from 1, 2 and 3 years ago to run the same tests on. It would probably result in more sales from us 'don't really know/care' guys, because we'd suddenly know that we are only 22% as good as a new card costing just £150.
  • by ch-chuck ( 9622 ) on Wednesday September 11, 2002 @09:21AM (#4236410) Homepage
    Wouldn't it be cool to have a board with a bit of radioactive alpha source and a counter to make genuine random numbers. Like this [fourmilab.ch], or, ha, here's [std.com] one (3rd from the top) that proposes using disk drive air turbulence to generate random numbers!
  • Is it just me, or was there no Linux graph? Or is that because it was covered in the previous test? Even then, they only tested 2.2...
    • Re:Linux?? (Score:4, Informative)

      by raynet ( 51803 ) on Wednesday September 11, 2002 @10:04AM (#4236845) Homepage

      If you read the article, it says:

      3. New evidence

      In this section, we review a number of operating systems that were either identified as not satisfactory in the original publication, or were not covered by our research at the time. Several systems, such as Linux, use the same, satisfactory ISN generator as the one used a year ago, and because of that, are not covered here in any more detail.
  • These operating systems were all 100% predictable -- why was OpenVMS mentioned as being one of the worst? Frankly, it did poorly for an OS that has always been presented as an example of great security, but I don't think the obsolete VAX platform represents the typical OpenVMS installation anymore. A test of OpenVMS Alpha would have been more useful. It's possible that there's a difference.
  • ...mostly because OpenVMS people tend to think that 'their' OS is the most secure one on this planet (just like OpenBSD developers do, too).

    Compared to standard Unices, OpenVMS might offer superior security, mostly because of the privilege model it uses instead of giving all-powerful root privileges to many user-space applications.

    On the other hand, we've got OSs which have much more sophisticated security than OpenVMS.
    First, there is IBM's AS/400, which has got a privilege model quite similar in extent to the one used in OpenVMS, but additionally it has object-based design, and therefore object-based security (type enforcement and such...). However, it lacks Mandatory Access Control, TCB, Trusted Path and some other things mostly required by military and/or government environments, and therefore it only achieves a C2 security rating.
    And then there are a couple of really secure Trusted Unices/Unix-style OSs, like Trusted Solaris, the Pitbull Addon for Solaris and AIX, Trusted IRIX, or XTS/400.
    Just talking about fine-grained privilege controls: Argus' Pitbull has around 100 privileges; how many privileges are there on an OpenVMS box?

    No OS has ever received an A1 security rating. And the only OS which has ever received a B3 security rating is actually a Trusted Unix Environment, something like a Unix clone with some proprietary security mechanisms built into the kernel (OpenVMS was B1 or maybe B2, IIRC).

    ---

    Regarding secure TCP/IP initial sequence number generation, it does not take a Trusted OS to just generate secure sequence numbers.

    About two months ago, I compared initial sequence number generation on the following OSs using nmap:
    * Windows 95
    * Windows ME
    * Linux 2.2.x
    * Windows 2000 (plain)
    * Windows 2000 (with Norton Internet security installed)
    * OS/2 Warp Server Advanced 4.0 (default install)
    * Sun Solaris 7 x86 (with tcp_strong_iss set to 2)

    The results were pretty interesting and also a bit surprising:
    Windows 95 was worst (ok, that's not surprising ;-), nmap rating ~10
    Then came OS/2, which was not much better, nmap rating ~ 1000
    (BTW: does anyone have nmap results from OS/390 or OS/400?)
    Even Windows ME was a bit better than OS/2, but still far away from being secure, nmap rating ~ 8000
    There was little difference between Win2k with Norton's Firewall (~12000) and Win2k without the Firewall (~15000)
    Linux's results were quite good, nmap rating in the hundreds of thousands or millions
    Solaris with tcp_strong_iss set to 2 seemed to offer really strong sequence number generation, so nmap just printed a lot of 9s

    ---

    Additional information:
    Here [nmap.org] is nmap.
    Here [argus-systems.com] is Argus Systems (EAL4 security for Solaris/AIX)
    Here [ibm.com] is IBM's AS/400
    Here [getronicsgov.com] is Getronics (B3 secure Unix Environment running Unix and Linux applications)
    And finally, here [compaq.com] is OpenVMS
    • ...mostly because OpenVMS people tend to think that 'their' OS is the most secure one on this planet


      Well, some no doubt do. But the bundled TCP/IP stack has been a poor relation for years, and the reaction of typical VMSers to TCP/IP problems is often "well, the IP code was mostly written by UNIX guys, what do you expect?" However, anyone with a clue knows that basic Internet protocol improvements tend to appear first on BSD or Linux and work their way round.


      Anyway TCPIP 5.1 (unpatched?) is hardly the latest, even for VMS. It long predates the initial article for a start; it would be interesting to know how current versions look.

    • Oops, I got something wrong in the original post...

      Windows 2000 WITHOUT Norton's Firewall had an nmap rating of ~ 12000, Windows 2000 WITH the Firewall achieved the better rating of course, approximately 15000.

  • by Anonymous Coward
    What about Linksys, Netgear, SMC, Asante, D-Link and other home routers? How good are their sequence numbers?
    • by Anonymous Coward
      Heh...

      Most of them have constant or +1 ISNs. Some advanced ones have +64k.
      • Agreed, such devices tend to have poor ISNs, but then again, they are for home use, and the ports they serve only respond on the INSIDE. Outbound traffic passes through with more-or-less the same ISN it started with.

        Unless you don't trust people on your home lan, it's not much of an issue. Yes, it should be done right, but the only people that can exploit this are those within your network. If they are in your home, they can do much worse than hijack your session as you configure the router.

        As for outbound traffic, if you connect to an outside website from an inside PC, it uses the ISN that the PC generated; the router either doesn't change it or adds some simple fixed constant, so it still retains all of the entropy of the original PC's ISN. Nobody from the outside should be able to connect to the configuration server in the "DSL router" device. Hence, nobody from the outside really sees the poor entropy of the DSL router's ISNs.

        Only higher-end firewall products, i.e. the Cisco PIX, attempt to mangle the ISN generation as they translate hosts. Most of the simple products do not, and certainly none of the $100 DSL routers do.

        Also, good ISN generation is actually important to more "commercial"-grade routers, since these devices are sometimes deployed and administered remotely, generate tunnels, etc. Thus these routers/firewalls sometimes have exposed ports, or exposed client traffic on a public network as they are being reconfigured.

        Of course, many are only configured locally, or over a local LAN, which makes the risk a lot lower, but then again users on corporate LANs are generally less trusted than those in your own home.

  • by Anonymous Coward
    Not being well versed in statistics and math in general, I was struck by the resemblance of some of these pictures to images that I've seen of far-off galaxies and star clusters. Could it be that we live in a very high-resolution randomness graph from some other universe???
  • "...OpenVMS and IRIX aren't."

    You are overlooking the fact that most OpenVMS installations use third party TCP/IP stacks, generally Multinet [process.com] or TCPware [process.com] from Process Software [process.com] (the CMU stack being largely defunct now), which do not suffer from this defect. This is largely because the initial implementation of DEC's TCP/IP stack, UCX, was buggy as hell and lacked many features, although it is finally starting to catch up.

    Not that it matters much anyway. This predictable ISN weakness only threatens systems configured to trust others based solely upon their IP address (a bad idea). The only ways to crack a properly configured OpenVMS system currently involve (1)physical access to the console, (2) "social hacking" (tricking someone into telling you their password), or (3) packet sniffing for protocols which pass unencrypted passwords such as POP3 and telnet (easily solved by disabling such nonsecure protocols); three vulnerabilities which pose a threat to any OS, no matter how well designed. Nice having an OS which cannot be compromised via buffer overflow exploits (OpenVMS discards data from buffer overflows and raises an exception, always. Overflowing data cannot be executed).

    • OpenVMS cannot identify data which came from buffer overflows, and therefore OpenVMS can also be exploited via buffer overflows -- this can be proved by writing just a few lines of C code.

      The only difference from many other OSs is that applications do not have more privileges than are required to run the application, while on Linux, for example, many applications (like Sendmail or FTP servers) have superuser privileges, and therefore can override Discretionary Access Control.

      I am almost absolutely sure that it is also possible to run arbitrary code by exploiting buffer overflows on OpenVMS. But even if you could not, you can still modify data and pointers -- that's enough to compromise the security of a privileged program.

      There are also Unix operating systems which have a privilege model, and some of them have a much more fine-grained privilege model than OpenVMS, *PLUS* Trusted Computing Base controls, file security flags and many other things.

      So OpenVMS is far from the most secure OS -- personally, I think even OS/400 is more secure, because of its object-based design and its type-enforcement policies.
      • OpenVMS cannot identify data which came from buffer overflows, and therefore OpenVMS can also be exploited via buffer overflows -- this can be proved by writing just a few lines of C code.

        I would love to see such code, as this does not jibe with what I've observed in my own socket coding. When a buffer overflow occurs on a VMS socket_read() call (at least under Multinet), the overflowing data doesn't seem to even get written to memory, let alone passed to DCL (unlike the situation in Unix and Windows where overflowing data gets passed to the shell).

  • Actually it was sent to the full-disclosure mailing list members at least 14 hours before BugTraq members, but that's OK; some people like old news :)
  • a couple of suspended wires makes a good Brownian motion generator, like a nice hot cup of tea... :)
  • I'm seeing tons of people complaining about how badly this site got slashdotted. I also remember it from last time, when it did too. However, after reading a few comments about "slashdotted" solutions, I clicked the link, and here it is...

    I can see why people are trying to mirror it. I remember an article bitching about Squid servers at ISPs, but I'm happy if I can get my stuff.
  • I noticed Tru64 is shown to be insecure... can HPCompaq invent a reason to sue the authors?
  • Film at eleven.

    The fact that they even have to mention that IRIX is insecure just shows how out of touch geekdom as a whole has become. Why even test IRIX for security holes? It's light-years beyond Swiss cheese.

  • The tester seemed to forget about Linux. If he feels that Mac OS 9 or UNICOS are more important to Internet security, then... hmmm
  • I didn't see Linux on there at all. Weird, considering it's a fairly major OS.
  • Sigh.
    Several systems, such as Linux, use the same, satisfactory ISN generator as the one used a year ago, and because of that, are not covered here in any more detail.
    It was tested the first time (a year ago) and was near-perfect, just slightly behind BSD. Since it wasn't likely to get better, it wasn't included in the second round of tests.
  • RFC 1948 (Score:3, Interesting)

    by XNormal ( 8617 ) on Wednesday September 11, 2002 @03:01PM (#4239427) Homepage
    A TCP implementation that generates initial sequence numbers using a trivial time dependency may be secure against sequence number guessing attacks if it implements RFC 1948 [ietf.org].

    The idea is to add a bias to the sequence numbers that depends on the source address. A client will be able to predict his own sequence numbers but not the sequence numbers of others. The bias is calculated using a cryptographic hash of the connection ID and a secret value.

    A TCP implementation that uses RFC 1948 may still get a very poor rating for initial sequence number predictability from tools like nmap.

    Does anyone know any TCP stack that actually implements it?
  • It's a great way of seeing how secure a TCP stack really is.

    Yeah, right.

    Try the following: plot 500 points with point i having x coordinate Fib(i + 37) % 97 and y coordinate Fib(i + 97) % 97 (where Fib(i) is the i-th Fibonacci number). They look random, but in fact are totally predictable!

    Now imagine that someone got this right, and uses a crypto-secure PRNG to generate their TCP ISNs, seeding it with known-good random data. It would be nice to believe that this defeats all known TCP attacks! In fact, of course, their stack may be completely open to all kinds of attacks not involving ISN spoofing.

    The graphics are amusing, but not particularly informative except in the negative case. There is no substitute for real security. Testing can only prove a system insecure. ISN attacks are not the biggest worry in most TCP applications.
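
    If you want to reproduce the poster's Fibonacci example, here is a quick sketch that emits the 500 (x, y) pairs for plotting. The Fibonacci sequence is computed modulo 97 as it goes, so nothing overflows, and Fib(1) = Fib(2) = 1 is assumed since the poster doesn't specify an indexing convention:

    /* Sketch: generate the 500 points described above,
     *     x_i = Fib(i + 37) mod 97,   y_i = Fib(i + 97) mod 97,
     * computing the Fibonacci sequence modulo 97 as we go so the numbers
     * stay tiny.  Output is "x y" per line, suitable for gnuplot.
     * Convention here: Fib(1) = Fib(2) = 1. */
    #include <stdio.h>

    #define MOD 97
    #define MAX_INDEX (500 + 97)      /* largest Fibonacci index needed */

    int main(void)
    {
        int fib[MAX_INDEX + 1];
        fib[1] = 1;
        fib[2] = 1;
        for (int n = 3; n <= MAX_INDEX; n++)
            fib[n] = (fib[n - 1] + fib[n - 2]) % MOD;

        for (int i = 1; i <= 500; i++)
            printf("%d %d\n", fib[i + 37], fib[i + 97]);

        return 0;
    }

    Every run produces exactly the same 500 points, of course; that determinism, not how scattered the plot looks, is the poster's point.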

  • Any idea what software was used to make those graphs?
    It looks like a neat tool for visualizing sets of numbers.
    It reminds me of this awesome applet that shows frequency of numbers used on the net: numbers [turbulence.org]
  • Here is a case of "graphing randomness":
    http://www.cs.wisc.edu/~kovar/hall.html [wisc.edu]
  • nmap 3.0.0 supports a new feature called idlescan [insecure.org], which, from the nmap-hackers posts I've gathered, scans ports indirectly without even touching the target machine. A "zombie" machine with weak sequence numbers is used to proxy the scan. Interesting stuff. Those with nmap can try it out:
    nmap -sI zombiehost victimhost

"If it ain't broke, don't fix it." - Bert Lantz

Working...