Time's Up: 2^30 Seconds Since 1970

An anonymous reader writes: "In Software glitch brings Y2K deja vu, CNET points out a small wave of Y2K-like bugs may soon hit, though it gets the explanation wrong. It will soon be about 2^30 (roughly 1 billion, not 2 billion) seconds since 1970 (do the arithmetic). Systems that keep only 30 bits of a word for unsigned integers (or 31 bits for signed ones), and store time as seconds since 1970 in such a field, may roll back to 1970. (Many systems that do not need full 32 bit integers may reserve some bits for other uses, such as boolean flags, or for type information to distinguish integers from booleans and pointers.)"
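To see the failure mode concretely, here is a minimal C sketch (the struct and values are illustrative assumptions, not taken from any particular system) of a seconds-since-1970 counter that only gets 30 bits of a 32-bit word because two bits are reserved for tags:

    #include <stdio.h>

    /* Hypothetical record that spends two bits of a 32-bit word on
       tag/flag bits and keeps only 30 bits for the timestamp. */
    struct tagged_time {
        unsigned int tag     : 2;   /* e.g. boolean flags or type info */
        unsigned int seconds : 30;  /* seconds since 1970-01-01        */
    };

    int main(void) {
        struct tagged_time t = { 0, (1u << 30) - 1 }; /* last second before 2^30 */
        printf("before: %u\n", t.seconds);            /* 1073741823 */
        t.seconds += 1;      /* 2^30 does not fit in 30 bits, so it wraps... */
        printf("after:  %u\n", t.seconds);            /* 0, i.e. back to 1970 */
        return 0;
    }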
  • Yay! (Score:5, Funny)

    by jargoone ( 166102 ) * on Sunday December 21, 2003 @06:59PM (#7782362)
    This is the biggest computer-related time event since Y2K, which begun on January 1, 19100!
    • by siskbc ( 598067 ) on Sunday December 21, 2003 @10:44PM (#7783469) Homepage
      This is the biggest computer-related time event since Y2K, which begun on January 1, 19100!

      In standard /. fashion, I will overlook factual inaccuracies in the interest of pursuing my goal of correcting everyone's grammar. As such, I must tell you that Y2K *began* on January 1, 19100.

  • Some systems... (Score:4, Interesting)

    by NightSpots ( 682462 ) on Sunday December 21, 2003 @07:00PM (#7782367) Homepage
    And which systems are those?

    Any of the common architectures use 29 bits instead of 31?
    • Re:Some systems... (Score:5, Informative)

      by be-fan ( 61476 ) on Sunday December 21, 2003 @07:10PM (#7782444)
In many dynamically typed languages (notably Lisp) some of the bits of an integer are used as 'tag bits' that distinguish integers from pointers from cons cells, etc. Some bits are also sometimes used to help out the GC.

      So maybe a Lisp Machine might have this problem? Of course, Lispers will tell you that they'd always have the sense to use a bignum :)
      • Re:Some systems... (Score:5, Insightful)

        by Anonymous Coward on Sunday December 21, 2003 @07:27PM (#7782545)
        Well, they wouldn't just have the sense to use a bignum - they'd have the sense not to override the default behaviour of the damn language, which would be to go to bignum if necessary. It would take effort to write a declaration to actually deliberately override the behaviour, and would be A Seriously Stupid Thing To Do. Doesn't mean that somebody, somewhere wouldn't do it, of course, but it wouldn't be the "common case" that there would be a problem waiting to happen, like in C.

        • Re:Some systems... (Score:5, Informative)

          by __past__ ( 542467 ) on Sunday December 21, 2003 @07:59PM (#7782739)
          they'd have the sense not to override the default behaviour of the damn language, which would be to go to bignum if necessary. It would take effort to write a declaration to actually deliberately override the behaviour, and would be A Seriously Stupid Thing To Do. Doesn't mean that somebody, somewhere wouldn't do it, of course
Indeed, someone did, sort of. Namely the implementors of the SBCL [sourceforge.net] compiler (and they probably inherited it from CMUCL [cons.org]), who definitely do not qualify as stupid.
          "... and of course, CL transparently uses bignums when a numeric quantity exceeds the range of machine words, so we don't get overflow problems"
          * (decode-universal-time (+ (* 86400 365 50) (get-universal-time)))
          debugger invoked on condition of type TYPE-ERROR:
          The value 2635786389 is not of type (SIGNED-BYTE 32).

          This is because I didn't specify a timezone, so it asks unix for the default timezone and DST settings, and unix needs a time_t, which is 32 bits on this box.
          Dan Barlow, SBCL and the Y2038 problem [gmane.org]

          So even if Lisp tends to not have overflow problems, Unix and C will come back and bite you if you give them a chance...
      • Re:Some systems... (Score:5, Interesting)

        by Piquan ( 49943 ) on Monday December 22, 2003 @01:40AM (#7784210)

        So maybe a Lisp Machine might have this problem? Of course, Lispers will tell you that they'd always have the sense to use a bignum :)

        The Symbolics Lispms had wider words than PCs today. They used 36-bit words on the 3600s, with 4 bits of tag and 32 bits of data for numbers (or 8 bits of tag and 28 bits of data for pointers). They used 40-bit words on the Ivory, with 8 bits of tag and 32 bits of data for all types. So either way, the number is a 32-bit value. (This is why Lispms traditionally spec RAM in megawords, not megabytes.)

        That aside, like I mentioned in my other post, they said that all the date code is bignum-friendly anyway.

  • OH NO! (Score:5, Funny)

    by elite lamer ( 533654 ) <(moc.liamtoh) (ta) (kciwsyevrah)> on Sunday December 21, 2003 @07:01PM (#7782371) Homepage Journal
    SOCIETY AS WE KNOW IT WILL COLLAPSE!!!! I have to get bottled water and batteries ready! This will be a complete disaster--just like Y2K!
  • by digital bath ( 650895 ) on Sunday December 21, 2003 @07:01PM (#7782372) Homepage
    y2.003k?

    ...Run for the hills!
  • by twoslice ( 457793 ) on Sunday December 21, 2003 @07:02PM (#7782377)
    My two-bit computer ran out of time the moment it was turned on...
  • by Anonymous Coward on Sunday December 21, 2003 @07:03PM (#7782384)
With some of the fashions today (bell bottoms, et al.)
  • yawn (Score:5, Funny)

    by Anonymous Coward on Sunday December 21, 2003 @07:03PM (#7782386)
This has been a problem since 1970. Is it news that CNET realizes it?
  • RTFA (Score:3, Informative)

    by g-to-the-o-to-the-g ( 705721 ) on Sunday December 21, 2003 @07:03PM (#7782387) Homepage Journal
The bug affects older Unix systems. So if you're still using UnixWare, you may be in trouble.
    • Re:RTFA (Score:5, Informative)

      by cbiffle ( 211614 ) on Sunday December 21, 2003 @07:07PM (#7782421)
      Okay, I read TFA. Wrong.

      The article specifically states that Unices use unsigned 32-bit values to store the number of seconds since 1970. Unfortunately, it's wrong even in that respect, since most Unices have been using larger timevals for some time now.

      It's fun to bash SCO and all, but come on.
      • Re:RTFA (Score:5, Informative)

        by oGMo ( 379 ) on Sunday December 21, 2003 @07:28PM (#7782554)
        The article specifically states that Unices use unsigned 32-bit values to store the number of seconds since 1970. Unfortunately, it's wrong even in that respect, since most Unices have been using larger timevals for some time now.

Actually, it's wrong in that POSIX states this value is signed, which is what causes it to be a problem we have to worry about before the next century. (If time_t were unsigned, various functions, such as time(2), could not return an error code. A similar deal happened with other types, such as size_t, which led to the 2GB file problem for a while.)
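That signedness convention is visible in how plain C has to call time(2); a minimal sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t = time(NULL);
        if (t == (time_t)-1) {  /* why time_t must be signed: -1 is the error value */
            perror("time");
            return 1;
        }
        printf("%ld seconds since the epoch\n", (long)t);
        return 0;
    }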

      • Re:RTFA (Score:3, Informative)

        by drinkypoo ( 153816 )
        Also, UnixWare is a relatively new Unix featuring many newer concepts missing in SCO Unix, the actual product which geeks love to hate.
    • Re:RTFA (Score:5, Funny)

      by Anonymous Coward on Sunday December 21, 2003 @07:11PM (#7782451)
So if you're still using UnixWare, you may be in trouble.

      So that means Linux is affected also, since it's mostly copied from UnixWare, right?
  • I could of course be wrong but I'm pretty sure there aren't 31-bit architectures. At least, these architectures are exceedingly rare if they do indeed exist.

What I believe this article is referring to is that some software may have been coded to use a few bits of its integers to store extra info. This seems like a pretty bad idea, though, as it would have all sorts of interesting effects on overflow and such. It would seem useful to only a very tiny portion of software, since the overhead of using this method as a general-purpose solution would be prohibitive.

    Sounds like it's just the story of yet another software bug...
    • in mainframes and other "big iron" in finance shops.

    • by cbiffle ( 211614 ) on Sunday December 21, 2003 @07:15PM (#7782475)
      Chances are pretty good that you interact with 31-bit machines every day -- namely, older (pre-64-bit) IBM mainframes. Even the new zSeries machines frequently run apps in 31-bit mode for compatibility with older systems.

      Using a couple of bits in an integer for data type is usually (in my experience) called 'tagged data.' I use it in Smalltalk VMs as an optimization -- the "objects" representing Integers are really just 31-bit integers with an extra zero-bit stuck on the LSB. (Object pointers have an LSB of 1, so you mask that to zero before using them and keep everything 16-bit aligned.)

      Essentially what you wind up with there is a tradeoff: you can perform simple arithmetic and logic on the Integer "references" without actually having to allocate an object to hold an Integer, but you lose a bit of dynamic range. In my experience, it's an acceptable tradeoff, and it lets you have all the advantages of a true OO system without the performance penalty of having to use an object for, say, every loop variable.

      So there's an example of why you do that. The aforementioned Smalltalk systems wouldn't be vulnerable to this date issue, however, as their integers will automatically convert themselves to arbitrary-precision numeric types as needed.
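For the curious, here is a minimal C sketch of that kind of tagging (helper names are invented; it assumes an arithmetic right shift for signed values, which mainstream compilers provide):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Smalltalk-style tagging: a 32-bit slot whose LSB is 0 holds a
       31-bit integer; an LSB of 1 would mark a (2-byte-aligned) pointer. */
    typedef uint32_t oop;  /* "ordinary object pointer" */

    static oop     make_int(int32_t n) { return (uint32_t)n << 1; } /* LSB 0 = integer */
    static int32_t int_value(oop o)    { return (int32_t)o >> 1; }  /* arithmetic shift */
    static int     is_int(oop o)       { return (o & 1u) == 0; }

    int main(void) {
        oop a = make_int(1000), b = make_int(-7);
        oop sum = a + b;   /* arithmetic works directly on the tagged words */
        assert(is_int(sum));
        printf("%d\n", int_value(sum)); /* 993 */
        return 0;
    }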
      • by Jamie Zawinski ( 775 ) <jwz@jwz.org> on Sunday December 21, 2003 @09:06PM (#7783027) Homepage

One of the fun tricks you can do is use the bottom 2 bits for tagged data type, and then reserve two of those for immediate integers: one for even ints, and one for odd. That way, you get 3 tag types, with 30 bits of pointer, but you still get 31 bit integers instead of 30 bit. So the tags might look like:

        • 00: odd int
        • 01: even int
        • 10: pointer to object header
        • 11: pointer to array/string header

        Then you convert raw data to an int with >>1, and to a pointer with &~3 (you only need 30 bits of pointer if all objects are word-aligned in a 32 bit address space.)

        Lucid Common Lisp used this kind of system, and Lucid Emacs/XEmacs do something similar.
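A rough C rendering of a two-bit-tag scheme like this (the tag assignment here is one self-consistent variant, with a low bit of 0 marking the two integer tags, so the >>1 and &~3 decoding above works out; it assumes 4-byte-aligned objects and an arithmetic right shift):

    #include <stdint.h>
    #include <stdio.h>

    static int      is_int(uint32_t v)    { return (v & 1u) == 0; }     /* tags x0 */
    static int32_t  int_value(uint32_t v) { return (int32_t)v >> 1; }   /* 31-bit int */
    static void    *ptr_value(uint32_t v) { return (void *)(uintptr_t)(v & ~3u); }
    static uint32_t make_int(int32_t n)   { return (uint32_t)n << 1; }

    int main(void) {
        uint32_t w = make_int(-42);
        if (is_int(w))
            printf("int: %d  (tag bits: %u)\n", int_value(w), w & 3u);
        printf("ptr: %p\n", ptr_value(0x2000u | 3u)); /* tag stripped: 0x2000 */
        return 0;
    }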

      • I have one! (Score:5, Funny)

        by soloport ( 312487 ) on Sunday December 21, 2003 @09:21PM (#7783092) Homepage
        I shorted A31 to ground with a screwdriver on my Motorola MC68060 board. It blew a pullup resistor on an open collector output driver. Now A31 is always low -- and I'm too lazy to replace the tiny little 100 ohm surface mount. It runs just fine as long as I don't address high memory.

        I just want to know: Does that count?
    • by Anonymous Coward on Sunday December 21, 2003 @07:17PM (#7782490)
Linux 2.0.x and 2.2.x use 31-bit (signed 32-bit) time_t values.
    • by Tom7 ( 102298 ) on Sunday December 21, 2003 @07:19PM (#7782500) Homepage Journal
It's not uncommon to use some extra bits for tags in implementations of some high-level languages. For instance, in SML/NJ the 'int' type is 31 bits long and signed; all integers are represented shifted up one bit and with a 1 in the new ones place. This is to distinguish them from pointers, which (since they are aligned) always end in two 0 bits. The arithmetic primops account for this extra bit, usually with very little overhead since the instructions are simple and can be paired. (Other SML compilers do it in different, sometimes better ways.) Anyway, fortunately they are not dumb enough to use 'int' to represent time, so there's no problem there! I expect there are Lisp implementations that do similar things.
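A tiny C sketch of that 2n+1 representation (names invented; assumes an arithmetic right shift), including a tag-compensating addition primop:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* SML/NJ-style: int n is stored as 2n+1, so its low bit is 1, while
       word-aligned pointers end in two 0 bits. */
    static uint32_t box(int32_t n)    { return ((uint32_t)n << 1) | 1u; }
    static int32_t  unbox(uint32_t w) { return (int32_t)w >> 1; }
    static uint32_t boxed_add(uint32_t a, uint32_t b) { return a + b - 1u; } /* (2m+1)+(2n+1)-1 = 2(m+n)+1 */

    int main(void) {
        assert(unbox(boxed_add(box(20), box(11))) == 31);
        printf("%d\n", unbox(boxed_add(box(20), box(11)))); /* 31 */
        return 0;
    }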
    • by boa ( 96754 ) on Sunday December 21, 2003 @07:26PM (#7782538)
      > I could of course be wrong but I'm pretty sure there aren't 31-bit architectures. At least, these architectures are exceedingly rare if they do indeed exist.

      Of course you're wrong :-)
The IBM OS/390 and z/OS operating systems, which run on most IBM mainframes, are both 31-bit.

  • by Anonymous Coward on Sunday December 21, 2003 @07:05PM (#7782401)
    If 1K = 1024 then Y2K is 2048. We still have a ways to go on that one! :)
  • by Dreadlord ( 671979 ) on Sunday December 21, 2003 @07:05PM (#7782407) Journal
    ...is 2.6 affected by the bug??
  • by dagg ( 153577 ) on Sunday December 21, 2003 @07:08PM (#7782423) Journal
    perl -e 'print "seconds left: ", ((2**30) - time), "\n"'
  • OMG (Score:5, Funny)

    by Kludge ( 13653 ) on Sunday December 21, 2003 @07:08PM (#7782429)
    I was born just before 1970.
    I'm a billion seconds old.

    Holy shit.
  • by Anonymous Coward on Sunday December 21, 2003 @07:08PM (#7782430)
    How many of you programmers are storing your years using 4 digits? Yeah, that's what I thought, all of you. What happens when it's January 1, 10000? Hmmm? Yes, that's right, your software will fail. It will roll back to 0, which wasn't even a year!

    Now, I know what you're thinking. "There's no way someone will be using software I'm writing 8000 years from now." Yeah, and that's what programmers said 30 years ago about the year 2000. Be smart, and play it safe. Use a 5, or better yet, 10 digit year. What's a few bytes?
    • by Mr. Slippery ( 47854 ) <tms&infamous,net> on Sunday December 21, 2003 @07:36PM (#7782614) Homepage
      Be smart, and play it safe. Use a 5, or better yet, 10 digit year. What's a few bytes?
      I wrote the following in the RISKS forum a few years ago [ncl.ac.uk]:
      So maybe I'm an April Fool, but it seems to me that the Y10K issue is worth a little serious thought.

      There are areas of human endeavor in which 8000 years is not an extreme time span. At present, we deal with these long time spans only in modeling things like geological and cosmological events. But it is not unreasonable that within the next century, we may begin to build very high technology systems with mission durations of thousands of years - for example, a system to contain radioactive wastes, or a probe to another star system.

      Y2K issues have raised our consciousness about timer overflows, but it's quite possible that this may fade in succeeding generations. There's no reason not to start setting standards now.

      Perhaps all time counters should be bignums?

    • by Kohath ( 38547 ) on Sunday December 21, 2003 @07:40PM (#7782634)
      I don't know about you, but after 1/1/2000, I went back to using 2 digits.
  • deja vu? (Score:5, Funny)

    by fatgraham ( 307614 ) on Sunday December 21, 2003 @07:09PM (#7782437) Homepage
    IIRC, bugger all went wrong. No nuclear weapons randomly fired off in any direction, no computers melted (well, none of mine)
  • Y2K (Score:5, Funny)

    by KrispyKringle ( 672903 ) on Sunday December 21, 2003 @07:09PM (#7782441)
I remember this. Talk about hype. I stumbled across a preparedness website a year or two later (one like this [qouest.net]) and laughed my ass off. Talk about a throwback to 1999 (notice the animated gifs and scrolling text in the status bar that lend a real air of authority). I think I even e-mailed the writer and asked if he didn't feel stupid now.

    There was no reply, though. His computer probably thought my letter was from a century ago.

  • by lurker412 ( 706164 ) on Sunday December 21, 2003 @07:09PM (#7782442)
The program in question was revised in 1997. Most companies had already kicked off their Y2K programs by then. The popular press was already starting to run end-of-the-world warnings. OK, so it wasn't a Y2K problem as such, but how this company managed to ignore the problem at that time is truly baffling.
  • Did the math. (Score:5, Informative)

    by Yaztromo ( 655250 ) on Sunday December 21, 2003 @07:10PM (#7782446) Homepage Journal

Okay -- I did the math, and 2^29 seconds since January 1st 1970 would have been up on January 5th, 1987.

    2^30 seconds since the epoch puts us into January 10th, 2004.

    Yaz.
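The dates are easy to double-check with gmtime(3); a quick C sketch (it needs a time_t wider than 30 bits, which any 32-bit-or-better system has):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t stamps[] = { (time_t)1 << 29, (time_t)1 << 30 };
        char buf[64];
        for (int i = 0; i < 2; i++) {
            strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&stamps[i]));
            printf("2^%d seconds: %s\n", 29 + i, buf);
        }
        return 0;
    }

    /* prints:
       2^29 seconds: 1987-01-05 18:48:32 UTC
       2^30 seconds: 2004-01-10 13:37:04 UTC */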

    • Re:Did the math. (Score:5, Interesting)

      by happyduckworks ( 683158 ) on Sunday December 21, 2003 @08:31PM (#7782886)
> Okay -- I did the math, and 2^29 seconds since January 1st 1970 would have been up on January 5th, 1987.

      I remember that day - the Common Lisp system I was using (on a Sun) all of a sudden stopped recognizing when files were out of date and needed recompiling. Yup, they used a couple bits for a tag and then interpreted the rest as signed...
  • by scdeimos ( 632778 ) on Sunday December 21, 2003 @07:11PM (#7782454)
Its epoch is midnight 01-Jan-1904 and it uses an unsigned 32-bit integer to count seconds since then. That means it will run out at 06:28:15 06-Feb-2040.

    But, I'm sure Apple will have released a new Newton by then! :P

(I don't suppose anyone's ported the Rosetta handwriting-recognition system to other PDAs, just in case?)
  • Wrong writeup. (Score:5, Interesting)

    by crapulent ( 598941 ) on Sunday December 21, 2003 @07:14PM (#7782470)
    Could you be any MORE confusing? 2^30 is not 1 billion. It's 1,073,741,824. And the date as of right now is:

    $ date +%s
    1072051722


So, yes, there is an issue with the date overflowing a 30 bit space. I'd hardly say it's relevant; any software that made such a braindead choice (why 30 and not 32 bits?) deserves to break. But it has nothing to do with a billion or anything else related to base 10. It hit 1 billion a long time ago, and it was covered then. [slashdot.org]
    • by Skapare ( 16644 ) on Sunday December 21, 2003 @10:07PM (#7783270) Homepage

Actually UNIX is really using an effective 31 bits, because time_t defaults to a signed quantity, so the highest-order bit is really a sign bit. So when the clock finally increments from 0x7FFFFFFF (19 January 2038 03:14:07) to 0x80000000, the time wraps back to 2,147,483,648 seconds before 1970; i.e., instead of being Tuesday 19 January 2038 03:14:08, it suddenly becomes Friday the Thirteenth (specifically Friday 13 December 1901 20:45:52).

      Those systems that are using an unsigned 32 bit time value can go on until Sunday 7 February 2106 06:28:15.

      If we were to switch to 64 bits, we could use a resolution of nanoseconds with all that extra space and still represent time until Friday 11 April 2262 23:47:16.854775807 before the sign bit becomes an issue (and negative values can represent time back to Tuesday 21 September 1677 00:12:43.145224192).
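The signed wrap is easy to simulate; a minimal C sketch (run it on a host with a 64-bit time_t so gmtime can render both dates; the conversion back to int32_t assumes two's complement, which is universal in practice):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static void show(int32_t t32) {
        char buf[64];
        time_t t = t32;  /* widen so gmtime can handle the negative value */
        strftime(buf, sizeof buf, "%a %d %b %Y %H:%M:%S UTC", gmtime(&t));
        printf("%11d -> %s\n", t32, buf);
    }

    int main(void) {
        int32_t last = INT32_MAX;                      /* 0x7FFFFFFF */
        int32_t next = (int32_t)((uint32_t)last + 1u); /* wraps to -2147483648 */
        show(last);  /* Tue 19 Jan 2038 03:14:07 UTC */
        show(next);  /* Fri 13 Dec 1901 20:45:52 UTC */
        return 0;
    }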

  • by teridon ( 139550 ) on Sunday December 21, 2003 @07:14PM (#7782472) Homepage
Time should be stored in a self-describing format, such as:
    a header containing:
    2 bits (E) for the number of bits in the epoch field
    1 bit for whether the time is in a floating-point format
    if not floating point:
    2 bits (N) for the number of bits in the time field
    2 bits (n) for the resolution (1/2^n second; e.g. n=8 would mean 1/256-second resolution)
    if floating point: follow some IEEE standard representation.
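One possible C rendering of that header (field names are invented, and note that a 2-bit n can only encode values up to 3, so the n=8 example above suggests the fields would need to be a bit wider in practice):

    #include <stdint.h>
    #include <stdio.h>

    struct self_describing_time {
        uint8_t epoch_sel : 2;  /* E: selects the width of the epoch field     */
        uint8_t is_float  : 1;  /* 1 = time payload is an IEEE floating format */
        uint8_t width_sel : 2;  /* N: selects the width of the time field      */
        uint8_t res_sel   : 2;  /* n: resolution is 1/2^n second               */
        /* the variable-width epoch and time fields would follow */
    };

    int main(void) {
        struct self_describing_time h = { 1, 0, 3, 2 };
        printf("header fits in %u byte(s)\n", (unsigned)sizeof h);
        return 0;
    }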

What would this solve? First, why would you want to store it as floating point? Floating point numbers incur a loss of precision that you don't want (because once the date gets pretty big, you won't be able to measure small time intervals--try it in C; if you add, say, 4.3 to four billion, you'll get four billion), and in fact instead of rolling over, floating point numbers reach ``infinity''. Second, this still limits the size to whatever your maximum is here (I'm not sure I understand your resolution thing).
  • by Rosco P. Coltrane ( 209368 ) on Sunday December 21, 2003 @07:17PM (#7782491)
I'm bracing for the 2038 Y2K (or is it Y2KATF) bug, the one that'll overflow the Unix time() function.

    You think I'm trying to be funny? Well, let's see: people were worried that systems built in the 80s and before would exhibit the 99 COBOL date bug, and/or the 2-digit date bug in 2000. 1980 and before is 20+ years ago, and there weren't that many computers/microcontrollers around during those 20 years compared to what's to come, and operating systems weren't very unified. Today we have kajillions of Unix machines around: how much do you bet a lot of these will still be running 30 years from now?

    That said, I'm not bracing quite yet, to tell the truth...
  • ObCalculation (Score:5, Interesting)

    by LaCosaNostradamus ( 630659 ) <LaCosaNostradamu ... m minus caffeine> on Sunday December 21, 2003 @07:20PM (#7782505) Journal
2^30 = 1073741824s ~= 34y 9d 13h 37m

    1970JAN01 0000hr + (34y 9d 13h 37m) = 2004JAN10 1337hr UTC

    January 10th should be an interesting day for somebody.
  • by fireman sam ( 662213 ) on Sunday December 21, 2003 @07:22PM (#7782514) Homepage Journal
I've seen some comments along the lines of "hey, another Y2K waste of time... blah blah blah." But think of it this way:

    1 - What if all the money that was spent to "fix" the Y2K bug actually fixed the bug?

    2 - Most people say that all the money spent "fixing" the Y2K bug was a waste because nothing happened.

    3 - How many people have insurance of some sort, and have never needed it? (I have.) Yet every year, you renew your policies.

    There are two things we can do about these "time" bombs: do nothing and hope that all is well, or audit the code that may fail. A bit like paying for insurance.

    [ PS: it is SCO's code, so they should pay ]
    • by lurker412 ( 706164 ) on Sunday December 21, 2003 @07:51PM (#7782691)
      I worked on a large Y2K program for a hospital chain. From what I observed, I can tell you this:

      There were, in fact, many problems that were found and fixed before they did any harm.

      A lot of infrastructure was upgraded on somewhat dubious claims of Y2K problems. In some cases, resetting the system clock once on 1/1/00 would have sufficed.

      Consulting firms and contractors had a feeding frenzy. Some added value, others did not.

Many corporations were frightened by the prospect of lawsuits that might occur if they had Y2K problems. Law firms were licking their chops with anticipation.

      As a result of all of the above, for the only time in recorded history CIOs could get whatever they wanted. Naturally, they played it safe. Wouldn't you?

    • by Kenja ( 541830 ) on Sunday December 21, 2003 @08:18PM (#7782822)
This is true of IT work in general. If you do your job, nothing happens and people think you're wasting time.
  • by Anonymous Coward on Sunday December 21, 2003 @07:29PM (#7782569)
    Seriously, could we please get started fixing this 2038 bug now? I don't know if it's practical to change time_t to "long long"; if not, could we at least officially define the successor to time_t?

I know that the emergence of 64-bit chips will alleviate this somewhat, but it wouldn't surprise me if at least some embedded systems are still running 32-bit in 2038.

    I know that "long long" is common, but it's not part of the official C++ standard yet. Shouldn't we be putting this in the standard now? It's not too much to require language libraries to have 64-bit integer support (if necessary). This doesn't have to be painful.

    I'll feel a lot better the day that I know what I'm officially supposed to use instead of time_t -- or if I can be given a guarantee that time_t will be upgraded to 64 bits within the next few years.
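Meanwhile, it is at least easy to check what any given box does today; a small C probe (this assumes time_t is an integer type, which POSIX systems in practice make it):

    #include <limits.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        unsigned bits = (unsigned)(sizeof(time_t) * CHAR_BIT);
        printf("time_t is %u bits here\n", bits);
        if (bits == 32)
            printf("signed 32-bit time_t runs out 2038-01-19 03:14:07 UTC\n");
        else
            printf("a 64-bit time_t is good for roughly 292 billion years\n");
        return 0;
    }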

    • by mabu ( 178417 ) on Sunday December 21, 2003 @08:43PM (#7782934)
In case you haven't figured it out, we are now a reactive society as opposed to a proactive one. We fix things, or usually replace them, when they break, not before. Americans don't think much about the future beyond what's on television later that day.

      Yes, we could fix the bug now. Likewise, we could also address world hunger, the deficit, the exploding crime problem, terrorism and a host of other issues with such cautious, preventative measures, but doing so wouldn't give us the instant gratification we desire now, so we'll let your children deal with the deficit, crime, terrorism, poverty, hunger and the time bug. We have better things to do. I'd write more, but I think "Friends" is coming on.
  • For Example (Score:3, Funny)

    by Anonymous Coward on Sunday December 21, 2003 @07:31PM (#7782579)
Parametric Technologies has this problem [ptc.com]. Seems they were trying to insert the year 2038 bug into their code, but they messed up and got the year 2004 bug instead.
  • yup... (Score:5, Informative)

    by AntiTuX ( 202333 ) on Sunday December 21, 2003 @07:46PM (#7782660) Homepage
    http://maul.deepsky.com/%7Emerovech/2038.html [deepsky.com]

    antitux@TuX0r:~$ perl 2038.pl
    Tue Jan 19 03:14:01 2038
    Tue Jan 19 03:14:02 2038
    Tue Jan 19 03:14:03 2038
    Tue Jan 19 03:14:04 2038
    Tue Jan 19 03:14:05 2038
    Tue Jan 19 03:14:06 2038
    Tue Jan 19 03:14:07 2038
    Fri Dec 13 20:45:52 1901
    Fri Dec 13 20:45:52 1901
    Fri Dec 13 20:45:52 1901
    antitux@TuX0r:~$

    hrm..
    Looks like we're fucked too.
  • by iggymanz ( 596061 ) on Sunday December 21, 2003 @07:58PM (#7782735)
These might be a problem for many Slashdot readers down the road; I for one plan on likely being dead by then, what with being an old fart already. So here are those "overflow" dates, mm/dd/yyyy U.S.A. format:
    02/07/2036 - systems which use unsigned 32-bit seconds since 01/01/1900 (this is also when NTP time rolls over)
    01/19/2038 - Unix 32 bit time, signed 32 bit seconds (that is to say, 2^31) since 01/01/1970
    02/06/2040 - Older Macintosh
    09/17/2042 - IBM 370 family mainframe time ends, 2^32 "update intervals, a kind of 'long second'" since 01/01/1900
    01/01/2044 - MS DOS clock overflows, 2^6 years since 01/01/1980
    01/01/2046 - Amiga time overflows
    01/01/2100 - many PC BIOS become useless
    11/28/4338 - ANSI 85 COBOL date overflow, 10^6 days since epoch of 01/01/1601

    and my personal favorite,
    07/31/31086 - DEC VMS time overflows
  • by Walabio ( 660956 ) on Sunday December 21, 2003 @08:33PM (#7782895) Homepage

If we use Planck time and 256-bit integers, we can handle about 1.981384141637854E+26 years. We should handle time as a 256-bit integer based on Planck time and convert to local human time standards as needed. We should also support a second 256-bit imaginary integer, and conversion to two floating-point values (one real and one imaginary), because some calculations in physics involving time occur on the complex plane. I propose that zero-time be Julian Date zero.

  • by RouterSlayer ( 229806 ) on Sunday December 21, 2003 @09:06PM (#7783029)
If you want to know what the real values are: this article and the one on CNET are wrong in so many ways... ugh. But here is the real one that will actually affect people:

    FreeBSD 2.2.7 will start having this clock problem on January 18th, 2038 at 22:14 (10:14PM) EST, when the Unix clock on FreeBSD will read 2147483640;
    22:15 (10:15PM) EST will cause FreeBSD's clock timer to claim an invalid date... joy!

    That's not 2^30 folks, that's 2^31 (2147483648), or about 8 seconds after the time I quoted above.

    I know because we still have one box running 2.2.7 here (and what a fun box it is too; it can't handle more than 128 megs of RAM). What is this - the dark ages? That was rhetorical... :)
  • by shadowcabbit ( 466253 ) * <cx.thefurryone@net> on Sunday December 21, 2003 @10:06PM (#7783264) Journal
    A girl once told me she wouldn't go out with me until the end of time.

    Sally Roberts, pucker up.
  • Time is complex... (Score:5, Informative)

    by Goonie ( 8651 ) * <robert,merkel&benambra,org> on Sunday December 21, 2003 @11:05PM (#7783566) Homepage
    Recording times accurately can get very complex in some cases, and longer time_t's aren't the whole solution.

Firstly, every so often a leap second [wikipedia.org] is added to UTC. For this reason, over timescales of years it is impossible to map exactly between Unix time_t values and calendar times.

    Another issue is determining when a transaction that occurred across multiple time zones actually happened...

  • by LouisvilleDebugger ( 414168 ) on Monday December 22, 2003 @12:46AM (#7783949) Journal
    The Y2K preparedness team at my company went crazy over the hype. They set up a big "Y2K Command Center" (commandeered a big teleconferencing room) with PCs full of nothing but Excel spreadsheets with all the functionality metrics for our whole enterprise painstakingly listed. Every ten minutes, all of us in the trenches were supposed to telephone this "command center" so they could update their spreadsheets (yes, web site "foobar" is still responding, yes, this database still works.)

    About 30 minutes before Y2K hit our time zone, I noticed the maintenance guys firing up the big diesel backup generators in our rear parking lot. I asked my boss about it. "Oh yeah," he said, "They're going to take us off the power grid just in case." No big deal to us: we have UPS's on all our PCs, and the power fails over all the time in the always-spectacular Kentucky summer thunderstorm season. (Half of the building's lighting turns off to conserve power, everyone slightly gasps, but keeps working...we're used to it.)

    But not so for the "Y2K Command Center." The "suits" had plugged all their spreadsheet-running PCs straight into the wall, and when we changed over to the generators (on their command) the momentary power drop caused *every single one* of their machines to go down....

    We laughed in their faces openly. If that's not being hoist by one's own petard I don't know what is. It almost made it worth it not to be kissing my sweetie on New Year's Eve.

  • by Entropy_ajb ( 227170 ) on Monday December 22, 2003 @01:17AM (#7784081)
    Maybe this is what the Orange Alert is about.....
Network Associates' (McAfee) Webshield product has already failed the 1,000,000,000-second test. (In decimal - not a power of 2.)

    This SMTP server stores the time to next retry sending a message, but only the last 9 digits. So come September 2001, Webshield would no longer retry sending a mail if the first attempt didn't work, because it concluded it had been about 30 years since it last tried and it should give up about now.

    There is a hot fix available, but this insidious problem only manifests itself if there is a problem at the receiving end, so few people know they should upgrade, and they blame the recipient for mail that bounces immediately. Network Associates still provides the software unpatched - hot fixes are only to be applied if you report the specific problem to be fixed.

    If you use tempfailing [slashdot.org] (greylisting) as I do, then this immediately stuffs up any Webshield user trying to communicate with you, because they will not retry after being given a temporary-failure SMTP error code.

    So if this example is anything to go by, then yeah, there'll be recent, modern commercial software that will fail (perhaps in non-obvious ways), with no fix available until after the event.
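The arithmetic of the reported failure is easy to reconstruct in C (this is a sketch of the described behaviour, not Webshield's actual code; the constants are invented):

    #include <stdio.h>

    int main(void) {
        long now        = 1072051722L;                /* a December 2003 date */
        long next_retry = (now + 900L) % 1000000000L; /* keep only 9 digits   */
        if (next_retry < now) {
            double years = (now - next_retry) / (365.25 * 86400.0);
            printf("retry time looks %.1f years in the past: giving up\n", years);
        }
        return 0;
    }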
