Programming Bug IT Technology

Time's Up: 2^30 Seconds Since 1970

An anonymous reader writes: "In Software glitch brings Y2K deja vu, CNET points out that a small wave of Y2K-like bugs may soon hit, though it gets the explanation wrong. It will soon be about 2^30 (1 billion, not 2 billion) seconds since 1970 (do the arithmetic). Systems that use only 30 bits of a word for unsigned/positive integers, or store time as seconds since 1970 in this format, may roll back to 1970. (Many systems that do not need full 32-bit integers may reserve some bits for other uses, such as boolean flags, or for type information to distinguish integers from booleans and pointers.)"
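As a rough illustration of the failure mode (a hypothetical C sketch, not code from the article or from any affected product): a system that reserves two bits of a 32-bit word for flags and keeps only 30 bits for "seconds since 1970" wraps back to zero at exactly 2^30.

    #include <stdio.h>

    /* Hypothetical record layout: 2 bits of the word spent on flags,
       leaving a 30-bit unsigned timestamp. */
    struct packed_stamp {
        unsigned seconds : 30;   /* wraps to 0 at 2^30 = 1,073,741,824 */
        unsigned deleted : 1;
        unsigned dirty   : 1;
    };

    int main(void)
    {
        struct packed_stamp s = { (1u << 30) - 1, 0, 0 };   /* last second before rollover */
        printf("before: %u\n", (unsigned)s.seconds);
        s.seconds = s.seconds + 1;                          /* silently truncated to 30 bits */
        printf("after:  %u\n", (unsigned)s.seconds);        /* 0 -- i.e. back to 1970 */
        return 0;
    }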
  • RTFA (Score:3, Informative)

    by g-to-the-o-to-the-g ( 705721 ) on Sunday December 21, 2003 @08:03PM (#7782387) Homepage Journal
    The bug affects older Unix systems. So if you're still using UnixWare, you may be in trouble.
  • Re:Some systems... (Score:0, Informative)

    by Anonymous Coward on Sunday December 21, 2003 @08:05PM (#7782408)
    GNU/Linux prior to 2.2 series.
  • Re:RTFA (Score:5, Informative)

    by cbiffle ( 211614 ) on Sunday December 21, 2003 @08:07PM (#7782421)
    Okay, I read TFA. Wrong.

    The article specifically states that Unices use unsigned 32-bit values to store the number of seconds since 1970. Unfortunately, it's wrong even in that respect, since most Unices have been using larger timevals for some time now.

    It's fun to bash SCO and all, but come on.
  • by dagg ( 153577 ) on Sunday December 21, 2003 @08:08PM (#7782423) Journal
    perl -e 'print "seconds left: ", ((2**30) - time), "\n"'
  • Re:subject (Score:3, Informative)

    by after ( 669640 ) on Sunday December 21, 2003 @08:08PM (#7782432) Journal
    If this is a problem, then developers should start making "patches" for the year 2038 [deepsky.com].

    It's interesting how no one considered that this would happen eventually and just started using 64-bit ints to store this for the long run.

    Someday, we will hit a very high year, and these sorts of problems will hit us as well ... all I hope is that my body gets frozen so I can see that year ;)
  • by lurker412 ( 706164 ) on Sunday December 21, 2003 @08:09PM (#7782442)
    The program in question was revised in 1997. Most companies had already kicked off their Y2K programs by then. The popular press was already starting to run end-of-the-world warnings. OK, so it wasn't a Y2K problem as such, but how this company managed to ignore the problem at that time is truly baffling.
  • Re:Some systems... (Score:5, Informative)

    by be-fan ( 61476 ) on Sunday December 21, 2003 @08:10PM (#7782444)
    On many dynamically typed languages (notably Lisp) some of the bits of an integer are used as 'tag bits' that distinguish integers from pointers from cons cells, etc. Some bits are also sometimes used to help out the GC.

    So maybe a Lisp Machine might have this problem? Of course, Lispers will tell you that they'd always have the sense to use a bignum :)
  • by rdunnell ( 313839 ) * on Sunday December 21, 2003 @08:10PM (#7782445)
    in mainframes and other "big iron" in finance shops.

  • Did the math. (Score:5, Informative)

    by Yaztromo ( 655250 ) on Sunday December 21, 2003 @08:10PM (#7782446) Homepage Journal

    Okay -- I did the math, and 2^29 seconds since January 1st 1970 would have been up on January 5th, 1987.

    2^30 seconds since the epoch puts us at January 10th, 2004.

    Yaz.
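    (A quick way to double-check those dates with a few lines of C -- just gmtime() applied to the two raw values, nothing system-specific beyond a working gmtime():)

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t marks[] = { (time_t)1 << 29, (time_t)1 << 30 };   /* 2^29 and 2^30 */
            for (int i = 0; i < 2; i++) {
                char buf[64];
                strftime(buf, sizeof buf, "%a %b %d %H:%M:%S UTC %Y", gmtime(&marks[i]));
                printf("2^%d seconds: %s\n", 29 + i, buf);
            }
            return 0;
        }

        /* Output:
           2^29 seconds: Mon Jan 05 18:48:32 UTC 1987
           2^30 seconds: Sat Jan 10 13:37:04 UTC 2004 */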

  • by Anonymous Coward on Sunday December 21, 2003 @08:13PM (#7782461)
    I saw one minor problem with time()==10^9 where some logging put the time_t in a 9-digit area. That rollover happened in 2001.

    The 2**30 rollover in January strikes me as pretty unlikely to affect much. Are there any commonly used 31-bit archs? (I think IBM s390 is but only for addressing, not data - please correct me if I'm wrong) In 18 years of working on UNIX software I don't think I've ever seen code to "cleverly" re-use the high bits on a time_t variable for something else.

    Oh well, back to waiting for 2038. I should be retired by then, have fun kids!
  • 1 billion != 2^30 (Score:2, Informative)

    by terremoto ( 679350 ) on Sunday December 21, 2003 @08:16PM (#7782484)
    It will soon be about 2^30 (1 billion, not 2 billion) seconds since 1970 (do the arithmetic).

    My arithmetic says that 1 billion = 1,000,000,000 whereas 2^30 is 1,073,741,824.

    The 1 billion rollover happened back in September 2001. The 2^30 rollover is in a few weeks' time.
  • by Anonymous Coward on Sunday December 21, 2003 @08:17PM (#7782490)
    Linux 2.0.x and 2.2.x use 31-bit (i.e., signed 32-bit) time_t values.
  • by Tom7 ( 102298 ) on Sunday December 21, 2003 @08:19PM (#7782500) Homepage Journal
    It's not uncommon to use some extra bits for tags in implementations of some high-level languages. For instance, in SML/NJ the 'int' type is 31 bits long and signed; all integers are represented shifted up one bit and with a 1 in the new ones place. This is to distinguish them from pointers, which (since they are aligned) always end in two 0 bits. The arithmetic primops account for this extra bit, usually with very little overhead since the instructions are simple and can be paired. (Other SML compilers do it in different, sometimes better ways.) Anyway, fortunately they are not dumb enough to use 'int' to represent time, so there's no problem there! I expect there are lisp implementations that do similar things.
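    To make the trick concrete, here is a toy C sketch of that style of tagging (my own simplified encoding, not SML/NJ's actual one): the low bit is 1 for a small integer and 0 for an aligned pointer, which leaves 31 bits of signed integer and caps the positive range at 2^30 - 1.

        #include <stdint.h>
        #include <stdio.h>

        typedef uint32_t word;   /* one machine word holding either a tagged int or a pointer */

        static word    tag_int(int32_t n) { return ((uint32_t)n << 1) | 1u; }
        static int32_t untag_int(word w)  { return (int32_t)w >> 1; }   /* assumes arithmetic shift */
        static int     is_int(word w)     { return w & 1u; }

        int main(void)
        {
            int32_t max_fixnum = (1 << 30) - 1;   /* largest positive value that still fits */
            word w = tag_int(max_fixnum);
            printf("tagged? %d  value back: %d\n", is_int(w), untag_int(w));
            return 0;
        }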
  • Re:subject (Score:1, Informative)

    by Anonymous Coward on Sunday December 21, 2003 @08:22PM (#7782520)
    well, lisp implementations sometimes have funny-sized small-integer word sizes, like 29 or 31 bits. But the way lisp works, it's not an issue, since such small ints are considered just "hardware acceleration" of commonly used numbers - lisps just go to bignums (arbitrary-sized, implemented in software) when you exceed the size possible for a "tagged unboxed" representation. A programmer would have to try REALLY HARD to make a lisp implementation fail in this way, and even then it would be immediately apparent, since he'd have had to make a declaration to override lisp's default behaviour.

  • by boa ( 96754 ) on Sunday December 21, 2003 @08:26PM (#7782538)
    > I could of course be wrong but I'm pretty sure there aren't 31-bit architectures. At least, these architectures are exceedingly rare if they do indeed exist.

    Of course you're wrong :-)
    The IBM OS/390 and Z/OS operating systems, which run on most IBM mainframes, are both 31-bit.

  • Re:RTFA (Score:5, Informative)

    by oGMo ( 379 ) on Sunday December 21, 2003 @08:28PM (#7782554)
    The article specifically states that Unices use unsigned 32-bit values to store the number of seconds since 1970. Unfortunately, it's wrong even in that respect, since most Unices have been using larger timevals for some time now.

    Actually, it's wrong in that POSIX states this value is signed, which is what causes it to be a problem we have to worry about before the next century. (If time_t were unsigned, various functions, such as time(2), could not return an error code. A similar deal happened with other types, such as off_t, which led to the 2GB file problem for a while.)
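    A hedged aside on what that looks like in practice: time(2) reports failure by returning (time_t)-1, so one value of the type is sacrificed for the error code. A minimal check:

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t now = time(NULL);
            if (now == (time_t)-1) {              /* the error value POSIX reserves */
                perror("time");
                return 1;
            }
            printf("seconds since the epoch: %lld\n", (long long)now);
            return 0;
        }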

  • Re:RTFA (Score:3, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday December 21, 2003 @08:31PM (#7782580) Homepage Journal
    Also, UnixWare is a relatively new Unix featuring many newer concepts missing in SCO Unix, the actual product which geeks love to hate.
  • yup... (Score:5, Informative)

    by AntiTuX ( 202333 ) on Sunday December 21, 2003 @08:46PM (#7782660) Homepage
    http://maul.deepsky.com/%7Emerovech/2038.html [deepsky.com]

    antitux@TuX0r:~$ perl 2038.pl
    Tue Jan 19 03:14:01 2038
    Tue Jan 19 03:14:02 2038
    Tue Jan 19 03:14:03 2038
    Tue Jan 19 03:14:04 2038
    Tue Jan 19 03:14:05 2038
    Tue Jan 19 03:14:06 2038
    Tue Jan 19 03:14:07 2038
    Fri Dec 13 20:45:52 1901
    Fri Dec 13 20:45:52 1901
    Fri Dec 13 20:45:52 1901
    antitux@TuX0r:~$

    hrm..
    Looks like we're fucked too.
  • Re:yup... (Score:3, Informative)

    by AntiTuX ( 202333 ) on Sunday December 21, 2003 @08:48PM (#7782678) Homepage
    Oh, and for those who care:

    antitux@TuX0r:~$ uname -a
    Linux TuX0r 2.6.0 #4 Sat Dec 20 18:38:44 MST 2003 i686 unknown unknown GNU/Linux
    antitux@TuX0r:~$
  • by lurker412 ( 706164 ) on Sunday December 21, 2003 @08:51PM (#7782691)
    I worked on a large Y2K program for a hospital chain. From what I observed, I can tell you this:

    There were, in fact, many problems that were found and fixed before they did any harm.

    A lot of infrastructure was upgraded on somewhat dubious claims of Y2K problems. In some cases, resetting the system clock once on 1/1/00 would have sufficed.

    Consulting firms and contractors had a feeding frenzy. Some added value, others did not.

    Many corporations were frightened by the prospect of lawsuits that might occur if they had Y2K problems. Law firms were licking their chops in anticipation.

    As a result of all of the above, for the only time in recorded history CIOs could get whatever they wanted. Naturally, they played it safe. Wouldn't you?

  • by iggymanz ( 596061 ) on Sunday December 21, 2003 @08:58PM (#7782735)
    These might be a problem for many Slashdot readers down the road; I, for one, will likely be dead by then, what with being an old fart already. So here are those "overflow" dates, in mm/dd/yyyy U.S. format:
    02/06/2036 - systems which use unsigned 32-bit seconds since 01/01/1900
    01/01/2037 - NTP time rolls over
    01/19/2038 - Unix 32-bit time, signed 32-bit seconds (that is to say, 2^31) since 01/01/1970
    02/06/2040 - Older Macintosh
    09/17/2042 - IBM 370 family mainframe time ends, 2^32 "update intervals, a kind of 'long second'" since 01/01/1900
    01/01/2044 - MS DOS clock overflows, 2^6 years since 01/01/1980
    01/01/2046 - Amiga time overflows
    01/01/2100 - many PC BIOS become useless
    11/28/4338 - ANSI 85 COBOL date overflow, 10^6 days since epoch of 01/01/1601

    and my personal favorite,
    07/31/31086 - DEC VMS time overflows
  • by Meowing ( 241289 ) on Sunday December 21, 2003 @08:59PM (#7782738) Homepage
    36 bit architectures are common. Of the mainframes I know of none are 31 bits...

    36 bits was fairly common once upon a time, but no longer. Unisys still have a 36-bit series, but they're the last of the breed. See here. [36bit.org]

    Big scary IBM boxes are where you see the 31 bit weirdness.

  • Re:Some systems... (Score:5, Informative)

    by __past__ ( 542467 ) on Sunday December 21, 2003 @08:59PM (#7782739)
    they'd have the sense not to override the default behaviour of the damn language, which would be to go to bignum if necessary. It would take effort to write a declaration to actually deliberately override the behaviour, and would be A Seriously Stupid Thing To Do. Doesn't mean that somebody, somewhere wouldn't do it, of course
    Indeed, someone did, sort of. Namely the implementors of the SBCL [sourceforge.net] compiler (and they probably inherited it from CMUCL [cons.org]) who, generally, definitely do not qualify as stupid.
    "... and of course, CL transparently uses bignums when a numeric quantity exceeds the range of machine words, so we don't get overflow problems"
    * (decode-universal-time (+ (* 86400 365 50) (get-universal-time)))
    debugger invoked on condition of type TYPE-ERROR:
    The value 2635786389 is not of type (SIGNED-BYTE 32).

    This is because I didn't specify a timezone, so it asks unix for the default timezone and DST settings, and unix needs a time_t, which is 32 bits on this box.
    Dan Barlow, SBCL and the Y2038 problem [gmane.org]

    So even if Lisp tends to not have overflow problems, Unix and C will come back and bite you if you give them a chance...
  • by dmobrien_2001 ( 731198 ) on Sunday December 21, 2003 @09:18PM (#7782820) Homepage Journal
    Uh, you mean Tue Jan 19 03:14:08 GMT 2038, doncha?
  • by Yaztromo ( 655250 ) on Sunday December 21, 2003 @09:21PM (#7782830) Homepage Journal

    I'd like to add to that January 1st, 2032, which is when the date structure in older Macs and PalmOS devices will overflow.

    Yaz.

  • by klausner ( 92204 ) on Sunday December 21, 2003 @09:23PM (#7782839)
    If you read the CNET article, this only affects software from a single company, and they are supposed to be "product lifecycle management" specialists. Why would anyone else care? The rest of us have until 2038 before there is a problem. Probably will get fixed in the 4.2 kernel.
  • by Anonymous Coward on Sunday December 21, 2003 @09:27PM (#7782862)
    $ ./2038.pl
    Tue Jan 19 03:14:01 2038
    Tue Jan 19 03:14:02 2038
    Tue Jan 19 03:14:03 2038
    Tue Jan 19 03:14:04 2038
    Tue Jan 19 03:14:05 2038
    Tue Jan 19 03:14:06 2038
    Tue Jan 19 03:14:07 2038
    Tue Jan 19 03:14:08 2038
    Tue Jan 19 03:14:09 2038
    Tue Jan 19 03:14:10 2038
    $ uname -mrs
    Linux 2.4.9-40smp alpha
    $ perl -v

    This is perl, v5.6.0 built for alpha-linux

    Copyright 1987-2000, Larry Wall
  • by Dahan ( 130247 ) <khym@azeotrope.org> on Sunday December 21, 2003 @09:28PM (#7782872)
    Isn't perl kinda big for that?

    $ echo seconds left: $(((1 << 30) - `date +%s`))

    (assuming your date(1) supports the %s extension)

  • The IBM OS/390 and Z/OS operating systems, which run on most IBM mainframes, are both 31-bit.

    They're actually 32-bit platforms, but addresses are only 31 bits. I believe they do arithmetic on 32 bits...
  • by Anonymous Coward on Sunday December 21, 2003 @09:45PM (#7782942)
    time() doesn't return a 32-bit value, it returns a time_t. The nature of a time_t is implementation-dependent.
  • Re:ObCalculation (Score:5, Informative)

    by Bombcar ( 16057 ) <racbmob@@@bombcar...com> on Sunday December 21, 2003 @09:54PM (#7782983) Homepage Journal
    Get the timezone right!

    date -u -d "1/1/70 `dc -e '2 30 ^p'` secs"
    Sat Jan 10 13:37:04 UTC 2004

    See man date
  • by RouterSlayer ( 229806 ) on Sunday December 21, 2003 @10:06PM (#7783029)
    If you want to know what the real values are, this article and the one on CNET are wrong in so many ways... ugh, but here are the real ones that will really affect people:

    FreeBSD 2.2.7 will start having this clock problem on January 18th, 2038 at 20:14 (8:14PM) EST, when the unix clock on FreeBSD will read 2147483640;
    20:15 (8:15PM) EST will cause FreeBSD's clock timer to claim an invalid date... joy!

    That's not 2^30 folks, that's 2^31 (2147483648), or about 8 seconds after the time I quoted above.

    I know because we still have one box running 2.2.7 here (and what a fun box it is too!) - it can't handle more than 128 megs of RAM. What is this - the dark ages? That was rhetorical... :)
  • by Dahan ( 130247 ) <khym@azeotrope.org> on Sunday December 21, 2003 @10:18PM (#7783081)
    No, it assumes a POSIX shell.
  • Re:RTFA (Score:1, Informative)

    by Anonymous Coward on Sunday December 21, 2003 @10:31PM (#7783122)
    Not a failure of C as such... it could be avoided by using pointers instead of values and using the return value for success/failure reporting. It's all about the design.
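    For instance, a sketch of that alternate design (a made-up wrapper, not a real libc interface): the result goes out through a pointer, and the return value is reserved purely for success/failure.

        #include <stdio.h>
        #include <time.h>

        /* Hypothetical variant of time(): result through a pointer,
           return value reserved for success/failure. */
        int get_seconds(long long *out)
        {
            time_t t = time(NULL);
            if (t == (time_t)-1)
                return -1;            /* failure */
            *out = (long long)t;      /* full positive range of the result stays usable */
            return 0;                 /* success */
        }

        int main(void)
        {
            long long now;
            if (get_seconds(&now) == 0)
                printf("%lld seconds since the epoch\n", now);
            return 0;
        }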
  • by naelurec ( 552384 ) on Sunday December 21, 2003 @10:38PM (#7783154) Homepage
    Hmm.. no... you forgot a /60 (60 minutes in an hour)

    so the correct answer would be ..

    ~0.0533 years ... which is ~19.45 days ..

  • by Helios292 ( 528182 ) <BYoung292@m[ ]works.org ['ail' in gap]> on Sunday December 21, 2003 @10:45PM (#7783186) Homepage
    Reset your power manager (you can find out how for your specific model on Apple's site), and then leave your 'book to charge fully overnight. Probably just a corrupted pmu setting. It's happened before on mine and many of the ones I worked on back when doing Apple service.
  • the arithmetic? (Score:3, Informative)

    by jqh1 ( 212455 ) on Sunday December 21, 2003 @11:04PM (#7783250) Homepage
    It will soon be about 2^30 (1 billion, not 2 billion) seconds since 1970 (do the arithmetic).

    It's not a billion seconds, it's 1,073,741,824 seconds -- right?

    and we are close:

    perl -e "print time();"
    1072061932

    One 365-day year is 60*60*24*365 = 31,536,000 seconds.

    there are 8 leap years in there, and probably a few leap seconds, sure, but:

    1,000,000,000 / 31,536,000 = 31.70979 years to hit 1 billion seconds.

    1970 + 31.7 years puts us in September 2001. Randall Schwartz called this event U1E9 (unsigned 10 to the 9th power?) - there were a few glitches (mostly sorting related), but I've still got all my canned goods and batteries.
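    (A quick C check of where 10^9 seconds actually lands, for anyone curious -- not tied to any particular system:)

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t u1e9 = 1000000000;                /* 10^9 seconds after the epoch */
            printf("%s", asctime(gmtime(&u1e9)));    /* Sun Sep  9 01:46:40 2001 (UTC) */
            return 0;
        }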

  • by Skapare ( 16644 ) on Sunday December 21, 2003 @11:07PM (#7783270) Homepage

    Actually UNIX is really using an effective 31 bits because it defaults to a signed quantity, and hence the highest-order bit is really a sign bit. So when the clock finally increments 0x7FFFFFFF (19 January 2038 03:14:07) to 0x80000000, the time will wrap back to 2,147,483,648 seconds before 1970, i.e. instead of being Tuesday 19 January 2038 03:14:08, it suddenly becomes Friday the Thirteenth (specifically Friday 13 December 1901 20:45:52).

    Those systems that are using an unsigned 32 bit time value can go on until Sunday 7 February 2106 06:28:15.

    If we were to switch to 64 bits, we could use a resolution of nanoseconds with all that extra space and still represent time until Friday 11 April 2262 23:47:16.854775807 before the sign bit becomes an issue (and negative values can represent time back to Tuesday 21 September 1677 00:12:43.145224192).
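    A trivial C demonstration of that wrap (assuming two's-complement 32-bit arithmetic and a libc whose gmtime() handles pre-1970 times; the cast below just sidesteps signed-overflow undefined behaviour, it isn't what any kernel literally does):

        #include <stdio.h>
        #include <stdint.h>
        #include <time.h>

        int main(void)
        {
            int32_t t32 = 0x7FFFFFFF;                 /* Tue Jan 19 03:14:07 2038 UTC */
            time_t before = t32;
            printf("before: %s", asctime(gmtime(&before)));

            t32 = (int32_t)((uint32_t)t32 + 1);       /* the increment, wrapped to 32 bits */
            time_t after = t32;                       /* now -2147483648 */
            printf("after:  %s", asctime(gmtime(&after)));   /* Fri Dec 13 20:45:52 1901 UTC */
            return 0;
        }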

  • by Anonymous Coward on Sunday December 21, 2003 @11:13PM (#7783300)
    You should use "localtime" as time will always return the seconds left since 1970.

    >perl -e 'print "seconds left: ", ((2**30) - time), "\n"'
    seconds left: 1681182

    >perl -e 'print "seconds left: ", ((2**30) - localtime), "\n"'
    seconds left: 1073741824
  • Time is complex... (Score:5, Informative)

    by Goonie ( 8651 ) * <robert.merkel@b[ ... g ['ena' in gap]> on Monday December 22, 2003 @12:05AM (#7783566) Homepage
    Recording times accurately can get very complex in some cases, and longer time_t's aren't the whole solution.

    Firstly, every so often a leap second [wikipedia.org] is added to UTC. For this reason, over timescales of years it is impossible to exactly map unix time_t and calendar times.

    Another issue is determining when a transaction happened that occurred across multiple time zones...

  • by OwnedByTheMan ( 169684 ) on Monday December 22, 2003 @12:44AM (#7783713) Homepage
    ...hence the use of "365.242199 days/year" as opposed to 365.
  • by Detritus ( 11846 ) on Monday December 22, 2003 @01:17AM (#7783847) Homepage
    The official explanation [apple.com].

    Short version, it simplified leap-year calculations.

  • by some guy I know ( 229718 ) on Monday December 22, 2003 @02:15AM (#7784070) Homepage
    Not to be too pedantic or anything, but to use your shift right scheme for extracting integers, the second-to-LSB must be zero (naught) for even numbers, and 1 for odd numbers (in a two's comp machine).
    So the tag bits have to be 00 or 01 for even integers and 10 or 11 for odd integers.

    Some implementations of LISP go even further, using additional bits in the non-integer case.
    For example:
    0 - Upper 31 bits are signed integer (even or odd).
    001 - Mask to get pointer to object, 8-byte aligned.
    011 - Upper 29 bits are index into cons table.
    101 - Upper 29 bits are index into string table.
    111 - etc.

    I seem to recall that Scheme, a LISP dialect, uses this type of tortuous mechanism to an extreme extent.
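    A rough C sketch of what dispatching on a tag layout like the one above might look like (purely illustrative; no particular LISP or Scheme actually uses these exact masks):

        #include <stdint.h>
        #include <stdio.h>

        static void describe(uint32_t w)
        {
            if ((w & 1u) == 0)                          /* tag 0: upper 31 bits are a signed integer */
                printf("fixnum %d\n", (int32_t)w >> 1);
            else switch (w & 7u) {                      /* otherwise inspect the low 3 bits */
                case 1:  printf("object pointer %#x\n", (unsigned)(w & ~7u)); break;
                case 3:  printf("cons table index %u\n", (unsigned)(w >> 3)); break;
                case 5:  printf("string table index %u\n", (unsigned)(w >> 3)); break;
                default: printf("other tagged value\n"); break;
            }
        }

        int main(void)
        {
            describe(42u << 1);         /* the fixnum 42 */
            describe((7u << 3) | 3u);   /* entry 7 in the cons table */
            return 0;
        }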
  • Not quite accurate (Score:2, Informative)

    by SoupaFly ( 558227 ) on Monday December 22, 2003 @03:59AM (#7784459)
    2^30 seconds / 60 seconds/minute / 60 minutes/hour / 24 hours/day / 365.242199 days/year = 34.025551925361 years

    Actually, there are approximately 86,400.002 seconds in a day (see here [navy.mil]). In addition, you neglected to add the leap seconds that may or may not be required.

    I'm just sayin', if you're going to try and be ultra accurate, then don't half-ass it.

  • by vrt3 ( 62368 ) on Monday December 22, 2003 @05:32AM (#7784703) Homepage
    http://lxr.linux.no/source/include/linux/types.h?v=2.0.39 [linux.no], line 57:
    typedef __kernel_time_t time_t;

    http://lxr.linux.no/source/include/asm-i386/posix_types.h?v=2.0.39 [linux.no], line 21:
    typedef long __kernel_time_t;

    It's defined as a long for the other architectures as well. AFAIK a long is 32 bits on all platforms Linux 2.0.39 runs on.
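    If you want to see what your own system's time_t looks like, a tiny sketch (standard C, nothing Linux-specific):

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            /* How wide is time_t here, and is it a signed type? */
            printf("sizeof(time_t) = %zu bytes, signed: %s\n",
                   sizeof(time_t), (time_t)-1 < (time_t)0 ? "yes" : "no");
            return 0;
        }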

  • by vrt3 ( 62368 ) on Monday December 22, 2003 @05:52AM (#7784737) Homepage
    I'm sorry, but you're wrong. Floating point numbers have such a large range because the fact that their point floats gives them dynamic precision. That means that small numbers are very precise, while large numbers have much less precision.

    Think about it: a 32-bit floating point number cannot possibly have more distinct states than a 32-bit integer, yet it spans a much, much larger range. It does so by sacrificing precision for large numbers.

    Just try it:

        #include <stdio.h>

        int main(void)
        {
            float a = 4e9;
            float b = 4.3;
            printf("%f\t%f\t%f\n", a, b, a+b);
            return 0;
        }

    Output:
    4000000000.000000 4.300000 4000000000.000000

    With doubles (64-bit floating point numbers) this example would work:

        #include <stdio.h>

        int main(void)
        {
            double a = 4e9;
            double b = 4.3;
            printf("%f\t%f\t%f\n", a, b, a+b);
            return 0;
        }

    gives 4000000000.000000 4.300000 4000000004.300000
  • Re:ObCalculation (Score:1, Informative)

    by Anonymous Coward on Monday December 22, 2003 @06:36AM (#7784841)
    Take 34 years from 1/1/1970 as an example. 34*365.25 = 12418.5 days. But that's wrong. We know the exact number of days, which is 34*365 + 8 = 12418. There are 8 leap years between 1970 and 2004. Notice that it is off by exactly half a day. Oh, and it should be 365.2422 days per year. There are also 22 leap seconds in that time period.
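    The exact day count is easy to sanity-check with a few lines of C, just counting leap years by the Gregorian rule (a throwaway sketch):

        #include <stdio.h>

        static int is_leap(int y)
        {
            return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
        }

        int main(void)
        {
            int days = 0;
            for (int y = 1970; y < 2004; y++)    /* 1/1/1970 up to 1/1/2004 */
                days += is_leap(y) ? 366 : 365;
            printf("days: %d\n", days);          /* prints 12418 = 34*365 + 8 */
            return 0;
        }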
    Network Associates' (McAfee) Webshield product has already failed on the 1,000,000,000 second test. (In decimal - not a power of 2.)

    This SMTP server stores the time of the next retry for sending a message, but only the last 9 digits. So come mid-2001, Webshield would no longer retry sending a mail if the first attempt didn't work, because it concluded it had been about 30 years since it last tried and it should give up about now.

    There is a hot fix available, but this insidious problem only manifests itself if there is a problem at the receiving end, so few people know they should upgrade, and they blame the recipient for mail that bounces immediately. Network Associates still provides the software unpatched - hot fixes are only to be applied if you report the specific problem to be fixed.

    If you use tempfailing [slashdot.org] (greylisting) as I do, then this immediately stuffs up any Webshield user trying to communicate with you, because they will not retry after being given a temporary failure SMTP error code.

    So if this example is anything to go by, then yeah, there'll be recent, modern commercial software that will fail (perhaps in non-obvious ways), with no fix available until after the event.

  • Re:Mod parent up (Score:2, Informative)

    by dan_b ( 982 ) on Tuesday December 30, 2003 @04:51PM (#7838566) Homepage
    Most systems are moving or have moved to 64 bit time_t, but that doesn't actually help because the compiled timezone data files which they're still using for timezone and DST lookup are still based on 32 bit quantities. At least, that was the case on my Linux Alpha box (until it died this morning. Need PSU)

    The workaround we have in SBCL sucks more than slightly, to be honest. What we'll probably do at some point is get the source Olsen data ourselves and parse it into a form that doesn't throw useful information away quite as freely.
