2010 Bug Plagues Germany

krou writes "According to the Guardian, some 30 million chip-and-PIN cards in Germany have been affected by a programming failure, which saw the microchips in cards unable to recognize the year change. The bug has left millions of credit and debit card users unable to withdraw money or make purchases, and has stranded many on holiday. French card manufacturer Gemalto accepted responsibility for the fault, 'which it is estimated will cost €300m (£270m) to rectify.' They claim cards in other countries made by Gemalto are unaffected."
  • 2010 (Score:5, Interesting)

    by s31523 ( 926314 ) on Thursday January 07, 2010 @09:55AM (#30681720)
    Who would have thought 2010 was going to be a big deal? We just had a 2010 programming problem at work. Everything worked great in December; then in January our software simulation stopped sending the correct time to our hardware. It turns out the simulation handles 2010 incorrectly. We now have to set our PC clocks to 2009 until the team gets a fix out. I bet we see more of this.
  • Untested software (Score:4, Interesting)

    by nodan ( 1172027 ) on Thursday January 07, 2010 @10:04AM (#30681800)
    Again, it's surprising that such an obvious software bug makes it into the real world. You really can never trust untested software. What disturbs me most are the proposed "solutions": the companies issuing the cards are trying to avoid exchanging the cards by all means, due to the costs involved. However, that comes at the cost of sacrificing the security gained by introducing the now ill-functioning chip. Updating the software on the card at the banking terminal has also been mentioned - I'm surprised that this is possible at all. Essentially, that opens another huge security gap (which might have been there for a long time but went unnoticed so far).
  • by foobsr ( 693224 ) on Thursday January 07, 2010 @10:10AM (#30681862) Homepage Journal
    Technology: 2016 Bug Hits Text Messages, Payment Processing [slashdot.org]

    Experts (or excellence, YMMV) at work.

    CC.
  • by langelgjm ( 860756 ) on Thursday January 07, 2010 @10:23AM (#30681984) Journal

    Reminds me of a story I mention every so often. When I was an undergrad, I along with a few other enterprising students discovered that our university ID cards stored our social security numbers in the clear on the magnetic stripe. We eventually brought this to the attention of the school, who rushed to find a solution. They needed a unique identifier that was also not important information. They quickly settled on using our "university ID numbers" - arbitrary numbers whose value had no importance to the individual, and they reissued cards to the entire university.

    A few weeks after they finished reissuing cards, one of us discovered that the "university ID number" was a primary key in the school's LDAP database. By using a directory browser, you could look up any student, staff, or faculty member by name, and obtain their university ID number. Since this was the number on their ID card, and their ID card controlled access to buildings, labs, etc., it was trivial to obtain access privileges to pretty much anywhere on campus. Want to make it look like the president of the university broke into the nuclear reactor? Look him up, write his ID number to a magnetic stripe card (we had built the hardware to do this, as well as to "fake" cards by simply typing in numbers and generating the signals, without actually making a card), and have at it.

    Again, it was brought to the attention of the university. After a failed attempt to begin disciplinary action against one of us, they recalled everyone's cards and wrote new, presumably pseudo-random identifiers to them that were not publicly accessible.

    Moral of the story? In your rush to fix one problem, make sure you don't create an even bigger one.

  • Re:2010 (Score:5, Interesting)

    by edmicman ( 830206 ) on Thursday January 07, 2010 @10:29AM (#30682076) Homepage Journal

    Two weeks before the new year, I found multiple "Y2K10" bugs in our legacy code. We're a health insurance company, and this is for a major national client who is sending us data with a 2-digit year format. There is code all over the place making assumptions about how to treat those dates, and the logic is faulty. We've fixed what we've found, but we have no way of doing a complete audit, so we're just going to have to fix them as issues arise. I love the clusterf*ck that is my job.

  • Re:2010 (Score:5, Interesting)

    by guruevi ( 827432 ) on Thursday January 07, 2010 @10:30AM (#30682090)

    My question is, why the f*** do so many systems have issues with their clocks? In just about any language (C, Java, .NET, Perl, PHP, SQL...) there are built-in libraries available that do time correctly. If you're unsure how to store time, Unix epoch is just about the simplest way to store it (it's a freakin' integer); it's universally recognized and accepted and very easy to calculate with, and if you need more precision just make it a floating point number and add digits after the decimal point.

    I see way too many implementations where people build their own libraries to convert a string into a date format, calculate with it, and convert back. On embedded systems it's even worse. Some hope to save storage space and speed by building custom functions that pack a time format (e.g. 2010-01-07 10:50:59) into an integer (20100107105059) and back, simply by stripping some characters and implementing the storage part in assembler. When they decide to export to other states or countries, however, they have to implement conversions for timezones and daylight saving time, and the code becomes hopelessly buggy and bloated - usually too late to fix, since it's already out in the field. They could have saved time (and storage space) by just storing it as 1262861459 using an (initially) somewhat larger library.

  • Re:2010 (Score:5, Interesting)

    by digitig ( 1056110 ) on Thursday January 07, 2010 @10:56AM (#30682388)

    My question is, why the f*** do so many systems have issues with their clocks? In just about any language (C, Java, .NET, Perl, PHP, SQL...) there are built-in libraries available that do time correctly. If you're unsure how to store time, Unix epoch is just about the simplest way to store it (it's a freakin' integer); it's universally recognized and accepted and very easy to calculate with, and if you need more precision just make it a floating point number and add digits after the decimal point.

    Partly because of ignorance of the libraries, but partly because the built-in libraries simply don't do time correctly. Unix epoch? It rolls over in 2038, so it's no use for dealing with 49- or 99-year land leases (or the 999-year lease I held on one property). I know somebody who worked on software dealing with mineral exploration rights who had just this problem: Unix epoch simply got it wrong for the timescales involved (and he wasn't allowed to use third-party libraries because management perceived that as a support issue). What was he to do but roll his own? And it's very much because people think it's simple, without looking at the actual issues.

  • Re:2010 (Score:1, Interesting)

    by Anonymous Coward on Thursday January 07, 2010 @11:41AM (#30682996)

    "If you're having problems with the transition from "09" to "10", then you'd also have problems with the transition from "2009" to "2010""

    Not so. Some hacks to "fix" the problem have a "pivot year". Years before that are interpreted one way, and years after are interpreted another. The full year representation does not require any of that, and would not be liable to the same error.

    "The client had better not get any blame for this"

    According to their post, the client is sending the year over as two digits. That is on them, unless the company the parent poster works for required them to send it that way.

    I would like to go on record predicting the year 10k problem set, by the way. When 4 digit years are no longer sufficient. I have patents on fixing this issue, so start the bank deposits now.

  • by WinterSolstice ( 223271 ) on Thursday January 07, 2010 @11:46AM (#30683078)

    I was running a Y2K lab at my company from 1999 to 2000, and we found a TON of serious problems. Nearly all of our internal stuff had major issues, as well as email, phone systems, backup systems, and several operating systems.

    The tests I ran went from 1998->2012 in one year increments (with full tests by all teams at each year step) and most of those problems were nailed down.

    I'm guessing not all shops tested much further out than 2001 or 2002 - probably due to poor planning and lack of funds. As it was we had to cut ours back to 2012 because of budget constraints, so I can only imagine other shops did likewise.

  • Re:2010 (Score:1, Interesting)

    by Anonymous Coward on Thursday January 07, 2010 @12:07PM (#30683406)

    Before 2000, many programs extended two-digit years by adding 1900. "89" became "1989". During the Y2K fixing frenzy, some evil/lazy/incompetent programmers replaced that code with something like this:
    if (year<10) year+=2000;
    else year+=1900;

    This solved the Y2K problem by replacing it with a Y2K+10 problem.

    Another type of problem caused the SMS date misinterpretation and was also only possible with a two digit year format: A two digit BCD number fits into one byte, 4 bits for one digit and 4 bits for the other digit. If you read a BCD number smaller than 10, it looks just like a normal decimal number, but the number 10 in BCD format is different from the number 10 in binary format. Had the year been formatted with four digits, it would have been obvious that it is not stored as a normal binary number and the misinterpretation would never have made it into production code.

  • by Nefarious Wheel ( 628136 ) on Thursday January 07, 2010 @05:28PM (#30687784) Journal

    I was running a Y2K lab at my company from 1999 to 2000, and we found a TON of serious problems...

    My wife was in charge of a Pick-based hospital IS back then. She put in a huge number of extra-long days getting the dates expanded so the hospital wouldn't have to "go manual" on 1-Jan-2000. She made the deadline, but it nearly killed her. Hospital administrators then gave her a sledge for making such a big deal over it, because clearly it was all a farce - nothing happened as a result of the date changing to year 2000.

    The loss to her was any further interest in an IT career. She's now teaching immersive medieval history to a number of schools, where her work is at least appreciated by her clients.

  • by jc42 ( 318812 ) on Thursday January 07, 2010 @07:38PM (#30689212) Homepage Journal

    Plus, by waiting until the last minute, no labor was wasted in pre-fixing systems that would already be obsoleted by 1999.

    Yeah, but recall that the Y2K bugs were mostly found in corporate COBOL programs, and COBOL code doesn't get retired. It just accumulates in the musty corners of old runtime libraries. There was a lot of COBOL code from the 1960s that was patched for Y2K.

    To see how bad the COBOL retention syndrome is, consider that IBM has had to supply emulators of older processors, so that customer companies could continue to run binaries for which the COBOL source has been lost. Yes, it really is that bad in the corporate DP/MIS/IT world. Of course, the binary-only programs generally couldn't be fixed for Y2K, and had to be rewritten. But they were rewritten by people adapted to the same corporate culture, and made the same mistakes in date/time handling that they've always made.

    I remember reading a story by a fellow who got curious about the date problems in COBOL code, and started collecting examples of COBOL date-manipulating code. He said that when his count of the number of different date formats passed 180, he decided that he understood the problem quite well. This is still going on, as can be seen by looking at the date-handling code in newer software, and we still have the same problems.

    Just last week, I gave a demo of some web code that I've been developing for a (mercifully) unnamed client. During the demo, some of the screens exposed the fact that the code internally saves all dates in ISO standard form, for the UT "time zone". I assured them that I could add the obvious translations to local time fairly soon, but this turned out not to be good enough. They were insistent that they didn't want the code "working on European time", and wanted the internal times all in local time. This despite the fact that they (and their visitors) are scattered across about 10 time zones. Just displaying all times in the local zone isn't acceptable; they object to Universal Time internally on general principles. I've seen this repeatedly on a lot of projects. It tells you a lot about why our software continues to have time-handling problems, whenever any particular ad-hoc time representation reaches a value ending with some number of zeroes (in base two or ten).

    It was really funny a few days ago, when we read about spamassassin's bug triggered by the first day of the year 2010. I predict that we'll get reports like this in every year ending with a zero, for as long as any of us is alive. ;-)
