Bug Programming The Almighty Buck

2010 Bug Plagues Germany 233

Posted by CmdrTaco
from the well-that-was-ten-years-late dept.
krou writes "According to the Guardian, some 30 million chip and pin cards in Germany have been affected by a programming failure that left the microchips in the cards unable to recognize the year change. The bug has left millions of credit and debit card users unable to withdraw money or make purchases, and has stranded many on holiday. French card manufacturer Gemalto accepted responsibility for the fault, 'which it is estimated will cost €300m (£270m) to rectify.' They claim cards in other countries made by Gemalto are unaffected."
This discussion has been archived. No new comments can be posted.

  • by mapkinase (958129) on Thursday January 07, 2010 @09:53AM (#30681700) Homepage Journal

    from TOA

    A French card manufacturer, Gemalto, admitted today it was to blame for the failure, which it is estimated will cost €300m (£270m) to rectify.

    I wonder how it compares to the losses from the Y2K bug... I know it's hard to compare, because there were unquantified losses from unnecessary checks, and differences in scale, anticipation, and the effort to fix things before they manifested.

    I guess it hits you when you're least expecting it.

    • by zaydana (729943) on Thursday January 07, 2010 @10:11AM (#30681876)

      Moreover, it makes you wonder how much of a problem Y2K may have actually been if we hadn't looked for all the problems and fixed them.

      Chances are things like this would have only been the beginning if Y2K hadn't been anticipated and planned for, even if we over-reacted. Maybe we should be giving some people more credit than we do...

      • by Nadaka (224565) on Thursday January 07, 2010 @10:53AM (#30682362)

        The response to Y2K was not planned in advance, and it was not an overreaction.

        Y2K issues were known in the '80s. Had IT been allowed to respond in a timely manner, it would have cost much less, been checked more thoroughly, and finished earlier. Instead they waited until the last possible moment and poured much more money into it, hiring as many developers as possible to put in a rushed hack job and then firing them when the hack worked, instead of retaining them to vet, verify, and implement permanent solutions where needed. This issue is a result of the failure to react appropriately to Y2K. The rushed temporary get-it-done-yesterday hacks are starting to fail.

        • by shentino (1139071)

          I suspect it's also aggravated by a bunch of sleazy contractors taking advantage of desperate clients who knew they were about to get bitten in the behind, and deciding to cheap out and do a half-assed job in the first place.

          Seriously, after what caused Y2K, only a complete moron or a crook would add 2000 to a single digit in a barcode.

          • Re: (Score:3, Insightful)

            by Rich0 (548339)

            You know, companies make the conscious decision to not have permanent staff to oversee contractors. They get what they pay for. That doesn't excuse contractors, but there is this thing called due diligence.

            Also, I'd say there is a 90% chance that the contractor spelled out exactly what they were doing and its implications, and somebody in the company signed off. Maybe they didn't read it all, but it is just as likely that they were given the choice of $600k to do it right, and $500k to do it cheap, and t

        • Re: (Score:2, Insightful)

          by hrimhari (1241292)

          The rushed temporary get-it-done-yesterday hacks are starting to fail.

          Wonderful rant, but pray tell, how does this issue link to Y2K hacks when it's an update to previous cards limited to the German market? Do you have inside knowledge from Gemalto of what motivated the aforementioned update, and of why they chose that particular representation of the year in that geographic location?

        • by inviolet (797804)

          Y2K issues were known in the '80s. Had IT been allowed to respond in a timely manner, it would have cost much less, been checked more thoroughly, and finished earlier. Instead they waited until the last possible moment and poured much more money into it, hiring as many developers as possible to put in a rushed hack job and then firing them when the hack worked, instead of retaining them to vet, verify, and implement permanent solutions where needed.

          You forget TVM. Money spent fifteen years later is waaaaaaay c

          • by jc42 (318812) on Thursday January 07, 2010 @07:38PM (#30689212) Homepage Journal

            Plus, by waiting until the last minute, no labor was wasted in pre-fixing systems that would already be obsoleted by 1999.

            Yeah, but recall that the Y2K bugs were mostly found in corporate COBOL programs, and COBOL code doesn't get retired. It just accumulates in the musty corners of old runtime libraries. There was a lot of COBOL code from the 1960s that was patched for Y2K.

            To see how bad the COBOL retention syndrome is, consider that IBM has had to supply emulators of older processors, so that customer companies could continue to run binaries for which the COBOL source has been lost. Yes, it really is that bad in the corporate DP/MIS/IT world. Of course, the binary-only programs generally couldn't be fixed for Y2K, and had to be rewritten. But they were rewritten by people adapted to the same corporate culture, and made the same mistakes in date/time handling that they've always made.

            I remember reading a story by a fellow who got curious about the date problems in COBOL code, and started collecting examples of COBOL date-manipulating code. He said that when his count of the number of different date formats passed 180, he decided that he understood the problem quite well. This is still going on, as can be seen by looking at the date-handling code in newer software, and we still have the same problems.

            Just last week, I gave a demo of some web code that I've been developing for a (mercifully) unnamed client. During the demo, some of the screens exposed the fact that the code internally saves all dates in ISO standard form, for the UT "time zone". I assured them that I could add the obvious translations to local time fairly soon, but this turned out not to be good enough. They were insistent that they didn't want the code "working on European time", and wanted the internal times all in local time. This despite the fact that they (and their visitors) are scattered across about 10 time zones. Just displaying all times in the local zone isn't acceptable; they object to Universal Time internally on general principles. I've seen this repeatedly on a lot of projects. It tells you a lot about why our software continues to have time-handling problems whenever any particular ad-hoc time representation reaches a value ending in some number of zeroes (in base two or ten).

            It was really funny a few days ago, when we read about spamassassin's bug triggered by the first day of the year 2010. I predict that we'll get reports like this in every year ending with a zero, for as long as any of us is alive. ;-)

      • by WinterSolstice (223271) on Thursday January 07, 2010 @11:46AM (#30683078)

        I was running a Y2K lab at my company from 1999 to 2000, and we found a TON of serious problems. Nearly all of our internal stuff had major issues, as well as email, phone systems, backup systems, and several operating systems.

        The tests I ran went from 1998->2012 in one year increments (with full tests by all teams at each year step) and most of those problems were nailed down.

        I'm guessing not all shops tested much further out than 2001 or 2002 - probably due to poor planning and lack of funds. As it was we had to cut ours back to 2012 because of budget constraints, so I can only imagine other shops did likewise.

        • by Nefarious Wheel (628136) on Thursday January 07, 2010 @05:28PM (#30687784) Journal

          I was running a Y2K lab at my company from 1999 to 2000, and we found a TON of serious problems...

          My wife was in charge of a Pick-based hospital IS back then. She put in a huge number of extra-long days getting the dates expanded so the hospital wouldn't have to "go manual" on 1-Jan-2000. She made the deadline, but it nearly killed her. Hospital administrators then gave her a sledge for making such a big deal over it, because clearly it was all a farce - nothing happened as a result of the date changing to year 2000.

          The loss to her was any further interest in an IT career. She's now teaching immersive medieval history to a number of schools, where her work is at least appreciated by her clients.

  • 2010 (Score:5, Interesting)

    by s31523 (926314) on Thursday January 07, 2010 @09:55AM (#30681720)
    Who would have thought 2010 was going to be a big deal? We just had a 2010 programming problem at work. Everything worked great in December; then in January our software simulation stopped sending the correct time to our hardware. Turns out the simulation handles 2010 incorrectly. We now have to set our PC clocks to 2009 until the team gets a fix out. I bet we see more of this.
    • Re: (Score:3, Funny)

      by corbettw (214229)

      Damn Mayans and their inability to correctly predict the end of the world! It came two years early!

    • Re:2010 (Score:5, Interesting)

      by edmicman (830206) on Thursday January 07, 2010 @10:29AM (#30682076) Homepage Journal

      2 weeks before the new year I found in our legacy code multiple "Y2K10" bugs. We're a health insurance company, and this is for a major national client who is sending us data with a 2-digit year format. There is code all over the place that is making assumptions about how to treat those dates, but it's faulty logic. We've fixed what we've found, but have no way of doing a complete audit so we're just going to have to fix them as issues arise. I love the clusterf*ck that is my job.

      • by u8i9o0 (1057154)

        Wait, 2-digit year format? If you're having problems with the transition from "09" to "10", then you'd also have problems with the transition from "2009" to "2010". A Y2K-like bug would mean that the INPUT value is incomplete, or essentially pre-truncated. What you describe is code that intentionally truncates the value itself. The client had better not get any blame for this.

        • by Mr. DOS (1276020)

          It sounds like the software assumes incorrect start and end dates for double-digit years; i.e., it interprets 09 as 2009 and 10 as 1910.

                --- Mr. DOS

        • by aaarrrgggh (9205)

          One of the kludges you see with 2-digit input years is assuming that if the number is less than some pivot x, you add 2000 to get the correct year; otherwise you add 1900. People (apparently) assumed that by 2010 all the Y2K kludges would have been worked around... or just didn't care.

          BCD issues are also much more common than I (as a non-programmer) would have ever expected.
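The windowing kludge described above is easy to sketch; the pivot values here are illustrative, not taken from any real system:

```python
def expand_year(yy: int, pivot: int = 50) -> int:
    """Typical 'windowing' kludge: 2-digit years below the pivot become 20xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

# With a sane pivot, the window holds for a while:
print(expand_year(99))  # 1999
print(expand_year(10))  # 2010

# With the faulty logic described upthread (a pivot that has expired), "10" breaks:
print(expand_year(10, pivot=10))  # 1910 -- the "Y2K10" bug
```

The bug is not the windowing itself but choosing a pivot that the calendar catches up with.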

    • Re:2010 (Score:5, Interesting)

      by guruevi (827432) <evi.smokingcube@be> on Thursday January 07, 2010 @10:30AM (#30682090) Homepage

      My question is, why the f*** do so many systems have issues with their clocks? In just about any language (C, Java, .NET, Perl, PHP, SQL...) there are built-in libraries available that do time correctly. If you're unsure how to store time, Unix epoch is just about the simplest way (it's a freakin' integer); it's universally recognized and accepted, very easy to calculate with, and if you need more precision just make it a floating point number and add digits after the decimal point.

      I see way too many implementations where people build their own libraries to convert a string into a date format, calculate with it, and convert back. On embedded systems it's even worse. Some hope to save storage space and speed by building custom functions that pack a time format (e.g. 2010-01-07 10:50:59 pm) into an integer (201001071050591) and back, simply by stripping some characters and implementing the storage part in assembler. When they decide to export to other states/countries, however, they then have to implement conversions for timezones and daylight savings time, and the code becomes hopelessly buggy and bloated - usually too late to fix, since it's already out in the field. They could've just saved time (and storage space) by storing it as 1262904659, using an (initially) somewhat larger library.
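The contrast the parent draws can be sketched in a few lines; the timestamp is arbitrary and is treated as UTC purely for illustration:

```python
import calendar
import time

stamp = "2010-01-07 22:50:59"

# Home-grown scheme: strip the separators and keep the digits as one integer.
packed = int(stamp.replace("-", "").replace(" ", "").replace(":", ""))
print(packed)  # 20100107225059 -- no timezone, no DST, awkward to do math on

# Epoch scheme: a single integer you can subtract, compare, and convert.
epoch = calendar.timegm(time.strptime(stamp, "%Y-%m-%d %H:%M:%S"))
print(epoch)  # 1262904659
```

Subtracting two "packed" values gives nonsense across a month or hour boundary; subtracting two epoch values gives seconds, always.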

      • by fotoguzzi (230256) on Thursday January 07, 2010 @10:37AM (#30682178)
        Hah, hah, hah!
      • Re:2010 (Score:5, Interesting)

        by digitig (1056110) on Thursday January 07, 2010 @10:56AM (#30682388)

        My question is, why the f*** do so many systems have issues with their clocks? In just about any language (C, Java, .NET, Perl, PHP, SQL...) there are built-in libraries available that do time correctly. If you're unsure how to store time, Unix epoch is just about the simplest way (it's a freakin' integer); it's universally recognized and accepted, very easy to calculate with, and if you need more precision just make it a floating point number and add digits after the decimal point.

        Partly because of ignorance of the libraries, but partly because the built-in libraries simply don't do time correctly. Unix epoch? It rolls over in 2038, so it's no use for dealing with 49- or 99-year land leases (or the 999-year lease I held on one property). I know somebody who worked on software dealing with mineral exploration rights who had just this problem: Unix epoch simply got it wrong for the timescales involved (and he wasn't allowed to use 3rd-party libraries because management perceived that as a support issue). What was he to do but roll his own? And it's very much because people think it's simple, without looking at the actual issues.
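The rollover the parent describes is easy to demonstrate; a sketch assuming a signed 32-bit time_t (the lease dates are made up):

```python
import struct
from datetime import datetime, timezone

# A 99-year lease signed in 1990 expires in 2089 -- well past 19 Jan 2038,
# the last moment a signed 32-bit time_t can represent.
expiry = int(datetime(2089, 1, 1, tzinfo=timezone.utc).timestamp())
print(expiry > 2**31 - 1)  # True: it no longer fits in 32 signed bits

# Simulate stuffing it into a signed 32-bit field anyway:
wrapped = struct.unpack("<i", struct.pack("<I", expiry & 0xFFFFFFFF))[0]
print(wrapped)  # negative, i.e. a "date" before 1970
```

A 64-bit time_t avoids this, but not a 32-bit database column or wire format that the value passes through.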

        • And it's very much because people think it's simple, without looking at the actual issues.

          Indeed. Time is one of the most problematic issues I've ever had to deal with in my career. There are just so many different special cases, like leap years/seconds, and corner cases, like converting from one timezone to another in the middle of a daylight savings time change in one or both of those timezones. Somebody ought to write a paper, or even a book, on all the stuff you have to watch out for.
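One of those corner cases in concrete terms: during the fall-back transition, a single local wall-clock time names two different instants. A sketch using Python's stdlib zoneinfo module (Python 3.9+; some platforms also need the tzdata package):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# 02:30 on 31 Oct 2010 in Berlin happens twice: once in CEST, once in CET.
berlin = ZoneInfo("Europe/Berlin")
ambiguous = datetime(2010, 10, 31, 2, 30, tzinfo=berlin)

print(ambiguous.replace(fold=0).utcoffset())  # 2:00:00 (first pass, summer time)
print(ambiguous.replace(fold=1).utcoffset())  # 1:00:00 (second pass, standard time)
```

Any conversion routine that ignores this ambiguity silently picks one of the two instants, which is exactly the kind of bug the parent is describing.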

        • That is exactly where open source shines: Clone just the date/time/calendar part of the (GNU) standard library, and patch it so it works with your needs. Then either offer this back to GNU, and continue to clone every release. Or just use updates of the library, to carefully apply applicable patches to your fork of that part.

          You avoid rolling your own custom solution (with all the huge traps inside date/time calculations), and you avoid having to depend on someone else (since you forked it, and can choose t

          • Re: (Score:3, Insightful)

            by digitig (1056110)

            continue to clone every release. Or just use updates of the library, to carefully apply applicable patches to your fork of that part.

            Sounds like exactly the sort of maintenance issue management wanted to avoid in the case I mentioned.

          • by rubycodez (864176)

            Ah, but your happy-go-lucky thinking is exactly what causes problems. Where do your nifty libraries ultimately get the time? From battery-backed hardware clocks, which have all manner of time issues and limitations; some even use only 16 bits or less, counting from some esoteric epoch date which might be the year the thing was designed, or the year 2000, or 1970, or 1969 and some month/day.

            Your GNU goodness suddenly turns to shit in those cases.

        • by Chemisor (97276) on Thursday January 07, 2010 @11:34AM (#30682892)

          2038 is only the limit on 32-bit platforms. On a 64-bit platform time_t is 64 bits, which will last "forever". We are already well on the way to 64-bit-only CPU operation, and I'm going to bet that by 2038 we'll have switched completely, if only to avoid the end of time. Heck, if you could only get a working 64-bit Flash plugin on Linux, all Linux users would go 64-bit already.

          • by aaarrrgggh (9205)

            ...so, what do you do when you wrote a program on a 32-bit platform 10 years ago, and it is still running?

        • Re: (Score:3, Funny)

          by l0b0 (803611)

          Planck units since the Big Bang is the only way! Let's see: ~5.4E-44 seconds per unit, ~1.37E10 years since the Big Bang ≈ 4.3E17 seconds ≈ 8.0E60 units, which should take about 203 bits to store.

          • by digitig (1056110)
            Just what we need -- an epoch that changes as science progresses (and estimates of the time of the Big Bang improve).
        • by shentino (1139071)

          Just another case of a PHB sticking his nose where it doesn't belong.

          If someone above you on the food chain is telling you HOW to do your job, that's micromanaging, especially when you know more than they do about what you are doing.

          • by digitig (1056110)
            Or the person above you in the food chain might just know more than you about what the company is actually trying to achieve, and/or may be correctly reflecting shared experience which you haven't yet encountered. If you can't deal with that, set up on your own.
      • Re: (Score:3, Insightful)

        by Rockoon (1252108)
        I think most of the time they are building their own conversions to date formats because they have to. Those standard libraries are great when the date is in a standard format, but multinationals deal with nearly every variation of date encoding known to man.

        1-digit years, 2-digit years, 4-digit years, month-before-day, month-after-day, year-first, year-last, decimal-separators, slash-separators, dash-separators, space-separators, a-mix-of-separators, without-day-of-week, with-day-of-week, with-day-of-w
      • Re: (Score:3, Insightful)

        by Rufty (37223)
        Premature optimization is the root of (most) evil.
      • it's a freakin' integer

        How would you know?

        All you have is a blob of bytes. It's all about how you interpret them. Even if you store Unix epoch in your datastore: who prevents third-party software from misinterpreting it as a Windows timestamp? Or a bitmap? And that's what happened here. A byte wasn't interpreted as an integer, but as a BCD number (or the other way round). And no one noticed, as it worked well as long as 0x03 = 00000011 = (BCD) 03 = 0000 0011.
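The mismatch described above can be sketched in a few lines; this is a hypothetical encoder, not Gemalto's actual code. For two-digit years, BCD and plain binary agree right up until 2010:

```python
def to_bcd(n: int) -> int:
    """Pack a two-digit number as binary-coded decimal: one nibble per digit."""
    return ((n // 10) << 4) | (n % 10)

for yy in (8, 9, 10, 11):
    stored = to_bcd(yy)  # what a BCD-minded writer puts into the byte
    misread = stored     # what a reader assuming plain binary sees
    print(f"year 20{yy:02d}: byte=0x{stored:02X}, binary reader gets {misread}")

# 2000-2009: 0x00..0x09 mean the same thing in BCD and in binary.
# 2010: the BCD writer stores 0x10, which a binary reader takes as 16.
```

That is why a bug of this shape can sit dormant for a full decade of correct operation.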

      • by DeadCatX2 (950953)

        Leap years; did you know that leap years are every 4 years, except every one hundred years, except every four hundred years? (that's why 2000 was a leap year, but 2100 won't be)

        Leap seconds.

        Localization.

        The libraries themselves have a storage size vs. resolution trade-off, so even Unix will have an Epoch Fail! [xkcd.com] in just a few dozen years.

        The new NTP protocol is supposed to have 128-bit timestamps: a 64-bit fractional second and a 64-bit whole second. This is allegedly small enough to resolve the amount of time it
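The leap-year rule from the list above, as a quick sketch:

```python
def is_leap(year: int) -> bool:
    # Every 4 years, except centuries, except every fourth century.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2000), is_leap(2100), is_leap(2012), is_leap(1900))
# True False True False
```

Code that only checks `year % 4` worked fine through 2000 (which happens to be a leap year) and quietly breaks in 2100.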

      • by Nicolay77 (258497)

        Databases.

        It doesn't matter if a C program can use fancy algorithms.

        It all depends on the format used to store the data in a DB. Blame the old DBAs from these systems, not the programmers.

    • My phone (HTC Touch Pro 2), or something with my service provider (Telus), has had an issue making all incoming text messages appear as though they are coming from the year 2016.
      It's not a huge, groundbreaking deal that stops me from using my phone, but any text messages I send appear to have been sent before my post-New-Year's received texts, making it hard to sort through and read conversations.

  • by HiChris! (999553) on Thursday January 07, 2010 @09:55AM (#30681724)
    "Although some cash machines were quickly reconfigured to override the 2010 problem, many bank customers were forced to queue to withdraw cash over the counter. Germany's economics minister, Rainer Brüderle, urged banks to 'ensure that credit and bank cards function without problem as soon as possible, or to replace them immediately'." My gosh standing in line to get money is so 1980
    • My gosh standing in line to get money is so 1980

      Yes, how horrible it is that for a brief moment in time, people had to revert to an older way which just works (albeit slightly more slowly).
      • Increasing the wait by an order of magnitude is hardly "slightly more slowly". And that ignores the additional paperwork that's required, too.
      • by Megane (129182)
        Back in the '90s, there were a couple of times when a store's credit card dialup was not working, and they had to pull out the trusty old Addressograph and get an imprint on carbon paper. The hard part was for them to find the thing.
      • by socsoc (1116769)
        I don't think I could get money from a teller. I walk up, "Hi, I want money" ... "Okay, fill out this form" ... "I don't know my account number" ... "Okay, bye, have a nice day then"
      • by Tim C (15259)

        (albeit slightly more slowly)

        Heh, my card was eaten by an ATM a few months ago. Let me assure you that standing in line in the bank to withdraw cash over the counter is definitely *not* slightly slower than using an ATM, and that's when you're probably the only one doing it. When everyone has to, it's going to take a long time. (Especially if people who don't have the correct ID start to argue with the tellers...)

    • Re: (Score:3, Insightful)

      The trouble is less in having to do it the old way than in having to do it the old way without notice and in an environment that has shifted toward the new way.

      Back when ATMs and POS electronics were uncommon, everyone knew well in advance that they would have to go get cash in order to make purchases, and do so during banking hours. Inconvenient; but everybody knows the score and the system is set up to work that way. If things suddenly shift back, you get a whole bunch of people, many whose first warni
    • by nospam007 (722110) *

      many bank customers were forced to queue to withdraw cash over the counter.

      Local news reported that since only the chip on the cards was wrong, you simply had to cover the contacts with some adhesive tape to force the machine to read the magnetic stripe instead.

      OTOH another point of attack is now widely known to the planet.

    • "Germany's economics minister, Rainer Brüderle, urged banks to 'ensure that credit and bank cards function without problem as soon as possible, or to replace them immediately'."

      With almost the same words as the German secretary of consumer protection.

      If I were working in IT for a bank, my answer would be a press release "What exactly is it this stupid tart thinks we're busy with right now?"

  • by egandalf (1051424) on Thursday January 07, 2010 @09:55AM (#30681726)
    It only took 65 years, but they finally got their revenge for those invasions. Subtle, the French are, very subtle and patient. Like mice.
    • by FooAtWFU (699187)
      Man, they wish those invasions only cost them 300 million, inflation or no inflation.

      ... I'll </whoosh> myself, thank you.

    • by LSD-OBS (183415)

      Taco has fixed it. We can get back on topic now. As you were.

  • Effected? (Score:4, Informative)

    by LSD-OBS (183415) on Thursday January 07, 2010 @09:57AM (#30681744)

    Come on, editors.

  • Untested software (Score:4, Interesting)

    by nodan (1172027) on Thursday January 07, 2010 @10:04AM (#30681800)
    Again, it's surprising that such an obvious software bug makes it into the real world. You really can never trust untested software at all. What disturbs me most are the proposed "solutions": the companies issuing the cards try to avoid the exchange of the cards by all means due to the costs involved. However, that comes at the cost of sacrificing the security gained by the introduction of the now ill-functioning chip. What has been mentioned as well was updating the software on the card at the banking terminal - I'm surprised that this is possible at all. Essentially, that opens another huge security gap (which might have been there for a long time but went unnoticed so far).
    • by langelgjm (860756) on Thursday January 07, 2010 @10:23AM (#30681984) Journal

      Reminds me of a story I mention every so often. When I was an undergrad, I along with a few other enterprising students discovered that our university ID cards stored our social security numbers in the clear on the magnetic stripe. We eventually brought this to the attention of the school, who rushed to find a solution. They needed a unique identifier that was also not important information. They quickly settled on using our "university ID numbers" - arbitrary numbers whose value had no importance to the individual, and they reissued cards to the entire university.

      A few weeks after they finished reissuing cards, one of us discovered that the "university ID number" was a primary key in the school's LDAP database. By using a directory browser, you could look up any student, staff, or faculty member by name, and obtain their university ID number. Since this was the number on their ID card, and their ID card controlled access to buildings, labs, etc., it was trivial to obtain access privileges to pretty much anywhere on campus. Want to make it look like the president of the university broke into the nuclear reactor? Look him up, write his ID number to a magnetic stripe card (we had built the hardware to do this, as well as to "fake" cards, which allowed us to simply type in numbers and generate signals, without actually making a card), and have at it.

      Again, it was brought to the attention of the university. After a failed attempt to begin disciplinary action against one of us, they recalled everyone's cards and wrote new, presumably pseudo-random identifiers to them that were not publicly accessible.

      Moral of the story? In your rush to fix one problem, make sure you don't create an even bigger one.

      • by sznupi (719324)

        You had a nuclear reactor at your university?

        • by systemeng (998953)
          Sounds like you went to Reed?
        • by Chirs (87576)

          You didn't? :)

          We actually had several reactors: a SLOWPOKE-2 fission reactor as well as a couple of Tokamak-style fusion reactors.

          They're used in the Physics and Engineering Physics departments for research.

        • by harl (84412) on Thursday January 07, 2010 @12:40PM (#30683930)

          My school did also. It's just a small one. They gave tours. It was really neat to look down into the core. Until I realized that I was looking down into the core.

          I thought most large universities had one.

        • by Tim C (15259)

          I don't know about now, but when I was there in the 90s Imperial College in London had a nuclear reactor on a different site, and a z-pinch machine in the basement of the Physics building on the main campus (which is next door to the Science Museum in central London). So, I have no trouble believing that the OP's uni had a reactor.

      • Re: (Score:3, Insightful)

        by noidentity (188756)

        Moral of the story? In your rush to fix one problem, make sure you don't create an even bigger one.

        Indeed. When you find a problem and develop a fix, you are faced with a choice: continue using the old system with mostly known problems and possibly known workarounds, or use the patched system that has one of the known problems fixed, but might have new unknown problems, possibly more severe than the old known problem, and possibly without any workarounds.

      • by NevarMore (248971)

        No, the moral is: if you're a university, you have access to a large number of people who are clever, smart, bored, and willing to do normally expensive work for free.

        Kudos for pointing out the bug. It just always baffles me why universities don't do things like using their students as cheap labor while giving them real-world examples of work in their chosen fields, or using their professors as consultants and advisors from time to time.

    • by corbettw (214229)

      What has been mentioned as well was updating the software on the card at the banking terminal - I'm surprised that this is possible at all. Essentially, that opens another huge security gap (which might have been there for a long time but went unnoticed so far).

      Since you need the card itself and the PIN with that card to update it, it's not really insecure. And having the ability to change software and/or PINs without mailing things all over the place is a pretty reasonable (and handy) ability. The alternative would be for customers to mail in their cards and wait for them to be sent back weeks later; or have two cards that can access your account at the same time, and then depend on the customer to dispose of the old card properly. Those scenarios are much more p

    • Re:Untested software (Score:4, Informative)

      by owlstead (636356) on Thursday January 07, 2010 @10:31AM (#30682114)

      Essentially, that opens another huge security gap (which might have been there for a long time but went unnoticed so far).

      It does not necessarily open up a "huge security gap", that's sensationalism. It does add significant "surface" to attack.

      GlobalPlatform cards (used by Visa/Mastercard) have always contained methods to update the Java Card (or other OS) applications on the card. Of course, this requires either signed code or a master key set. One expects this interface to be well tested and certified - and normally they are.

      Normally bank cards (and ID cards/passports) don't get updated in the field. I would not be surprised if upgrading the cards ran into serious problems.

    • Re: (Score:3, Insightful)

      by rickb928 (945187)

      Believe me, they are tested. I know. But they are not always tested well.

      - The EMV (Europay, MasterCard & Visa, also called chip & pin) specs are complex, to say the least. It took 6 months for one team I know of to get to the point that the spec writers admitted they did not know how it actually worked, and to admit that the actual data did not match the specs. They rewrote the spec based on actual data. Later, the 'controlling authorities' updated their specs to match our results. As if anyo

      • > Believe me, they are tested. I know. But they are not always tested well .. They rewrote the spec based on actual data. Later, the 'controlling authorities' updated their specs to match our results ..

        Do you have any reliable third party citations for this ?

      • by nbert (785663)

        Covering the connectors will force the reader to take the stripe if it can, and many do.

        That's exactly how many retailers in Germany currently deal with the situation. If the customer's card doesn't work they just put some sticky tape on it. The banks affected have also modified their ATMs to fall back to stripe-mode in case the chip has the bug. Of course that is just a workaround, because this "fix" doesn't work internationally.

        • by rickb928 (945187)

          Internationally, it should. In the U.S., terminals generally only read the stripe on EMV cards. There are not very many chip & pin terminals here, if any.

          We aren't deploying at this time.

          • by nbert (785663)
            So those cards would work in the US and Germany, but they wouldn't work in most of Europe and many other places.
  • by foobsr (693224) on Thursday January 07, 2010 @10:10AM (#30681862) Homepage Journal
    Technology: 2016 Bug Hits Text Messages, Payment Processing [slashdot.org]

    Experts (or excellence, YMMV) at work.

    CC.
  • by PolygamousRanchKid (1290638) on Thursday January 07, 2010 @10:23AM (#30681980)

    . . . use the magnetic strip.

    I just saw a clip on a German news channel showing a chick covering the chip on her card with a piece of clear adhesive tape. Apparently this forces a dual card reader to use the stripe. But I wasn't listening closely; I'm working, you know.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This "solution" has already clogged up several ATMs: the adhesive tape gets stuck inside the machine and blocks the reader, rendering it unusable even for those customers who have an unaffected card. Thank you very much. (Remember that ATMs are designed to prevent manipulated cards from being used, so tolerances are tight. The increased thickness could be something more sinister than tape.)

      • Oh, well. That's brilliant advice for you. At least my chip had no problems today. Given that they showed the tape thingie on N-TV, I wonder how many more machines will be clogged by the end of the day?

  • This is just an "Up yours" to everyone who, after Y2K, decided "But now we won't have to worry about 4 digit years for another hundred years, so let's just use two digit years. What could be the harm?"

  • by Anonymous Coward on Thursday January 07, 2010 @10:37AM (#30682188)

    They claim cards in other countries made by Gemalto are unaffected.

    Which countries were made by Gemalto?

  • by MiniMike (234881) on Thursday January 07, 2010 @10:39AM (#30682206)

    Gemalto accepted responsibility for the fault, 'which it is estimated will cost €300m (£270m) to rectify.'

    I hope that money isn't in a German bank...

  • The problem lies in the advanced security mechanisms (the EMV chip). The fix (currently) is to disable the advanced security features (either by taping over the chip or in the POS device). I can already hear the skimmers jubilating. They will profit greatly...
    Another problem will be that a lot of people will become wary of security features.

  • I don't know the exact nature of the bug, but I know that in the current economic crisis, managers first and foremost look to minimize costs. This has led to reductions in personnel, and ultimately to testing problems: not enough people to test the software, and the people assigned the testing job are not experienced enough to do it properly.

    I experienced this personally: I worked all throughout 2009 on a project that should have ended by the end of 2008, because the contractor had laid off

    • Re: (Score:3, Insightful)

      by Dr. Hok (702268)
      I agree. In my experience, testing is usually cut down first when it comes to cost reduction, because the bosses can't see the benefit of testing. They never learn, it seems.
  • Wow! (Score:2, Insightful)

    by Linuxmonger (921470)
    Wow, somebody stood up, said it was their fault, and took responsibility - what a rare moment in the business world. I offer my gratitude and wish them well on what will undoubtedly be a perilous journey.
