New Framework For Programming Unreliable Chips

rtoz writes "To handle future unreliable chips, a research group at MIT's Computer Science and Artificial Intelligence Laboratory has developed a new programming framework that enables software developers to specify when errors may be tolerable. The system then calculates the probability that the software will perform as intended. As transistors get smaller, they also become less reliable. In some cases that unreliability won't matter much: if a few pixels in each frame of a high-definition video are improperly decoded, viewers probably won't notice, and relaxing the requirement of perfect decoding could yield gains in speed or energy efficiency."
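
As a rough illustration of what "specify when errors may be tolerable" and "calculates the probability" could mean in practice, here is a minimal Python sketch. It is not the MIT group's framework; the per-operation reliability figure, the independence assumption, and the function names are all illustrative assumptions.

```python
# Toy "reliability budget" check -- an illustration, not MIT's actual framework.
# Assumption: each operation run on unreliable hardware succeeds independently
# with a known probability, so a chain of such operations succeeds with the
# product of the per-operation probabilities.

APPROX_OP_RELIABILITY = 0.99999  # assumed success rate of one unreliable operation


def chain_reliability(num_approx_ops: int,
                      per_op: float = APPROX_OP_RELIABILITY) -> float:
    """Probability that every approximate operation in a chain succeeds."""
    return per_op ** num_approx_ops


def meets_spec(num_approx_ops: int, required: float) -> bool:
    """Check a developer-specified tolerance, e.g. 'this pixel value must be
    computed correctly at least 99% of the time'."""
    return chain_reliability(num_approx_ops) >= required


if __name__ == "__main__":
    ops_per_pixel = 500                          # hypothetical decode cost per pixel
    print(chain_reliability(ops_per_pixel))      # ~0.995
    print(meets_spec(ops_per_pixel, 0.99))       # True: a few bad pixels are acceptable
```

The point of the summary is that composition step: once the tolerance is stated, whether the unreliable hardware satisfies it becomes a number that can be checked rather than a hope.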
  • godzilla (Score:5, Insightful)

    by Anonymous Coward on Monday November 04, 2013 @10:37AM (#45325135)

    Asking software to correct hardware errors is like asking Godzilla to protect Tokyo from Mega Godzilla.

    This does not lead to rising property values.

  • Hmmm ... (Score:5, Insightful)

    by gstoddart ( 321705 ) on Monday November 04, 2013 @10:38AM (#45325147)

    So, expect the quality of computers to go downhill over the next few years, but we'll do our best to fix it in software?

    That sounds like we're putting the quality control on the wrong side of the equation to me.

  • How on earth (Score:5, Insightful)

    by dmatos ( 232892 ) on Monday November 04, 2013 @10:48AM (#45325257)

    are they going to make "unreliable transistors" that, upon failure, simply decode a pixel incorrectly, rather than, oh, I don't know, branching the program to an unspecified memory address in the middle of nowhere and borking everything?

    They'd have to completely re-architect whatever chip is doing the calculations. You'd need three classes of "data": instructions, important data (branch addresses, etc.), and unimportant data. Only one of these could run on unreliable transistors.

    I can't imagine a way of doing that where the overhead takes less time than actually using decent transistors in the first place.

    Oh, wait. It's a software lab that's doing this. Never mind, they're not thinking about the hardware at all.
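
The separation the comment above calls for, exact instructions and control data versus error-tolerant "unimportant" data, is the usual shape of approximate-computing proposals. A hedged Python sketch of the idea (the Approx wrapper and the simulated error rate are made up for illustration):

```python
# Illustrative separation of "exact" and "approximate" data: only values
# explicitly wrapped as Approx may ever be computed on unreliable hardware.
# The Approx class and the injected bit-flip rate are assumptions for the demo.
import random


class Approx:
    """A value whose computation is allowed to be slightly wrong."""

    def __init__(self, value: int):
        self.value = value


def approx_add(a: Approx, b: Approx, error_rate: float = 1e-6) -> Approx:
    """Addition standing in for an unreliable functional unit: it may
    occasionally flip the low bit of the result."""
    result = a.value + b.value
    if random.random() < error_rate:
        result ^= 1  # a tolerable error in "unimportant" data
    return Approx(result)


def exact_add(a: int, b: int) -> int:
    """Addition for branch addresses, loop indices, and other data that must be exact."""
    return a + b


# A pixel's luminance can be Approx; the loop counter that indexes pixels cannot.
pixel = approx_add(Approx(120), Approx(7))
index = exact_add(3, 1)
```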

  • Re:How on earth (Score:4, Insightful)

    by bluefoxlucid ( 723572 ) on Monday November 04, 2013 @03:06PM (#45328509)

    Erm, that's the whole point. If we allowed high error rates with existing architectures, none of our results would be trustworthy. I imagine the most practical approach would be a fast, low-power but error-prone co-processor living alongside the main, low-error processor.

    Or, you know, the thing from 5000 years ago where we used 3 CPUs (we could do this on-package with ALUs today), all running at high speed and looking for 2 that get the same result, then accepting that result. It's called MISD architecture.
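
What the comment describes is triple modular redundancy: run the computation three times and accept any value that at least two runs agree on. A minimal Python sketch, with a deliberately flaky function standing in for an unreliable core (the function names and the 1% error rate are illustrative):

```python
# Minimal triple-modular-redundancy (TMR) sketch: run the same computation
# three times and accept any value at least two runs agree on.
import random
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")


def tmr(compute: Callable[[], T]) -> T:
    """Run `compute` three times and return the majority result; raise if no
    two results agree."""
    results = [compute() for _ in range(3)]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no two results agree; output cannot be trusted")
    return value


def flaky_square_of_seven() -> int:
    """Stand-in for a fast but unreliable core: wrong about 1% of the time."""
    return 49 if random.random() > 0.01 else 50


if __name__ == "__main__":
    print(tmr(flaky_square_of_seven))  # almost always 49
```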
