Intel / Encryption / Programming / Security

Hackers Can Now Reverse Engineer Intel Updates Or Write Their Own Custom Firmware (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they're secured. The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it's patching. The key may also allow parties other than Intel -- say a malicious hacker or a hobbyist -- to update chips with their own microcode, although that customized version wouldn't survive a reboot.

"At the moment, it is quite difficult to assess the security impact," independent researcher Maxim Goryachy said in a direct message. "But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates." Goryachy and two other researchers -- Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies -- worked jointly on the project. The key can be extracted for any chip -- be it a Celeron, Pentium, or Atom -- that's based on Intel's Goldmont architecture.
In a statement, Intel officials wrote: "The issue described does not represent security exposure to customers, and we do not rely on obfuscation of information behind red unlock as a security measure. In addition to the INTEL-SA-00086 mitigation, OEMs following Intel's manufacturing guidance have mitigated the OEM specific unlock capabilities required for this research. The private key used to authenticate microcode does not reside in the silicon, and an attacker cannot load an unauthenticated patch on a remote system."
  • The headline lies. Hackers can authenticate the firmware, bravo, but they still can't create a signature for it.

    That said, the PS3's fatal flaw was not using a fresh random nonce for each signature, so eventually the private key was recovered.

    • Did I read it wrong? (Score:5, Informative)

      by Ungrounded Lightning ( 62228 ) on Wednesday October 28, 2020 @06:20PM (#60660480) Journal

      no, this won't allow malicious microcode
      The headline lies. Hackers can authenticate the firmware, bravo, but they still can't create a signature for it.

      As I read it, what they got was the symmetric encryption/decryption key for the bulk encryption of the microcode, but not the private side of the Asymmetric signature-generation key pair needed to let them talk the chip into writing the microcode into its flash for use after the next powerup.

      This doesn't let them permanently write their own microcode changes into the chip. But it does give them TWO new abilities:
        - They can DEcrypt the intel microcode to analyze it.
        - They can ENcrypt (but not sign) their own microcode. This lets them feed it into the chip to run it for no longer than the power remains on.
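
      The split the parent describes — a leaked symmetric update key but an intact asymmetric signing key — can be illustrated with a toy model. This is NOT real cryptography (the XOR cipher and keyed-hash "signature" are stand-ins, and all key names are hypothetical); it only shows why decrypting and re-encrypting updates doesn't imply being able to get the chip to accept one:

```python
import hashlib
import secrets

# Toy model (NOT real crypto) of the situation described above:
# the symmetric key that encrypts updates has leaked, but the
# private key that signs them has not.

SYM_KEY = b"leaked-update-key"         # hypothetical leaked symmetric key
PRIVATE_KEY = secrets.token_bytes(32)  # still secret inside the vendor

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Stand-in for the update cipher; XOR is symmetric, like the real one."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def sign(data: bytes) -> bytes:
    """Stand-in for the asymmetric signature (here: a keyed hash)."""
    return hashlib.sha256(PRIVATE_KEY + data).digest()

def chip_accepts(ciphertext: bytes, signature: bytes) -> bool:
    """The 'chip' decrypts the blob, then checks the signature."""
    plaintext = xor_crypt(ciphertext, SYM_KEY)
    return signature == sign(plaintext)

# With the leaked key an attacker can decrypt a genuine update...
genuine = xor_crypt(b"official microcode", SYM_KEY)
print(xor_crypt(genuine, SYM_KEY))        # b'official microcode'

# ...and encrypt a payload of their own, but cannot forge its signature:
rogue = xor_crypt(b"rogue microcode", SYM_KEY)
print(chip_accepts(rogue, b"\x00" * 32))  # False
```

      In the real scheme the signature is verified with a public key baked into the part, so even full knowledge of the decryption key leaves the acceptance check intact.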

      And there are two ways to feed it into the chip:
        - An attached hardware flash programming tool IF the board manufacturer didn't disable a debugging fuse-ROM-emulation mode.
        - The "Computer Red Pill" exploit tool IF the manufacturer didn't disable the debugging feature it needs to run AND the attacker has Ring-zero access on the local machine to run the tool.

      That last could be chained with a remote exploit to temporarily install non-Intel microcode until the next power cycle, and with a persistent exploit to do it again after power cycles. (Not that an attacker would need it. If he had the ring-zero remote or persistent exploit in place, he could do pretty much anything he wanted anyhow, without needing to modify microcode to accomplish his dirty work.)

      There are several links in that chain, so please let me know which, if any, are wrong.

      • I'm surprised to see so little interest in this topic, even allowing for the Slashdot 2020 effect.

        I think the non-persistent exploit you describe is actually an advantage for hiding exploits. It's much harder to investigate if the malware disappears at the first attempt to move the device. I have speculated that Huawei might create volatile storage features to let the Chinese government exploit such vulnerabilities, but this one could gore the other side's ox.

        If you have someone on the inside to

      • "This lets them feed it into the chip to run it for no longer than the power remains on."

        This is the way regular microcode patches for Intel processors work. The patches are programmed into the BIOS, and on boot the BIOS loads them into the patch RAM in the processor's cores. The CPU does not contain any non-volatile memory for microcode, and eFuses are only one-time programmable.
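
        On Linux you can see which (volatile) microcode revision was loaded for each logical CPU via `/proc/cpuinfo`. The sketch below parses a sample excerpt rather than the live file so it runs anywhere; the model name and revision value are made up for illustration:

```python
import re

# Sample /proc/cpuinfo excerpt (contents are illustrative, not real data).
# On an actual Linux system you would read open("/proc/cpuinfo").read().
SAMPLE_CPUINFO = """\
processor\t: 0
vendor_id\t: GenuineIntel
model name\t: Intel(R) Celeron(R) CPU N3350 @ 1.10GHz
microcode\t: 0x38
processor\t: 1
vendor_id\t: GenuineIntel
microcode\t: 0x38
"""

def microcode_revisions(cpuinfo_text: str) -> list[str]:
    """Return the microcode revision reported for each logical CPU."""
    return re.findall(r"^microcode\s*:\s*(\S+)", cpuinfo_text, re.MULTILINE)

print(microcode_revisions(SAMPLE_CPUINFO))  # ['0x38', '0x38']
```

        Because the patch RAM is volatile, this revision reverts to whatever the BIOS (or the OS's early microcode loader) applies on the next boot.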

    • Hackers can authenticate the firmware

      That's worth something right there. Being able to authenticate firmware may help in actual trusted boot chains, compared with Intel's disaster.

      Adobe [computerworld.com.au] doesn't get to use it though.

  • Not an Intel fan, and I don't believe much of what they say. Nonetheless I feel compelled to mention that the phrase "we do not rely on obfuscation of information" is exactly what I want to hear from every single company involved with security.

    • Not an Intel fan

      I had assumed that, unless some custom firmware had granted processor cooling fans the ability to post to Slashdot...

  • by biggaijin ( 126513 ) on Wednesday October 28, 2020 @06:07PM (#60660444)

    Hackers will now be able to improve the security of their computers by patching the CPU firmware to block access to the Intel Management Engine, a disconcerting security and privacy hole in Intel processors that allows a remote system to gain low-level access to the computer. If IME can be quarantined in this way, the overall privacy and security of the system will be measurably improved.

    • Hackers will now be able to improve the security of their computers by patching the CPU firmware to block access to the Intel Management Engine, a disconcerting security and privacy hole in Intel processors that allows a remote system to gain low-level access to the computer.

      Or (if they have a little trust and/or the tools to verify) they can buy their machine from a manufacturer that offers models with the ME factory-configuration disabled, as a security feature. (For instance, I just ordered, as my next

    • by Anonymous Coward

      If you want to "block" the ME, you should probably just refrain from turning it on in the first place.

      It's mind boggling that you would intentionally go through all the steps to enable the ME, secure it with your own keys, give those keys out, and provide others access to your LAN, just so there is a "threat" you can complain about.

      • If you want to "block" the ME, you should probably just refrain from turning it on in the first place.

        It's mind boggling that you would intentionally go through all the steps to enable the ME, secure it with your own keys, give those keys out, and provide others access to your LAN, just so there is a "threat" you can complain about.

        I think you are perhaps thinking of the wrong IME. The Intel Management Engine portion of Intel processors is always on and can't be disabled by any normal methods.

    • Hackers will now be able to improve the security of their computers by patching the CPU firmware to block access to the Intel Management Engine, a disconcerting security and privacy hole in Intel processors that allows a remote system to gain low-level access to the computer. If IME can be quarantined in this way, the overall privacy and security of the system will be measurably improved.

      This does not allow them to do anything useful. Can you look at the microcode now? Sure. Can you roll it back? Not with this key. Can you replace it? Not with this key. The original INTEL-SA-00086 is the real security hole, and it is in the ME itself, though you can close it with firmware updates. That is why they used the architecture that they did - nobody is updating that firmware anymore and that hole is still wide open. Also most system vendors have an ability to prevent firmware rollback that is som

  • Even the newbies who actually RTFA don't know if we should panic or not.

    • The article suggests this isn't a big security issue *unless it's used along with other vulnerabilities*. Well, that's kind of a dumb thing to say, because vulnerabilities are almost always used together.

      However, in this case one has to be able to execute code in kernel mode in order to (temporarily) update the microcode. If a bad guy is in your kernel, they already own your TCB and you are completely hosed anyway. So there is almost no security impact from this in terms of somebody updating the microcode.

  • Now when you get compromised, the rootkit has the possibility of burying itself beyond the reach of any anti-virus software. A practical worst case is that a rootkit gains administrative-level privileges and installs an "early boot" driver that is then written to UEFI. From then on, at boot the driver loads the IME rootkit. It should be possible to prevent any program from finding the rootkit on disk or reprogramming the IME by intercepting requests and forging a reply.

    The real worst case scenario is

  • "customized version wouldn't survive a reboot" - not true
