Curl Warns GitHub About 'Malicious Unicode' Security Issue (daniel.haxx.se)

A Curl contributor replaced an ASCII letter with a Unicode alternative in a pull request, writes Curl lead developer/founder Daniel Stenberg. And not a single human reviewer on the team (or any of their CI jobs) noticed.

The change "looked identical to the ASCII version, so it was not possible to visually spot this..." The impact of changing one or more letters in a URL can of course be devastating depending on conditions... [W]e have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added a CI job that scans all files and validates every UTF-8 sequence in the git repository.

In the curl git repository most files and most content are plain old ASCII so we can "easily" whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. In order to drive this change home, we went through all the test files in the curl repository and made sure that all the UTF-8 occurrences were instead replaced by other kind of escape sequences and similar. Some of them were also used more or less by mistake and could easily be replaced by their ASCII counterparts.

The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us... We want and strive to be proactive and tighten everything before malicious people exploit some weakness somewhere but security remains this never-ending race where we can only do the best we can and while the other side is working in silence and might at some future point attack us in new creative ways we had not anticipated. That future unknown attack is a tricky thing.

In the original blog post Stenberg complained he got "barely no responses" from GitHub (joking "perhaps they are all just too busy implementing the next AI feature we don't want.") But hours later he posted an update.

"GitHub has told me they have raised this as a security issue internally and they are working on a fix."


Comments Filter:
  • If it can refine the difference to highlight which word in a line was different, maybe it could use a different color (if moving to a 3-color process [britannica.com] isn't too much more expensive) for which characters in that line are different. Or have a checkbox to temporarily highlight non-ASCII UTF-8 characters.
    • by drnb ( 2434720 )

      If it can refine the difference to highlight which word in a line was different, maybe it could use a different color (if moving to a 3-color process [britannica.com] isn't too much more expensive) for which characters in that line are different. Or have a checkbox to temporarily highlight non-ASCII UTF-8 characters.

      Various diff utilities do highlight things at the character level.
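For instance, Python's difflib can pinpoint the exact character that changed even when the two lines render identically. Here U+0261 (Latin small letter script g) stands in as an illustrative homoglyph:

```python
import difflib

# Two URLs that look the same but differ in one character.
a = "https://github.com/"
b = "https://\u0261ithub.com/"  # U+0261 LATIN SMALL LETTER SCRIPT G

# SequenceMatcher reports character-level edit operations between the lines.
sm = difflib.SequenceMatcher(a=a, b=b)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag != "equal":
        print(f"{tag}: {a[i1:i2]!r} -> {b[j1:j2]!r} at index {i1}")
```

A diff viewer could color exactly that one-character "replace" span.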

  • However, looking for these sort of shenanigans seems like something that could've (and maybe should've) been at least semi-automated a couple decades ago - search for characters outside the typical ASCII range and flag those parts for human review.

    • However, looking for these sort of shenanigans seems like something that could've (and maybe should've) been at least semi-automated a couple decades ago - search for characters outside the typical ASCII range and flag those parts for human review.

An automated review is not that difficult. For each ASCII character there can be a list of visually similar characters. For example, a Latin (ASCII) 'a' would have a Cyrillic 'a' on its list.
      U+0061: U+0430, ...

Flagging every non-ASCII character would also catch characters that do not look like any ASCII character, which would read as false positives. Or maybe those get lower-priority warnings, with visually similar characters raised as higher-priority warnings.
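The tiered scheme described above can be sketched with a small look-alike table. The table here is illustrative, not the full Unicode confusables data:

```python
# Illustrative homoglyph check, not an exhaustive confusables table: known
# look-alikes of ASCII letters are high priority, other non-ASCII is low.
CONFUSABLES = {
    "a": "\u0430",        # Cyrillic a
    "e": "\u0435",        # Cyrillic e
    "o": "\u043e\u03bf",  # Cyrillic o, Greek omicron
    "c": "\u0441",        # Cyrillic es (looks like Latin c)
    "p": "\u0440",        # Cyrillic er (looks like Latin p)
}
LOOKALIKES = {ch for chars in CONFUSABLES.values() for ch in chars}

def scan(text: str) -> list[tuple[int, str, str]]:
    """Return (index, char, severity) for every non-ASCII character."""
    hits = []
    for i, ch in enumerate(text):
        if ord(ch) < 128:
            continue
        severity = "high" if ch in LOOKALIKES else "low"
        hits.append((i, ch, severity))
    return hits
```

In practice one would generate the table from the Unicode consortium's published confusables data rather than maintaining it by hand.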

  • Programs should be written in 7-bit ASCII like in the good old days.
  • Vertical double quotes.

    Closing double quotes. Opening double quotes.

    Homoglyphs.

    Arbitrary number of bytes per glyph.

    If it ain't ascii it isn't worth expressing in bytes.

    • Unicode is fucking ridiculous and so are standards bodies who seem to be entirely composed of zero experts and just industry insiders. Javascript is even worse and the web as a whole is getting progressively worse.

    • If it ain't ascii it isn't worth expressing in bytes.

      If you exclusively speak American then you can say everything is US ASCII ... but many who, reasonably, want to express themselves in their own language will want other characters. And "everything" is not entirely true even for Americans, e.g. 1/100 of a dollar is a cent, which is U+00A2 - which Slashdot will not display correctly.

    • Yeah, but to be fair, Unicode was invented to put those in.
      Also to be fair, and I don't want to be fair, Unicode and multilanguage websites, where the content owners hounded me forever to get the orthography right in 7 languages, was a source of significant and ongoing pain and irritation... that is actually the whole point of it. Apparently, not everyone speaks ASCII.
      • by drnb ( 2434720 )

        Yeah, but to be fair, Unicode was invented to put those in.

        More specifically, Unicode was invented to provide a standard encoding for all living languages. Anything currently used in books, magazines, newspapers, etc.

        It was later expanded to include dead languages to help researchers.

        • And poo emojis to help retards.

        • Oh for sure, psst, I'm quite aware of what Unicode offers... it's great as a user :-) but the content didn't display well in browsers at some point in what seems like the ancient past now, like 20 years ago?.. It was probably me, ha ha, true, but in my defence I had a lot on my plate at that time and I got the server-side database and stuff working well enough, and perhaps the browsers hadn't really caught up to displaying Unicode, so .. a common gap between minds, academics, like subject matter people, and technology ... o
    • by drnb ( 2434720 )

      Arbitrary number of bytes per glyph.

      Yes and no. That's mostly a result of the encoding, UTF-8 vs UTF-32. Although there would still be some glyphs that are composed from multiple code points. To oversimplify, imagine two code points, 'A' and a combining grave accent, combining into a single accented-A glyph.

      FWIW, UTF-8 is not difficult to decode, so doing comparisons or detecting malformed UTF-8 isn't too much work. As part of defensive programming I check for proper UTF-8 encoding on any inputs. It's a write-once, use-many-times sort of thing.

      If it ain't ascii it isn't worth expressing in bytes.

      Bytes, i.e. the UTF-8 encoding of code p
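The two points in the comment above, combining code points and strict input validation, are easy to demonstrate. This is a small sketch, not anyone's production check:

```python
import unicodedata

# Two ways to write an accented A: one precomposed code point, or 'A' plus a
# combining grave accent. They render identically but compare unequal.
precomposed = "\u00c0"   # LATIN CAPITAL LETTER A WITH GRAVE
decomposed = "A\u0300"   # 'A' + COMBINING GRAVE ACCENT
assert precomposed != decomposed
# NFC normalization composes them into the same string.
assert unicodedata.normalize("NFC", decomposed) == precomposed

def is_valid_utf8(data: bytes) -> bool:
    """Defensive check: strict decoding rejects malformed byte sequences."""
    try:
        data.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False
```

Normalizing before comparing, and rejecting malformed bytes at the boundary, closes off two separate classes of confusion.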

  • Wikipedia used to have sockpuppet accounts that spoofed admin usernames until they implemented the antispoof feature. Then there was the spoofing from punycode domain names, and Colombian .co domains spoofing .co.uk.
  • Many traditional distros still ship unusably old versions of some packages - due to some network dependency they literally don't work anymore.

    Some are buggy in ways already fixed upstream (e.g. the nvme tool) and just don't work. "Wait a year and we'll ship a version that works".

    This pushes people toward upstream packages, which oftentimes come with update scripts that run as root.

    These would be an ideal place for a malicious "contributor" to put in an update URL he controls.

    It would be better for the distros to remove t

  • That's indeed one of the use cases that an AI can catch more easily than a human.

    Patch (Simplified as I couldn't copy&paste from the screenshot):
    --- test1.txt 2025-05-17 20:56:18.097357631 +0200
    +++ test2.txt 2025-05-17 20:56:33.357317426 +0200
    @@ -1 +1 @@
    -Find the file at https://githubusercontent.com/...
    +Find the file at https:///ithubusercontent.com/mozilla-firefox/file.json

    Instruction: "Describe the changes done in this patch"
    Input: (the patch)
    AI:
    In this patch, the following changes were made:

    1. **Re

    • by allo ( 1728082 )

      Also note that the LLM did get the actual code point (first question) and the script (second question) wrong. To the AI's defense: It was only a small 12B model.

    • That's indeed one of the use cases that an AI can catch more easily than a human.

      A very small amount of non-AI code could also catch it. Not everything needs AI.

      • And not everything that's called "AI" actually is "AI" (except to pedants).

        • by drnb ( 2434720 )

          And not everything that's called "AI" actually is "AI" (except to pedants).

          I would add that not everything that is AI is necessarily machine learning. Some of it is old-fashioned humans developing algorithms and stitching them together. AI is really about a family of problems, not necessarily a particular approach to a solution.

          • I would add that for marketing purposes, matrix inversion can be an AI algorithm.

            There are times when I wonder if the fraction of people who believe computers are merely another form of magic is higher than I presumed.

            • by drnb ( 2434720 )
              The Turing test is more of a test of human gullibility than of computer intelligence. :-)
      • Yea, this. It'll also be more reliable, harder to subvert, have no hallucinations and so on. And it could run on a potato, not need a 16-32GB GPU, and even then be like "oh and it's only a small 12B model".
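The "very small amount of non-AI code" suggested upthread could be as simple as flagging any added line in a unified diff that contains a non-ASCII character. A sketch, with U+0261 (a Latin script g) standing in for the homoglyph:

```python
# Minimal non-AI check: flag every added diff line containing non-ASCII.
def suspicious_lines(patch: str) -> list[str]:
    return [
        line for line in patch.splitlines()
        if line.startswith("+") and not line.isascii()
    ]
```

No model, no GPU; a reviewer only has to look at the handful of lines it flags.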
