Python PHP Perl Programming Ruby

Did Programming Language Flaws Create Insecure Apps? (bleepingcomputer.com) 100

Several popular interpreted programming languages are affected by severe vulnerabilities that expose apps built on these languages to attacks, according to research presented at the Black Hat Europe 2017 security conference. An anonymous reader writes: The author of this research is IOActive Senior Security Consultant Fernando Arnaboldi, who says he used an automated software testing technique named fuzzing to identify vulnerabilities in the interpreters of five of today's most popular programming languages: JavaScript, Perl, PHP, Python, and Ruby.

Fuzzing involves providing invalid, unexpected, or random data as input to a software application. The researcher created his own fuzzing framework, named XDiFF, which broke each language down into its core functions and fuzzed each one for abnormalities. His work exposed severe flaws in all five languages, such as a hidden flaw in PHP constant names that can be abused to perform remote code execution, and undocumented Python methods that can be used for OS code execution. Arnaboldi argues that attackers can exploit these flaws even in the most secure applications built on top of these programming languages.
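
The summary doesn't reproduce the tool itself, so here is only a rough, hypothetical sketch of the general technique it describes: pick some core library functions, feed each one malformed input, and flag anything other than a clean, documented failure. The target list and the set of "expected" exceptions below are illustrative assumptions, not part of XDiFF.

    # Hypothetical mini-fuzzer illustrating the approach described above (Python).
    # TARGETS and the "expected" exception types are assumptions for illustration.
    import random
    import string
    import traceback

    TARGETS = [int, float, complex]   # stand-ins for a language's "core functions"

    def random_payload(max_len=64):
        alphabet = string.printable + "\x00\xff"
        return "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, max_len)))

    for func in TARGETS:
        for _ in range(1000):
            payload = random_payload()
            try:
                func(payload)
            except ValueError:
                pass   # a clean, documented failure; not interesting
            except Exception:
                # anything else is an abnormality worth a closer look
                print("unexpected behaviour from %r with %r" % (func, payload))
                traceback.print_exc()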

This discussion has been archived. No new comments can be posted.

  • So did bad programming.
  • by RhettLivingston ( 544140 ) on Sunday December 10, 2017 @03:26PM (#55711723) Journal

    This article is either intentionally sensationalist or written by someone who just has no clue.

    The research presented found flaws in popular interpreters for a few interpreted languages. This is little different from finding flaws in libraries, and in fact many of these flaws were in the libraries.

    It is a very important distinction. Fixing a problem in a language usually takes negotiation and can take years. Fixing a problem in an interpreter often takes days.

    People whose idiot managers read this and are panicking at this moment will spend this week explaining to them why it doesn't mean they need to rewrite all of their code in another language to fix their problems.

  • by Anonymous Coward

    Let's assume for example that the languages themselves aren't perfect. Developers working in these languages (I am not) will write code using those languages' standard libraries. So unlike C and C++, where developers tend to constantly rewrite the standard libraries (see Qt, glib, etc.) as well as compile non-library code into their programs, which then have to be recompiled to correct flaws, when security flaws are found in code written in these languages, updating

  • by Anonymous Coward on Sunday December 10, 2017 @03:34PM (#55711743)

    But the exploits require shell-level access to launch the interpreters. When you have shell access, it's not surprising that you can execute an arbitrary shell command.

    • I don't get this article. They're not even fuzzing the interpreters, but rather the STANDARD LIBRARIES. How is this remotely interesting? Passing unsanitized input to arbitrary standard library functions, what could go wrong?? *facepalm*

  • Pounding the hell out of functions with random (and thus lying) input is one of my best tricks.

    How am I gonna save the Enterprise if everyone knows the secret?

  • Turing complete (Score:3, Insightful)

    by hackwrench ( 573697 ) <hackwrench@hotmail.com> on Sunday December 10, 2017 @04:03PM (#55711839) Homepage Journal
    It's almost like being Turing complete opens you up to being insecure.
  • With "traditional" attacks such as buffer overflows, perhaps. Newer languages abstract away having to do things like manually allocate string sizes, making buffer overflows less possible. That's why we continue to improve these languages and develop new ones, and I expect this process to continue as newer attacks are developed against existing languages.
  • by fahrbot-bot ( 874524 ) on Sunday December 10, 2017 @04:19PM (#55711893)

    Most of us are probably too young to remember the TECO [wikipedia.org] editor, from the early 1960s, but ...

    It has been observed that a TECO command sequence more closely resembles transmission line noise than readable text. One of the more entertaining games to play with TECO is to type your name in as a command line and try to guess what it does. Just about any possible typing error while talking with TECO will probably destroy your program, or even worse - introduce subtle and mysterious bugs in a once working subroutine.

    Also, I assert that there are no language flaws in Perl, just obscure and/or advanced usages, some of which may be dangerous to you, others or the planet.

    • by Anonymous Coward

      s/TECO/vim/ and it's essentially saying the same thing: try to guess what happens when you type your name as a command in vim

    • Just about any possible typing error while talking with TECO will probably destroy your program

      I'm pretty sure that's mostly the case with contemporary programming environments as well. Point to a random spot, replace the character in that spot with a random one, and observe the glorious result. We don't have any self-repair yet.

    • I'm old enough to remember TECO, the world's most terrifying text editor. It never actually did anything terrible to me, but living one keystroke away from disaster eight or ten hours a day was stressful.

  • by zifn4b ( 1040588 )

    Programming languages don't write insecure applications, people do.

    Next, I suppose, we'll arrive at this: if someone builds a structurally unsound apparatus with a hammer, nails and wood, it's the hammer's fault for being a bad tool, and we'll need to consider not using hammers anymore?

    Humans have an uncanny ability to evade culpability in clever ways.

    • > Humans have an uncanny ability to evade culpability in clever ways.

      Well you seem to be saying that language designers / implementers have no culpability at all.

      For your metaphor, what if the person building the house is using a rubber mallet, rusted nails and broken wood?

      People can write bad programs in any programming language, but some programming languages have flawed designs that make bad behaviour much more likely.

      • by zifn4b ( 1040588 )

        Well you seem to be saying that language designers / implementers have no culpability at all.

        For your metaphor, what if the person building the house is using a rubber mallet, rusted nails and broken wood?

        No, you missed the point. If someone builds something unstable with a hammer and nails, is the manufacturer of the hammer and nails liable for it? NO. A thousand times NO. Take your nanny state and shove it up your liberal ass.

        • WTF?! Seriously, we're making metaphors of programming languages as tools and programs as houses.
          You make the point that it's the builder's responsibility to make a good house. (I got your point, it's not a hard one to understand).
          I make the point that it's worth using good tools, because some programming languages make it hard to write secure code. I could keep stretching the metaphor and say it's up to the builder to know what good tools are if that would make you happier.

          Jumping to "Take your nanny state

  • So if I were to write a straight 'hello world' app in these languages, it could be exploited. Any proof?

  • by pthisis ( 27352 ) on Sunday December 10, 2017 @05:04PM (#55712143) Homepage Journal

    Fuzzing is great, but he doesn't seem to understand what a language flaw is.

    In the case of Python, he's found two methods in libraries that can call shell commands. Leaving aside that this would be a library issue rather than a language issue, there's no evidence that it's even that.

    Python explicitly doesn't have sandboxing. Like most languages (including C, C++, etc), the documented behavior is that if you control the program and environment then you're fully allowed to import subprocess or os and run whatever you want. You don't need to find "hidden" ways to run a subprocess, you can directly "import subprocess" and run stuff.

    This is doubly true because of the nature of the modules investigated. The first "flaw" is that mimetools has a deprecated "pipeto" method that lets you pipe to arbitrary commands. But mimetools is already well-known to expose os access in millions of ways (most obviously, it imports and exposes os, so if for some bizarre reason you want to avoid importing os yourself, you can simply run "mimetools.os.popen" directly); no competent programmer would expect otherwise.

    The second "flaw" is that pydoc runs a pager program, which lets you run an arbitrary command if you control the program's environment. Of course, the documentation states explicitly that the specified pager program will be used. It's unclear what part of the behavior here he finds even surprising. And, again, the pydoc module imports and exposes "os" in exactly the same way that mimetools does. (A quick sketch of the documented routes follows.)
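
    For illustration, a minimal sketch of those documented routes; it assumes Python 2 (mimetools was removed in Python 3) and is not the researcher's code:

      # The ordinary, documented way to run a command; nothing hidden about it.
      import subprocess
      subprocess.call(["id"])

      # The "hidden" mimetools route is just the os module re-exported:
      # mimetools does "import os", so mimetools.os.popen is plain os.popen.
      import mimetools
      print(mimetools.os.popen("id").read())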

  • not a flaw in perl (Score:3, Informative)

    by Anonymous Coward on Sunday December 10, 2017 @05:11PM (#55712183)

    I haven't looked at the other languages, but in the case of Perl it's not a flaw in the interpreter; it's a flaw in a specialised library module (ExtUtils::Typemaps::Cmd) that is used to build other modules, i.e. one that is run only when building and installing a third-party module. The installer for such a module will typically hard-code the module name it passes to ExtUtils::Typemaps::Cmd::embeddable_typemap(). If someone wanted to modify the installer to run a command rather than load a file, they could just directly write 'system "rm -rf /"' rather than the elaborate ExtUtils::Typemaps::Cmd::embeddable_typemap('system "rm -rf /"'). And if they can modify the install script, you've already lost.

    Also, I can't find any in-the-wild use of that function.

  • Imagine code that actually helped with security rather than opened a back door and trap door.
  • Unskilled "engineers" pumped through bad schools to cobble together some barely functional shit to make money for venture capitalists and industrialists in a huge economic bubble made insecure apps.

    Another factor is engineered vulnerabilities to assist mass surveillance.

    Basically our entire society has become a bubble that is going to pop within 50 years.

  • Obvious Answer (Score:4, Interesting)

    by Required Snark ( 1702878 ) on Sunday December 10, 2017 @07:23PM (#55712797)
    Everything should be coded in Haskell. It has the best compile-time error checking on the planet. (Sorry, Ada, but Haskell is more advanced.)

    Note for the dim bulbs: this comment is meant to be a joke. The original article was foolish, and suggesting Haskell shows how ridiculous it is in the first place.

  • The examples given mostly have nothing to do with the languages having vulnerabilities at all (I only read the Python section, as it's the one I'm most familiar with).
    For goodness' sake, none of those were privilege escalation or remote access attack vectors. Yes, if you allow the user to specify their environment variables (like PAGER and EDITOR), those programs will get executed *as that user*, which is known behaviour (a quick sketch follows).
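
    A minimal sketch of that known behaviour, assuming a POSIX shell and an interactive terminal (this is not the researcher's proof of concept; the PAGER value here is made up):

      # pydoc's documented behaviour: long output is piped to the program named
      # in $PAGER, so whoever sets the variable chooses what gets run, but it
      # runs as that same user, so nothing is escalated.
      import os
      import pydoc

      os.environ["PAGER"] = "id; cat"   # hypothetical "malicious" pager command
      # From an interactive terminal, pydoc pipes the text through $PAGER via the
      # shell, so 'id' runs first and 'cat' then displays the help text.
      pydoc.pager("\n".join("line %d" % i for i in range(200)))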

    • Also the case for the Perl example, which, I kid you not, posits that if you have access to the command line such that you can type in a Perl one-liner, there's a Perl library function one of whose parameters can be tricked into shelling out to (you guessed it)... the command line.

      The example cited is this:

      perl -e "use ExtUtils::Typemaps::Cmd;print embeddable_typemap(\"system 'id'\")"

      ... which embeds the output of the 'id' command in the middle of the error message it returns.

      And yet
  • See subject: C shows this in buffer overflows due to null-terminated string use and functions like sscanf (IIRC this had big problems) having to be redone.

    Neither is a problem in Pascal/Object Pascal (length is built into strings).

    Had a troll "bug me" today https://it.slashdot.org/comments.pl?sid=11461611&cid=55711831/ [slashdot.org] & it "hits on" this part & what did I do to AVOID programming language issues (especially in stringwork, which my program noted there HUGELY operates in)?

    Something from a book I re

  • by ka9dgx ( 72702 ) on Monday December 11, 2017 @01:00AM (#55713885) Homepage Journal

    If your OS doesn't require you to specify what I/O is allowed for a program when you run it, you're never going to have a secure system. We need capability-based security, and we will be spinning our wheels until we get it.

  • by Anonymous Coward

    This looks a lot like someone just wants a paper they can refer to when they sell you their "NextGen Cyber Security Protection Package (tm)" for a couple grand a month. I mean, those "flaws" are certainly not language flaws, they aren't even interpreter flaws, and to me it looks like they aren't even flaws at all.
    Python: libraries provide relatively unknown functions that allow executing arbitrary code; don't feed them user input and you're fine
    Perl: see Python
    PHP: if you feed user input to shell_exec wit

  • You're already running an arbitrary PHP script on the machine. What does executing arbitrary machine code through the PHP interpreter give you that you don't already have?
    The ability to escape PHP's poor sandboxing features? Don't make me laugh.

  • by Anonymous Coward

    The biggest flaws are in OSes and product strategies. Microsoft is the obvious poster boy for this. Their product strategy of binary backwards compatibility is easily responsible for 99% of the exploits on Windows. And the pathetic part is that people have been predicting nearly every type of exploit that has hit Windows since 1995, simply based on architectural design issues caused by this product strategy. Windows has only recently become half-secure as it's abandoned that fundamental design f
