Java, PHP, NodeJS, and Ruby Tools Compromised By Severe Swagger Vulnerability (threatpost.com) 97

"Researchers have discovered a vulnerability within the Swagger specification which may place tools based on NodeJS, PHP, Ruby, and Java at risk of exploit," warns ZDNet's blog Zero Day, adding "the severe flaw allows attackers to remotely execute code." Slashdot reader msm1267 writes: A serious parameter injection vulnerability exists in the Swagger Code Generator that could allow an attacker to embed executable code in a Swagger JSON file. The flaw affects NodeJS, Ruby, PHP, Java and likely other programming languages. Researchers at Rapid7 who found the flaw disclosed details...as well as a Metasploit module and a proposed patch for the specification. The matter was privately disclosed in April, but Rapid7 said it never heard a response from Swagger's maintainers.

Swagger produces and consumes RESTful web services APIs; Swagger docs can be consumed to automatically generate client-server code. As of January 1, the Swagger specification was donated to the Open API Initiative and became the foundation for the OpenAPI Specification. The vulnerability lies in the Swagger Code Generator, and specifically in that parsers for Swagger documents (written in JSON) don't properly sanitize input. Therefore, an attacker can abuse a developer's trust in Swagger to include executable code that will run once it's in the development environment.
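
To make the failure mode concrete, here is a deliberately simplified sketch in Java (hypothetical class and payload, not Swagger Codegen's actual code and not the exact payload from the Rapid7 advisory): a generator that splices a spec field verbatim into a comment in the generated source lets a crafted value close that comment and smuggle extra code into the output.

// Hypothetical, simplified illustration of the bug class -- not Swagger Codegen's real code.
public class NaiveGenerator {

    // Emits a Java source fragment, splicing the spec's "description" field
    // straight into a Javadoc comment with no sanitization.
    static String emitModelClass(String className, String description) {
        return "/** " + description + " */\n"
             + "public class " + className + " {}\n";
    }

    public static void main(String[] args) {
        // A crafted description from a malicious Swagger JSON file: it closes the
        // Javadoc comment early and injects an extra class into the generated source.
        String malicious = "pet model */ class Injected { static { System.out.println(\"owned\"); } } /*";
        System.out.println(emitModelClass("Pet", malicious));
        // The emitted source now contains Injected, whose static initializer runs
        // whenever the generated code is compiled into a project and its classes are loaded.
    }
}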

  • by Anonymous Coward

    Never heard of it and not in use in major areas. Nothing to see here. Just overhyped.

  • Congratulations to the Swagger team on achieving their impressive goal of officially codifying every RESTful anti-pattern ever invented, and let's wish them all the best in formally implementing every known security hole next.
  • by ooloorie ( 4394035 ) on Saturday June 25, 2016 @11:16AM (#52388497)

    I hadn't seen Swagger before, but it looks like a nicer design than previous web service description languages.

    The "vulnerability" related to Swagger in some tools that the REST API specification (in Swagger format) into a library that talks to that API. Specifically, malicious specifications can inject code into the library. I don't think this is a major problem in practice. These translation tools are invoked by people who want to write clients for specific services; usually, that means that you know the service provider and understand your trust relationship. In addition, this is not a fully automatic process, since you'll be programming against the library that the tool generates anyway.

    Keep in mind that the alternative to a REST specification is that the service provider gives you a bunch of REST client libraries, and it's far easier to hide malicious code in those client libraries than in a REST specification.

    I don't think it's fair to call this a significant "vulnerability", although it might still be nice if Swagger tools detected these cases and alerted the developer to it.
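
    Something as crude as the following would already flag the obvious cases -- a hypothetical pre-generation check, sketched here for illustration, not anything the Swagger tools actually ship:

    import java.util.List;

    // Hypothetical tripwire: warn when a spec field could break out of a string
    // literal or comment in generated source. Not exhaustive, just a first line of defense.
    class SpecLinter {
        private static final List<String> SUSPICIOUS = List.of("*/", "\"", "<script", "${");

        static void warnIfSuspicious(String field, String value) {
            for (String marker : SUSPICIOUS) {
                if (value.contains(marker)) {
                    System.err.println("WARNING: field '" + field + "' contains '" + marker
                            + "' -- review the spec before generating code from it.");
                }
            }
        }

        public static void main(String[] args) {
            warnIfSuspicious("info.description", "harmless text */ class Injected {} /*");
        }
    }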

    • Re: (Score:3, Interesting)

      Swagger is nice but it is a workaround for what is really a mess: REST. HTTP was not intended to be used as an application-level API, nor were XML or JSON. These are all bastard approaches. Compare these with the elegance of remote procedure call approaches such as CORBA and gRPC, where one specifies a method interface just like in a normal computer language, and the marshalling and unmarshalling code is generated by a compiler. The compiler is then the subject of rigorous validation based on the language s
      • by darpo ( 5213 )
        "The fundamental problem with RPC is coupling. RPC clients become tightly coupled to service implementation in several ways and it becomes very hard to change service implementation without breaking clients" http://stackoverflow.com/a/151... [stackoverflow.com]
        • Yes, that has been the argument. But it has proven to be a red herring. Systems are semantically coupled if they interoperate. The syntax of that coupling can either help to maintain consistency, or not. REST does not help: it is entirely up to the programmer to check everything and make sure through tons of testing that everything still works. In contrast, a compiler-based approach helps enormously. The real way to decouple systems is to _logically_ decouple them, so that, e.g., backward compatibility is i
          • by hhas ( 990942 )
            Doh! My reply to you just got posted as AC. HTH (Though you can still tell by the length, I'm sure.)
        • "The fundamental problem with RPC is coupling. RPC clients become tightly coupled to service implementation in several ways and it becomes very hard to change service implementation without breaking clients"

          Which is why RESTful HTTP isn't RPC, because we already know it's the wrong tool for this job. The fundamental problem is that today's web has an entire programming cult[ure] raised on OOP to the point where they're pathologically incapable of imagining any kind of interaction model except synchronous local message passing, so instead of bothering to RTFM until they understand correctly how REST works, the lazy toads simply reinterpret "REST" to mean what they already know. Which is 180 degrees opposite of what

      • by hhas ( 990942 )

        REST isn't a mess, it's actually a very clean, logical, and elegant state-centric approach to interconnecting vast numbers of highly heterogeneous state machines. It's the entire web programming "profession"'s atrocious inability to get what REST actually means, as opposed to what they reinterpret it to mean, to the point where their total misconceptions are now raised to the status of "industry standard".

        Protip: Any software developer who uses the phrase "REST API" has a deep detailed technical understanding of R

        • Well I must misunderstand REST then! Although every single REST project I have been on has treated REST as an API syntax. But is this splitting hairs? I know that the concept is that one transfers state from one place to another. But in practice the paradigm is driven by the UI framework (e.g., Angular, etc.), and given the component-oriented React patterns of today, one is not transferring state: one is making API calls. That is what apps need, and they are using REST because of AJAX, and it has p
          • Re: (Score:3, Insightful)

            by hhas ( 990942 )

            Well I must misunderstand REST then!

            Very likely. It's not about "APIs"; never has been. The web was originally designed to be a vastly distributed document publishing system, where everyone could read and everyone could write. The first "web browser" was actually a WYSIWYG editor, kinda like Word except that instead of opening and saving documents on your local drive it opened and saved them across the internet. HTTP was the transport mechanism for that, and crucially it made no statements on what those documents were or how they were encoded;

            • "The first "web browser" was actually a WYSIWYG editor" Which one was that? Are you referring to Mosaic? If so, I did not know that it had editing capability. As I recall, REST came along as a response to SOAP, which was overly complex for what people were using it for. The most common feature of SOAP was SOAP-RPC, so it was natural for people to want to use REST for that. I 100% agree that HTTP is being misused - and REST as ell. What we need is a protocol other than HTTP for remote procedure calls. Unf
              • by hhas ( 990942 )
                Good grief, and I've AC'd my other reply too! Hey, I blame Firefox for crashing on me halfway through (it's trying to keep you from the Truth!).
            • by eWarz ( 610883 )
              What do you think that JSON documents are for?
              • by hhas ( 990942 )
                Encoding information? JSON's just another serialization format; it says nothing about what information a document should contain or how it is organized (beyond being arranged in a tree shape, of course). Some folks prefer it over XML cos it's more lightweight and trivial to work with in JS, though of course there's nothing to stop a RESTful resource offering up representations of itself in both encodings - e.g. application/vnd.initrode.employee.v2+xml and application/vnd.initrode.employee.v2+json - or any o
      • by hey! ( 33014 ) on Saturday June 25, 2016 @12:44PM (#52388927) Homepage Journal

        Swagger is nice but it is a workaround for what is really a mess: REST. HTTP was not intended to be used as an application-level API, nor were XML or JSON. These are all bastard approaches.

        This is (a) a matter of opinion and (b) completely irrelevant to the bug in question, which is a problem with input sanitization, a perennial source of security bugs recognized as far back as 1974 in the first edition of Elements of Programming Style.

        Now as to the bastard-y of HTTP as an API -- having actually read RFC 2616 myself and implemented some of it in raw TCP sockets for some very early mobile-to-server data connections, I beg to differ. REST is precisely what HTTP was designed to do. People who didn't read the RFCs simply went with what seemed simplest to them, which by and large was using GET and PUT interchangeably since they seemed to be just two ways of doing the same thing. That was very common practice in 2000 when Roy Fielding wrote his famous doctoral dissertation, arguably the most significant contribution of which is simply pointing out what had been the intended semantics of HTTP all along.

          Having tried my hand at SOAP and XML-RPC, I can also say why REST over JSON has been so successful: they make the programmer's job easier, which is what architecture is supposed to do.

        "Architecture" has almost become a synonym for making things awkward and unnecessarily complicated, but what good architecture does is separate concerns so you don't have to deal with overwhelming amounts of detail at once. Of course good architecture has never stopped anyone from bolluxing themselves up and handing a steaming pile of logic turds over to someone else.

        • You read a modern spec? Stop the presses.

          Seriously, what HTTP was designed to do is described here https://tools.ietf.org/html/rf... [ietf.org]

          Read that one and you'll understand why people think REST is almost silly.

          • by eWarz ( 610883 )
            Did you even read that RFC you linked? You'll also note that's a 20-year-old RFC. HTTP 1.0. At minimum you should read the HTTP 1.1 spec before you start trash-talking others.
      • Compare these with the elegance of remote procedure call approaches such as CORBA and gRPC, where one specifies a method interface just like in a normal computer language, and the marshalling and unmarshalling code is generated by a compiler.

        CORBA came out in 1991 and SunRPC in 1988. REST didn't arrive until 2000. So, the history of this is that people tried to make CORBA and RPC (and Microsoft's versions of the same) work for about a decade before they gave up and switched to REST.

        CORBA's and gRPC's IDLs a

        • Yes, and before SunRPC, I remember that Apollo Computer had an RPC toolkit. Actually, CORBA did work - quite well. I used it a lot back then. XML-based messaging came along - way before Internet scale was a concern - because it went over HTTP, thus "tunneling" through firewalls. CORBA required you to open ports, and sysadmins would not do that. From there, the nightmare of WSDL emerged, and then REST replaced WSDL, and programmers sighed with relief because it was so much simpler. By that time, the OMG had
          • gRPC and REST don't even solve the same problem. Google is using RPC extensively, but mainly in the context of their own internal distributed systems, with an army of testers and developers, massive integration testing, and a single codebase. REST is for highly heterogeneous systems, languages, developer skills, a huge range of latencies, and numerous failure and security models. RPC can be a useful tool for the kinds of distributed systems Google is building to support their services; it is not a good tool

            • It seems to me that REST tries to solve a problem that does not exist. Programmers want RPC. The notion of REST is too abstract for most programmers. Also, Google's internal systems are Internet-scale - they are the ones providing that scale! Of late, Google has turned away from several current cherished paradigms, including REST and dynamic languages, returning to older concepts that have stood the test of time.
              • Of late, Google has turned away from several current cherished paradigms, including REST and dynamic languages, returning to older concepts that have stood the test of time.

                Google hasn't "turned away" from anything, Google never embraced REST or dynamic languages much in the first place. Google has always been a stodgy C++/Java shop, and they can get away with using such unproductive tools because they have gobs of money and tens of thousands of programmers. I'm not sure where you work, so it may come as a su

                  • Stodgy? Some of the languages that I have used extensively over the years (more or less chronologically): Basic, Fortran, Algol, PL/I, Pascal, Ada, C, Modula-2, C++, VHDL (I helped to develop this language, and wrote compilers for it), Java, Ruby, Go. Other languages that I have used here and there: Lisp, Prolog, Python, AspectJ, Scala, Groovy. Which are the most productive for an organization (not an individual) over the long term? Without a doubt, Java. Reason: It is by far the most maintainable and refacto
                  • I think that Google knows what it is doing

                    They do. But that doesn't mean that you do. What works for Google (or the DOD, or IBM) doesn't work for most other companies, projects, or programmers, because they operate under a completely different set of constraints.

                    As Alan Kay has said, "Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture..."

                    I suggest you read the entire interview [acm.org], because Alan Kay was

                    • "What works for Google (or the DOD, or IBM) doesn't work for most other companies, projects, or programmers, because they operate under a completely different set of constraints." - that is VERY true.

                      I agree that C++ is too complex. The problem is, alternatives are even worse for other reasons. Ruby is HORRIBLE from maintainability and performance points of view. To write maintainable Ruby, one has to use TDD, which is deeply incompatible with how many people think. (See the debates between David Heinemeier

                     • I didn't really voice an opinion on the merits of C++ either way. You had said that "of late, Google has turned away from several current cherished paradigms", implying that there is some kind of repudiation of dynamic languages going on. I just pointed out that Google never was much into dynamic languages in the first place, and that just because C++ is a good choice for Google's core applications doesn't mean it's a good choice for most programmers. As a C++ programmer myself, I think it's great that Go

                     • Yes, C++ is probably not for most programmers. To use it well, you have to spend a lot of time with it, and do a lot of reflection (reflection in the sense of mentally thinking about it). And you are probably right about Google not changing its attitude on dynamic versus static. But doesn't that say something? They have to handle very large things - they have had to from the beginning. The fact that they stay away from dynamic languages - what does that say? I guess you can tell that I am not a fan of dynam
                    • And you are probably right about Google not changing its attitude on dynamic versus static.

                      I think your premises are flawed. Google uses a mix of C++, Python, Java, JavaScript, and Go, and all of those languages support both static and dynamic type checking. And Google clearly isn't happy with C++, otherwise they wouldn't have hired someone to develop Go. Furthermore, Go has substantially weaker static type checking than C++, so it doesn't look like Google is as adamant about static typing as you seem to be

                     • Go and C++ are so different, and C++'s type safety might be stricter, but the type safety of Go is pretty strict. Nuances aside, thus practically speaking, I have found that languages like Ruby lead to very unmaintainable code. That was my point. Dynamic type features (which Go has to some extent) don't change that, because one uses those to add dynamic features to one's application, such as adding a new component at runtime, or dispatching to a method based on dynamic information such as a command that has
                    • Go and C++ are so different, and C++'s type safety might be stricter, but the type safety of Go is pretty strict. Nuances aside, thus practically speaking,

                      Type safety isn't the same as typing strategy. C++'s type safety is weaker than Go's (since there are types like "void *"), but it has a more expressive static type system.

                      I have found that languages like Ruby lead to very unmaintainable code.

                      I think you're overgeneralizing from your limited experience with a tool-poor scripting language to dynamic langua

                    • "limited experience with a tool-poor scripting language..." - which are you referring to, Ruby? If so, Ruby is not tool-poor.

                      "...but in return, a lot of problems become quite a bit easier to solve." - Yes, I agree with you. Perhaps our disagreement is our perspective: I advise organizations, and so I tend to be on the side of maintainability - and that requires languages and tools that are naturally maintainable - not ones that require great effort to craft maintainability. I think that you advocate for the

                    • Ruby? If so, Ruby is not tool-poor.

                      Scripting languages and dynamic languages are not the same thing. Ruby is a scripting language and really doesn't have a lot of the tools that exist for a heavy duty dynamic language like, say, Smalltalk.

                      I advise organizations, and so I tend to be on the side of maintainability - and that requires languages and tools that are naturally maintainable

                      People can easily create completely unmaintainable code in C++ or Haskell. Static typing is neither necessary nor sufficient fo

                 • Another thought: I don't want to dismiss what you said above about Internet scale: it is actually quite insightful: "The 'Internet scale' we are talking about here is millions of different clients and servers..." It is true that for query applications, the REST model is logically a good one. However, the REST protocol (i.e., HTTP with character data) is horribly inefficient. If one compresses it, that helps a lot, but that is really a workaround. gRPC uses (by default) Protocol Buffers, which is reporte
                  • gRPC uses (by default) Protocol Buffers, which is reportedly ten times more efficient/responsive in terms of bandwidth and latency.

                    Protocol buffers have a couple of serious problems. First, they are not designed for large messages (>1Mbyte); so forget about using them for things like audio, video, image, or document upload... like most of what people actually do with REST. That limitation goes to the core of their APIs, which don't support incremental decoding or non-copy memory transfers very well. Pro

                    • "not designed for large messages...": Hmmm - isn't there a way to attach a file - i.e., a MIME "part"? Since PB uses HTTP2, it would be hard for me to imagine that they left that out. But if you are right, I agree it would be a terrible problem. Perhaps attaching files is part of gRPC but not PB?

                      Not sure I understand your comment about non-copy memory transfers, since PB/gRPC are remote (out-of-process) communication tools.

                      Yes, you are right, that message passing (e.g., UDP) is more scalable when one has a

                    • "not designed for large messages...": Hmmm - isn't there a way to attach a file - i.e., a MIME "part"? Since PB uses HTTP2, it would be hard for me to imagine that they left that out. But if you are right, I agree it would be a terrible problem. Perhaps attaching files is part of gRPC but not PB?

                      Not that I know of. And what would be the point? That would amount to a REST call with metadata attached in PB format, which is kind of like a bicycle for fish.

                      Not sure I understand your comment about non-copy memor

                    • "Not that I know of. And what would be the point? That would amount to a REST call with metadata attached in PB format, which is kind of like a bicycle for fish." - that would indeed be ridiculous, but I would expect the attached binary content to be unencoded, as it is in an HTTP binary encoded part. There is a major use case for that: queries that send binary data. E.g., I have been using the docker engine and docker registry REST APIs, and many of the methods include both query parameters and binary obje

                    • I think we need to look at the gRPC specs to see if it handles this case.

                      Be my guest. I'm just telling you don't hold your breath for gRPC to take off.

                      The debate between synchronous calls and messaging is as old as the Internet.

                      No, I'm sorry, but you still don't understand. Asynchronous message passing is a programming language abstraction, not a network abstraction; it's what Alan Kay originally envisioned for Smalltalk methods.

                      The asynchronous approach is much, much more complex for the programmer to impl

                    • "The asynchronous approach is much, much more complex to implement on top of an RPC system." - can you please give an example? I have implemented message based programs - and you are right, that it is a programming construct independent of the network - but IME message based applications are very complex to design: one must identify all of the states. But I am willing to learn! Thanks!
                    • "The asynchronous approach is much, much more complex to implement on top of an RPC system." - can you please give an example?

                      http://www.grpc.io/docs/tutori... [www.grpc.io]

                      Note that the API still requires requests and responses, so it forces clients and servers to keep track of state even if the computation otherwise doesn't require it. That's because procedure calls are an abstraction that intrinsically involves a notion of state.

                      In a message passing architecture, the primitive by which you invoke functionality on obje
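
                      A toy contrast in Java terms, just to show the shape of the two styles (the names are made up; this is a minimal sketch, not a recommendation of any particular library):

                      import java.util.concurrent.BlockingQueue;
                      import java.util.concurrent.LinkedBlockingQueue;

                      public class CallVsMessage {
                          // RPC style: the caller blocks for a reply, so both ends carry
                          // per-request state for the duration of the call.
                          interface SeatService {
                              boolean reserveSeat(String flight, String seat);
                          }

                          // Message-passing style: the sender enqueues a message and moves on;
                          // any reply arrives later as just another message.
                          record Message(String topic, String payload) {}

                          public static void main(String[] args) throws InterruptedException {
                              BlockingQueue<Message> mailbox = new LinkedBlockingQueue<>();
                              mailbox.put(new Message("reserve-seat", "flight=UA12;seat=14C"));
                              System.out.println("queued: " + mailbox.peek());
                              // The sender is already free; delivery happens whenever the
                              // receiving side drains its mailbox.
                          }
                      }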

                    • Aha. Now I know where the disconnect is in our discussion on this. I have been thinking in terms of updates, and you have (it sounds like) been thinking in terms of fetching data. Yes, for fetching data, you are right, asynchronous is far more efficient, if one can get away with a best effort (eventual consistency) approach, which is usually the case for UIs.

                      For transactions that do updates, a synchronous approach is far easier to implement, because one does not have to keep track of application state,

                    • E.g., consider a user who reserves an airline seat, but between the time the user received notice of the available seats, the selected seat is given away. The user does not know the seat was given away (and their UI has not refreshed yet), so they click Submit to reserve the seat. In a synchronous approach, the Submit will fail right then, and so their UI will immediately receive a failure response and can update itself accordingly. But in an async approach, the user will receive a success response, and mig

  • by Anonymous Coward

    Been there, done that, got the scars to prove it.

    I've found that compared to the tooling we were all using 15 years ago to build SOAP web services (fond memories of fighting incompatible implementations...), Swagger tooling is far worse, implements far fewer even obvious use cases, and is laden with bugs. In my recent work, Swagger Codegen was the worst I used. It is a flaming piece of shit which appears to be maintained by the same sort of teenagers with short attention spans that brought you crap like "le

  • So the vulnerability is that people who put unknown code in their systems sometimes get screwed?
    Well, we better fix that then.

  • The OFA [rapid7.com] outlines this issue. What they are saying is that because the Swagger spec is a JSON document, if you use a code generator that simply regurgitates its values without validation, you could end up with code executing in the context of whatever is consuming the API. The issue is with the code generators, and not the Swagger documentation.

    An example they give as an attack on HTML is the following (with angle brackets instead of square ones, obviously):

    "info": { "description": "[script]alert(1)[/script
