New Hack Shrinks Docker Containers (www.iron.io) 131

destinyland writes: Promising "uber tiny Docker images for all the things," Iron.io has released a new library of base images for every major language, optimized to be as small as possible by using only the required OS libraries and language dependencies. "By streamlining the cruft that is attached to the node images and installing only the essentials, they reduced the image from 644 MB to 29 MB," explains one technology reporter, noting this makes the image quicker to download and distribute, and also more secure. "Less code/less programs in the container means less attack surface..." writes Travis Reeder, the co-founder of Iron.io, in a post on the company's blog. "Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately if you use them, you'll end up with images the size of the Empire State Building..."
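
To make the technique concrete, here is a minimal sketch (the base images, package names, and comments below are illustrative assumptions, not Iron.io's actual build files): rather than inheriting everything in the official language image, you start from a tiny base and install only the runtime your app needs.

    # Typical approach: inherit the full official image (roughly 644 MB)
    FROM node:5
    COPY app.js .
    CMD ["node", "app.js"]

    # Stripped-down approach: tiny Alpine base plus only the runtime
    FROM alpine:3.3
    RUN apk add --no-cache nodejs
    COPY app.js .
    CMD ["node", "app.js"]
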
This discussion has been archived. No new comments can be posted.

  • WTF? (Score:3, Insightful)

    by msauve ( 701917 ) on Wednesday February 03, 2016 @05:49PM (#51435005)
    What are they talking about, and why do I care about the size of the container Levi's ships my Docker khakis in?
    • Re: (Score:3, Insightful)

      by citylivin ( 1250770 )

      I'm not a developer, but I think it's like InstallShield for Windows. It creates application packages or something. Still, the summary should really give a brief definition.

      • Re:WTF? (Score:5, Insightful)

        by twistedcubic ( 577194 ) on Wednesday February 03, 2016 @08:55PM (#51436133)
        Docker is so hyped nowadays that I'm surprised people reading Slashdot are claiming they've never heard of it. Docker is an application container. It essentially creates an advanced chroot which runs ONE application (usually). Now 644 MB seems like a lot of overhead for running one app, so shrinking this to 29 MB is a welcome improvement. That said, Docker is not for typical users. Use LXC, LXD, or systemd-nspawn if you want containers that can run several apps with their own init.
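
        To make the "one application per container" point concrete, a hedged example (treat the exact image names as assumptions; iron/node is the naming scheme Iron.io's Docker Hub repositories appear to use):

          # Run a single Node process in a throwaway container, then exit
          docker run --rm node:5 node -e "console.log('hello from a container')"

          # The same one-liner against a slimmed-down image; only the pull size changes
          docker run --rm iron/node node -e "console.log('hello from a container')"
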
        • Re:WTF? (Score:5, Insightful)

          by msauve ( 701917 ) on Wednesday February 03, 2016 @09:15PM (#51436227)
          Not everyone who reads /. is a software developer, a *nix sysadmin, or whatever other area of specialization would use that. /. is "News for Nerds," and that encompasses a wide range of technologies/interests, many non-overlapping.
          • Re: (Score:1, Insightful)

            by Anonymous Coward

            Not everyone who reads /. is a software developer, a *nix sysadmin, or whatever other area of specialization would use that.

            When you read a headline and you don't recognize the terminology used in the headline, you have two choices: you can skip the story completely, as it's probably not relevant to what you do; or you can click through the provided links to read more.

            Making a joke by pretending to misunderstand what the terminology means is a distant third choice. I wish there was a -1, Not Funny moderation option.

            • If it takes 644 MB for a "Hello World" program, then that is probably why I have never heard of it. Seriously.

          • And some of us are software developers who don't consume advertising or "hype" outside of places like this. I only clicked on this to find out if it was something interesting, or just the next big blahblah.

            The lack of interest displayed even in the "everybody who is anybody has heard of it already" responses suggests to me that it is fluff.

            Anyways, I'm not going to take a long enough break from writing firmware to both look up some unrelated thing and talk about it on Slashdot.

        • Docker is so hyped nowadays that I'm surprised people reading Slashdot are claiming they've never heard of it. Docker is an application container. It essentially creates an advanced chroot which runs ONE application (usually). Now 644 MB seems like a lot of overhead for running one app, so shrinking this to 29 MB is a welcome improvement.

          That said, Docker is not for typical users. Use LXC, LXD, or systemd-nspawn if you want containers that can run several apps with their own init.

          After 12 seconds of reading the first thing that popped up on Google, this is some kind of virtual machine that runs your app. Is this extra crap the standard libraries for that language?

        • Use LXC, LXD, or systemd-nspawn

          Oh no you didn't!

      • by msauve ( 701917 )
        Perhaps. There's no doubt some subset of /. users who know immediately what the summary is about, but I suspect that's fairly small. Something to do with images and languages and OSs, and shrinking image sizes. It's not hard to reduce the resolution and size of an image, so it's not clear why this is called a "hack," or why it's news.

        But my comment was really in relation to the piss-poor submission, and the failure of /. "editors" to fix it. This is one of the worst in recent memory. If the new overlords
        • by WarJolt ( 990309 )

          Even if you know Docker, few people actually think about the implications image size has on cloud compute systems.

          For example, Amazon EC2 Container Registry (ECR) gives 500 MB for the free tier, and it's relatively cheap to store large container images. Most cloud services store these local registries in their network, so you don't incur bandwidth charges from external registries. Also, it should start faster and is likely more reliable, but those are just bonuses.

          It's true that a small image will start faster, but

        • by hink ( 89192 )
          There used to be a MAJORITY of Slashdot readers who knew what a file system was. There used to be a subset of Slashdot readers who actually contributed to open-source software like the Linux kernel. I miss Hemos and Commander Taco - the founders and original editors.

          Then they went commercial (everybody needs to buy groceries). Then they were bought out, with the provision that the founders stay on. Eventually they moved on, then mergers and another sale to, shall we say, a purely capitalist owner. Acro

          • by msauve ( 701917 )

            it is still kind of telling that you call Docker "esoteric".

            Esoteric doesn't mean what you think it means. It does not mean unusual or rare, it means "intended for or likely to be understood by only a small number of people with a specialized knowledge or interest." Which it is.

            I'm not a developer, and only play a sysadmin at home. /. has a wide audience. I'm interested in learning about technologies outside of my bailiwick (which centers on networking). I can usually get an understanding from context in

            • But this one was just pure technobabble for anyone outside of very specific fields.

              Indeed, not all developers run their code "in somebody's cloud"; some of us generally expect hardware to be provisioned to run our software. Not saying that the cloud doesn't have its place, but it is rather odd to see people getting snooty over it when "websites running in public clouds" is sort of fry-cook-level development.

              If something I'm working on has a cloud component, that doesn't mean I would want to be deploying it. Most of the people on the development team wouldn't need to know about the cloud-wh

      • I'm not a developer, but I think it's like InstallShield for Windows. It creates application packages or something. Still, the summary should really give a brief definition.

        Not only that, it makes virtualizing so much easier. Server 2016 supports Docker in Hyper-V as a way to move containers and start and stop them in ways that are more manageable than static images that you cannot shut off or move during production without modifying the guest OS. It also opens the possibility of hardened, ultra-secure containers that are hard to hack and do just one thing.

      • It is more like Thinstall/ThinApp. Everything you need to run the binary is in the package.
        • It is more like Thinstall/ThinApp. Everything you need to run the binary is in the package.

          That sounds like a ROM image for a stand-alone embedded microcomputer. Have we really gone full circle? There was a reason that we quit doing that! 8-)

          • Right, but this is rewritable. OTOH, so are/were the ROMs...

            Actually, I have to get back to some firmware programming for a microcontroller, but don't worry: I won't be using the EEPROM, only the flash.

    • by jrumney ( 197329 )
      They are talking about taking a container which is commonly used for implementing the 'cloud' buzzword and using it to implement the 'IoT' buzzword. Someone pointed out that 'things' generally are a lot more resource-constrained than servers, so they've slimmed down their 644 MB container to 29 MB. Good luck fitting that into the 128 kB of flash in the typical microcontroller running your consumer electronics.
      • by msauve ( 701917 )
        So, just like Java, only different?
      • by WarJolt ( 990309 )

        They are talking about taking a container which is commonly used for implementing the 'cloud' buzzword and using it to implement the 'IoT' buzzword. Someone pointed out that 'things' generally are a lot more resource-constrained than servers, so they've slimmed down their 644 MB container to 29 MB. Good luck fitting that into the 128 kB of flash in the typical microcontroller running your consumer electronics.

        It's best not to mix everything together in your head until it all becomes the same thing.

        Containers are great for servers.
        Even if you ran a container on an embedded device, it would need to run Linux.
        That's probably not happening on the microcontroller you describe.

        More importantly, there's almost zero incentive to run Docker on an embedded device, simply because there are very few applications which require that kind of isolation on an embedded device.

        About the only device I've seen with a justified reason to use

    • What are they talking about, and why do I care about the size of the container Levi's ships my Docker khakis in?

      I find it scary that this post above was actually mod'ed insightful. Slashdot, wtf happened to you?

      • What are they talking about, and why do I care about the size of the container Levi's ships my Docker khakis in?

        I find it scary that this post above was actually mod'ed insightful. Slashdot, wtf happened to you?

        We got tired of "SalesPersons" writing the stories! 8-)

      • It is insightful, perhaps you didn't understand the language it was written in?

        In English it says, "What are they talking about, they just spewed a bunch of words without enough context to even identify which jargon set is being used. And the key word is a relatively new product/project, whose name is repeated umpteen times like it was written by a marketing droid, but is never explained even in context of the other jargon words."

        Also, you just signed up yesterday, I can tell by your user id. You don't get to pine for my golden days of yesteryear, those are mine. Get your own, order them now and you can have them in a couple decades when you forget what it was really like.

        • It is insightful, perhaps you didn't understand the language it was written in?

          In English it says, "What are they talking about, they just spewed a bunch of words without enough context to even identify which jargon set is being used. And the key word is a relatively new product/project, whose name is repeated umpteen times like it was written by a marketing droid, but is never explained even in context of the other jargon words."

          Also, you just signed up yesterday, I can tell by your user id. You don't get to pine for my golden days of yesteryear, those are mine. Get your own, order them now and you can have them in a couple decades when you forget what it was really like.

          LOL. Da'fuk? I have a submission from 2011, so obviously it is not yesterday. Plus I had another account that goes back to 1998. But whatever, a post is worth by its content, not by the longevity of the account (and the fact that you use the latter speaks more about you than about me).

          • Right, oh, 2011 isn't yesterday? What, were you born yesterday? No, you didn't have another account, if you did you would use it. If you had been here since the 90s, you would know that. Perhaps your reputation was so awful, you decided to pretend you were born yesterday? No, that isn't any improvement. Or even a believable story.

            A post is only "worth by its content" in some language I don't speak. On slashdot, a comment has to make sense to have value, and if it doesn't have value and is written by somebod

  • Wasn't a common library the entire point of Docker? Packaging the libs with the app, etc., to reduce dependence on the host OS?
    • As a developer, I thought the entire point of Docker was to reduce dependence on an entire layer of IT: the human gatekeepers in charge of the release systems and procedures, and eventually the care and feeding of maintenance systems (who often f*** something up with manual fumbling or delay things with meetings involving coffee-swilling waterbags).

      At least that's how I've seen Docker used in corporations so far, anyway.

      • 99% of the time, IT is being held hostage by other departments who know nothing about what IT does, but they just heard about this thing called the "cloud" or was it "apps"... and they want IT to prioritize it ASAP. Oh and do it on 30% less budget than they had last year.
        • I thought what he said was that the development team is being held hostage by IT, who convinced somebody they were "the computer guys" so they should be in charge of "all the technical computery stuff."

      • As a developer, I thought the entire point of Docker was to reduce dependence on an entire layer of IT: the human gatekeepers

        Finally somebody explained both what it is for, and why I haven't heard of it... I'm not suffering under a BOFH!

        They should have just said in the summary, "Docker, a BOFH-resistant deployment system."

    • Re:the point (Score:5, Informative)

      by steveha ( 103154 ) on Wednesday February 03, 2016 @07:48PM (#51435775) Homepage

      The point of Docker is to have a single package ("container") that contains all of its dependencies, running in isolation from any other Docker containers. Since the container is self-contained, it can be run on any Docker host. For example, if you have some wacky old program that only runs on one particular set of library versions, it might be hard for you to get the Docker container just right to make it run; but once you do, that container will Just Work everywhere, and updating packages on the host won't break it.

      The point of the news story is that someone did a better job of stripping the container down, removing libraries and such that were not true dependencies (weren't truly needed).

      Not only does this make for smaller containers, but it should reduce the attack surface, by removing resources that are available inside the container. For example, if someone finds a security flaw in library libfoo, this would protect against that security flaw by removing libfoo when it is not needed. It's pretty hard for an exploit to call code in a library if the library isn't present. Also, presumably all development tools and even things like command-line shells would be stripped out. Thus a successful attacker might gain control over a docker container instance, but would have no way to escalate privileges any further.

      If the stated numbers are correct (a 644 MB container went down to 29 MB) yet the new small package still works, then clearly there is a lot of unnecessary stuff in that standard 644 MB container.
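
      If both images are pulled locally, the difference shows up directly in docker images output (an illustrative listing: the IDs are placeholders and the sizes are the ones quoted in the story):

        $ docker images
        REPOSITORY   TAG      IMAGE ID        CREATED       VIRTUAL SIZE
        node         5        <placeholder>   2 weeks ago   644 MB
        iron/node    latest   <placeholder>   2 weeks ago   29 MB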

      • Comment removed based on user account deletion
        • by Dog-Cow ( 21281 )

          Docker containers don't contain a kernel. They use the host OS for services.

        • The problem with your logic (aside from being irrelevant in the case of Docker since it doesn't include a kernel) is that a lot of the "cruft" has been added as a base requirement to make a bootable modern system, and in many cases to improve performance.

          You can strip everything back to 20 years ago, but will you be able to run your harddrives in PIO mode 2 all for the sake of making the kernel smaller by not needing UDMA support? Okay contrived example, but that's what I'm talking about. You want a small k

      • What I've been wondering is... isn't it a bitch to maintain security patches? Because you now have all these potentially vulnerable libraries spread out over a bunch of Docker containers, completely outside of the control of the package manager.

        So when the next heartbleed bug comes around, you may think you have patched your system, while in fact the libraries you are exposing to the outside world via your docker apps are still vulnerable.
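
        About the best you can do is interrogate each running container directly, assuming it still carries a shell and the binary in question (a sketch of the idea, not a real audit):

          # Ask every running container which OpenSSL it has baked in
          for c in $(docker ps -q); do
            printf '%s: ' "$c"
            docker exec "$c" openssl version 2>/dev/null || echo "(no openssl binary found)"
          done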

        • Right, instead of updating the OS packages when a major security 0-day arrives, you need to turn off all your app containers, forward to a parking page, and start recompiling images.

          But, your dev teams don't have to agree on compatible sets of libraries to use on projects that will be deployed together on the same cloud instances.

          This trades the ability to deal with those types of problems, for being able to do stuff you couldn't do because your company didn't have anybody that can do that stuff. So without

      • It's actually kind of an inversion.

        Docker base images for Debian [docker.com], CentOS [docker.com], and Ubuntu [docker.com] are typically 50-100 megabytes. Shrinking down that "base image" doesn't really make sense; Iron.io instead shrunk down images for things like PHP, Node, and Ruby.

        Even then, you have two main issues.

        Firstly, if you have something stupid like, e.g., PHP not coming with ANYTHING installed (no php-pdo, no php-ldap, etc.), you have to write your own Dockerfile to install the missing extensions. Typically, you can just put "image: php/5.6-fpm
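
        For the PHP case, a hedged sketch of the Dockerfile you end up writing (docker-php-ext-install is the helper shipped in the official php images; the extension list is just an example):

          FROM php:5.6-fpm
          # The official image ships with almost no extensions enabled, so add what you need
          RUN docker-php-ext-install pdo pdo_mysql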

      • While the iron.io folks do manage to squeeze the size down, they do so through the use of Alpine Linux, which uses musl libc rather than glibc and friends. There is a post on Hacker News https://news.ycombinator.com/i... [ycombinator.com] that discusses the pros and cons of using an Alpine-based image.

        There is also the deviation from upstream. The official images are a curated set of images and can be maintained by anyone willing to put in the time. For the official images that are not maintained by the upstrea

      • ...that container will Just Work everywhere, and updating packages on the host won't break it.

        I love this stuff... updating packages on the host won't break "it," even where "it" is some sort of malware bug.

        It doesn't seem to solve a problem so much as offer a new way to compromise between security and convenience. Here, it mostly trades away the convenience of security updates at the OS level for the convenience of deploying minimally-maintained packages.

        If I wanted this, I would just switch to static linking. But I can see how, for development teams that don't have anybody on them that knows

    • Wasn't a common library the entire point of Docker?

      Packaging the libs with the app, etc., to reduce dependence on the host OS?

      No, although it's one of Docker's features. Docker images are actually stacked layers of filesystem sub-images operating as overlays, so a typical Docker image might consist of a base OS image and several library images built by the Docker build process, culminating in the actual application image. Done judiciously, those sub-images can be shared by multiple application images, thereby saving space in the Docker image store.
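
      You can see those stacked layers on any local image with docker history (a trimmed, illustrative listing; the layer IDs, commands, and sizes are examples of what a full node image's history might look like):

        $ docker history node:5
        IMAGE           CREATED BY                                      SIZE
        <app layer>     /bin/sh -c #(nop) CMD ["node"]                  0 B
        <node layer>    /bin/sh -c curl -SLO "https://nodejs.org/...    36 MB
        <deps layer>    /bin/sh -c apt-get update && apt-get install    320 MB
        <base layer>    /bin/sh -c #(nop) ADD file:... in /             125 MB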

      But Docker is a lot more than that. You can run virtual networks within containers, sha

  • So.... thin jails (Score:5, Insightful)

    by 0100010001010011 ( 652467 ) on Wednesday February 03, 2016 @06:04PM (#51435119)

    iocage create -c

    Congratulations, you've just (almost) caught up to decade-old technology.

    http://phk.freebsd.dk/pubs/san... [freebsd.dk]

    • This is why all the major cloud providers run FreeBSD.

    • by Anonymous Coward

      Can you also iocage history [docker.com]? Docker is to infrastructure what git is to code.

    • by Anonymous Coward

      It's worse, they've combined jails with the equivalent of statically compiled binaries.

      Bit of a nightmare when there's a vulnerability in a library used in multiple containers.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        It's worse, they've combined jails with the equivalent of statically compiled binaries.

        Bit of a nightmare when there's a vulnerability in a library used in multiple containers.

        Except it isn't. You store your base images in a docker registry, you update that base image, and then you can have your CI environment kick off rebuilds of any dependent images. And as an added bonus you get to test your exact deployable image, including all dependencies, before you actually roll prod. In the past you needed something akin to a Satellite / Spacewalk setup to be able to lock combinations of versions of packages to a point-in-time snapshot. Most people don't seem to do this. They either
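
        In concrete terms, a hedged sketch of that rebuild workflow (the registry and image names are placeholders):

          # Rebuild and push the patched base image
          docker build -t registry.example.com/myorg/base:latest ./base
          docker push registry.example.com/myorg/base:latest

          # CI then rebuilds each dependent image; --pull forces the freshly patched base to be used
          docker build --pull -t registry.example.com/myorg/app:latest ./app
          docker push registry.example.com/myorg/app:latest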

        • by Anonymous Coward

          It is. You've given a best-case usage for Docker and a worst-case usage for shared libraries.

  • If these are so much better, why aren't they just the official repos?
  • Their GitHub [github.com] lists Perl but not C++?????
  • > "Less code/less programs in the container means less attack surface..."

    *fewer

  • Disk space is incredibly cheap compared to the standard size of a docker image and your "attack surface" is going to be limited in a docker image anyway. Sure, your application loaded in your docker image might add to that surface, but that's going to happen if you use the big image or the small one. The only real reason to do this is so you can run docker images on smaller embedded devices where resources are limited (Not that I see much of that yet).

    IMHO, this development is meaningless to me. Thanks for the disk space back, but I didn't really need it...

    • by dj245 ( 732906 )

      Disk space is incredibly cheap compared to the standard size of a docker image and your "attack surface" is going to be limited in a docker image anyway. Sure, your application loaded in your docker image might add to that surface, but that's going to happen if you use the big image or the small one. The only real reason to do this is so you can run docker images on smaller embedded devices where resources are limited (Not that I see much of that yet).

      IMHO, this development is meaningless to me. Thanks for the disk space back, but I didn't really need it...

      For people running certain common configurations, this is actually very helpful. Docker containers are often used on home file servers. You could put docker containers on your storage array, but then you would be spinning up multiple disks every time you needed to read/write to the docker image. I have an older (small) SSD drive which I keep my docker containers on. The less space Docker uses, the more space I have left on the SSD to do something useful (like caching writes to the spinning disks). Maki

    • So you never run half a dozen Docker instances from a RAM disk?
      Unfortunately my Mac only has 8 GB of RAM, so the size of the Docker containers does matter.

  • Image sizes (Score:5, Funny)

    by Dragonslicer ( 991472 ) on Wednesday February 03, 2016 @07:07PM (#51435557)

    Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately if you use them, you'll end up with images the size of the Empire State Building...

    What's that in Libraries of Congress?

  • As far as I can figure out, they use a very stripped-down Linux distro called Alpine Linux as the base and then build a Docker image on top of that. How is this a hack? This just means you are now running Alpine Linux in your containers instead of your distro of choice, which nobody really wants to do.
  • I read this yesterday and found the tone slightly annoying. Alpine has been around for a while, and I don't think anyone using Docker for more than experimentation will be happy with massive Ubuntu-based images. But would you really use these minimal images, packaged by an unknown entity, when you can make your own with one line in the Dockerfile?
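
    For the record, that do-it-yourself version is roughly this (two lines, counting the FROM; the package name is an example, swap in whatever runtime you need):

      FROM alpine:3.3
      RUN apk add --no-cache ruby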
