
GA Tech: Internet's Mid-Layers Vulnerable To Attack 166

An anonymous reader writes "Evolution has ossified the middle layers of the Internet, leaving it vulnerable, but security breaches could be countered by diversifying protocols, according to Georgia Tech, which recommends new middle-layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.' Extinction sucks, especially when it's my favorite protocols, like FTP."
This discussion has been archived. No new comments can be posted.

  • by msauve ( 701917 ) on Monday August 22, 2011 @07:56PM (#37173698)
    an article which discusses "the six [sic] layers..."

    I understand that IP protocols predate the 7-layer ISO/OSI model, but that's what everything is mapped to in modern terms.

    The article seems even more confused when it reverses the layers, claiming that "at layers five and six, where Ethernet and other data-link protocols such as PPP (Point-to-Point Protocol) communicate..."

    What are they teaching at GA Tech? This is networking 101.
    • Re: (Score:2, Insightful)

      by postbigbang ( 761081 )

      It's pretty freshman-ish stuff. FTP hasn't been used in a long time. Glass-screen protocols went the way of the 386 long ago. I'm surprised these guys don't understand various secure protocols, key exchange methods, and so forth. Nice fluffy stuff, but very dated as a reality check. Show me someone using FTP and I'll show you a password theft followed by a crack. Ye gawds.

      • by JMZero ( 449047 )

        Variants of FTP are used widely in business to business transfers - sometimes secured with SSL, but often just by plaintext passwords, obscurity and/or IP whitelists. FTP is consistent between a large variety of platforms and lots of sysadmins like the simplicity of scripting, for example, a nightly FTP file transfer.

        Are there better solutions? Of course. But FTP is still very common - and lots of businesses still employ much more arcane tech than it. For a lot of businesses, terminal servers were a real
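        A nightly job like the one described is usually only a few lines; here's a minimal sketch with Python's ftplib (the host, credentials, and paths are hypothetical, and note this is plain FTP, so the password crosses the wire in the clear):

```python
from ftplib import FTP

def nightly_push(host, user, password, local_path, remote_name):
    # Plain FTP, as described above: credentials and data are unencrypted.
    with FTP(host) as ftp:  # connects on construction, closes on exit
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)
```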

        • Our customers demand FTP; no matter how much we educate them about SFTP and show how easy it is, they still insist on using FTP.
          If FTP goes down, that's likely to get complaints faster than HTTP being down. Loss of SSH access they barely even notice. oO;

          • by jd ( 1658 )

            SSH has severe performance issues and hardly anyone uses the high-performance patches [psc.edu]. (Hell, hardly anyone knows the high-performance patches exist!)

            You'll notice the patches are not being funded any more and that none of the SSH enthusiasts are, well, enthused enough to have volunteered to help the maintainer. Don't get me wrong, I like SSH, but it doesn't write itself and I have very little sympathy for those who complain about under-utilization when doing nothing about helping to address the issues that

        • Haha. One system I had to build and maintain at a previous employer, not that long ago (1999):
          PC .BAT job runs a Qualcomm application that dials up Qualcomm periodically to connect to their satellite truck monitoring system, capture session into a file in a special directory
          PC .BAT job looks periodically to see if a new file has come in; uses TFTP to transfer it to a Sun workstation - call it Sun-1.
          Sun-1 shell script mails the file to a special email account on another workstation
          Sun-2 uses fetchmail and p

          • Of course that solution is kooky in its way but also amazing in a way b/c each step actually makes sense and works with the other steps only b/c of the genius of the layered internet.

        • Nonsense. Tell them they can use SFTP or SCP, and if they complain, tell them you will restore the FTP access when they finish removing the locks from the building, because "they aren't necessary".
          • by JMZero ( 449047 )

            Yeah, cool. I'll tell clients and unrelated businesses what software they "can" use.

            I was reporting on the situation. Sure, there are better options, but that doesn't stop "FTP is very common" from being reality.

      • Show me someone using ftp and I'll show you a password theft followed by a crack.

        Crack this: FTP over TLS [wikipedia.org].

        • Because that makes a lot more sense than just use SFTP or SCP.

          And something I noticed: files I transfer with SCP either fail outright or get done right. With FTP and others, I've lost count of the times files got corrupted in transfer without any kind of warning.

          That adding to security concerns should be enough to force the switch in an enterprise environment.
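          One way to catch that kind of silent corruption, whatever the protocol, is to compare checksums on both ends after the transfer; a small sketch:

```python
import hashlib

def sha256sum(path, chunk=1 << 20):
    # Stream the file in 1 MiB chunks so large transfers
    # don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()
```

Run it on the sender and receiver and compare the two digests; a mismatch means the transfer was corrupted.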

          • Don't SFTP, SCP, and anything else tunneled over SSH require a shell account? A lot of budget web hosting services provide FTP but no shell account.
            • by deek ( 22697 )

              Technically speaking, yes, SCP and SFTP need a shell to call the subsystem that provides the functions needed. You can install a package called "rssh" which will restrict a user to the SCP and SFTP subsystems, and prevent access to any other commands.
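              rssh works; stock OpenSSH can also do this on its own with internal-sftp. A sketch of an sshd_config fragment, assuming a "sftponly" group and an OpenSSH recent enough to support Match blocks and ChrootDirectory:

```
# /etc/ssh/sshd_config -- restrict members of 'sftponly' to SFTP only
Match Group sftponly
    ForceCommand internal-sftp
    ChrootDirectory /home/%u
    AllowTcpForwarding no
    X11Forwarding no
```

The chroot directory must be root-owned and not group/world-writable, or sshd will refuse the login.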

          • by jd ( 1658 )

            FTP over TLS will - by dint of TLS providing a reliable data stream - avoid corruption issues. Honestly.

            SFTP isn't ubiquitous, FTP is. SCP is only useful if you have full filepaths to work with and is even rarer than SFTP.

            Besides, since people like the convenience of single-sign-on, you're better off using Kerberos (the MIT version). SASLv2 is also nice.

            Look, this is very simple. What "makes sense" doesn't matter. Betamax "made sense". The Transputer "made sense". Multicast "makes sense". IPv6 "makes sense"

      • Show me someone using ftp and I'll show you a password theft followed by a crack.

        Does that include anonymous FTP? Or using FTP between two computers in my apartment?

      • Correction. FTP should not be used anymore. It is used. Widely. Why? Because it works, and because the person who could change it left the company years ago. But slowly.

        Turn the clock back a decade. We're at the downturn after the dot-com bubble blew up, a lot of more or less sane IT people are out of a job (along with all the duds who got their jobs by spelling TCP/IP halfway correctly and knowing that it ain't the Chinese secret service), and all of them are looking for work, any kind of work will do. So the

        • Back then, people cared even less about security than they do today, what they wanted was an IT infrastructure that works.

          Of course, I've seen ISP environments that used FTP heavily (as well as TFTP for a bunch of automated stuff). Why? Because when you're running an encrypted tunnel through another encrypted tunnel between two trusted hosts, on a segment of the network that does not allow incoming traffic from anywhere but the NOC, it just seems silly to add another layer of encryption, and the potential issues that could come with it, for daily log transfers...

      • by Kjella ( 173770 )

        Something tells me you'll have a rude wakeup call if you get out of school and start working for some big business. FTP is still an extremely common way of transferring files in batch scripts and such.

        • Oh. Right. I was cleaning up DEC tape spews when you were a zygote. FTP may be common, but it's intensely insecure. Do your research. If you use it, you're irresponsible and endanger your organization.

          • Like anyone here has a say in what technology will be used. We can (and do) advise, but the decisions are made elsewhere. Most are told what they will support.
            • You can only hope that a cogent argument, repeated until PHBs take it seriously, then think it's theirs, will do some good. Too many systems get p0wn3d because of stupid stuff, and ftp is old and just plainly irresponsible, save places where a secure channel exists. Mostly, they don't; secure channels are another problem for a different day.

      • That's nothing, I spoke with a colleague and they have an intern from a large state college with a computer engineering school that is considered pretty decent. The intern didn't even know what FTP was, and it wasn't because they knew about more secure protocols like sftp. I was shocked to say the least. What are they teaching in school these days? I'm really at a loss...
      • I would beg to differ. I spent a couple of hours last week setting up a regression test environment to run a patched version of our FTP connection layer through its paces with (the errors were actually in SFTP error handling, but we re-test everything). Some of the equipment our customers must collect data from supports no other method of retrieving it. Generally, the network is itself *very* secure, and our box is sitting inside of it. I guess the customers don't see it to be much of an issue, and will

      • by jd ( 1658 )

        Unencrypted FTP with Kerberos? Anonymous FTP? Plenty of ways you can use FTP without putting an account at risk.

        As for your claim that "FTP hasn't been used in a long time" - it's clearly bogus. FTP is widely used. More web browsers support vanilla FTP than support FTP over SSH. If you want the Linux kernel sources, or a distro ISO image, the overheads of encryption aren't gaining you enough to make it worth the effort - the higher throughput and lower server loads win every time.

        Web hosting sites usually d

        • I've been really surprised by all of the purported ftp use cited in this thread... tftp as well. Web hosting sites are in need of some updates. Using https would at least prevent part of the problem. Yet it's up to people that understand infrastructure to help educate those that don't understand the nature of hacks and cracks. Organizations get banged with hammers that most people aren't willing to understand. Yesterday, my primary web facing server was under attack from two different places trying to beat

          • by jd ( 1658 )

            In need of updates? I fully agree.

            Exactly what those updates are, that's more debatable. tftp is excellent for bootstrapping a machine with an OS and is independent of machine architecture (ix86, MIPS, UltraSPARC) and BIOS (Corelis, Phoenix, UEFI, etc) - I really, really, really do NOT want to try implementing SCP in Forth for bootstrap purposes. I couldn't afford the psychiatric treatment afterwards.
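            tftp's simplicity is easy to see on the wire: the entire read request in RFC 1350 is a 2-byte opcode plus two NUL-terminated strings. A sketch of building one:

```python
import struct

def tftp_rrq(filename, mode="octet"):
    # RFC 1350 read request: 2-byte opcode (1 = RRQ) followed by the
    # filename and transfer mode as NUL-terminated ASCII strings.
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")
```

That's the whole handshake to start a download, which is why it's so easy to implement in a boot ROM.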

            Likewise, I would not consider using any other authentication mechanism in environments already using SASLv2

    • by reiisi ( 1211052 ) on Monday August 22, 2011 @09:25PM (#37174196) Homepage

      ARPANET predates the OSI model, and the current Internet Protocols came after the definition of the OSI stuff. (That's a little hard to see in the current wikipedia articles, but it's there.) The IETF in fact deliberately chose to combine two of the OSI layers.

      The article does have some issues. I'm not sure if the author actually doesn't understand the paper he or she is trying to summarize. Maybe the intent was to make it easier for the lay person to understand. But there is some creativity going on, and parts of the summary don't really reflect the paper.

      The paper itself is offering a framework of analysis of the evolution of the Internet Protocols. It might have been interesting to see a bit more analysis of ARPANET and some of the other protocols the IP protocols eventually replaced. It might have been interesting to see them address the OSI model a bit more, but the OSI model never was really implemented fully, and might be considered not part of the evolution.

      I see that they take IPv6 up as a competitor to IPv4 instead of the heir apparent, which is probably a useful thing to do if we want to understand why so many IT managers are still failing to move in a timely manner.

      I'm not sure I understand their work well enough to either agree or disagree, but I think it offers food for thought, including the idea that IPv4/6 doesn't actually have to be the only protocol existing at that layer.

      • Hit post without thinking again. This AC post [slashdot.org] down the way a bit links the original paper.

      • by jd ( 1658 )

        You're absolutely right that it doesn't have to be the only protocol at that layer. The X protocols from Europe cover the full spectrum of the OSI model, including layers 3 and 4. The TUBA protocol (one of the candidates for IPv6) could perfectly well be implemented, again sitting at that layer. Infiniband has its own layers 2, 3 and 4. Other IP protocols exist - albeit in experimental form for the most part. (IPv0 could be said to still exist.)

    • You're missing the point. A good example would be fast food restaurants. There used to be a Mexican based fast food chain called Taco Bell. It used to be the only place to get burritos, but then McDonalds introduced their breakfast burrito and drove Taco Bells nearly extinct like FTP. Please ignore the fact that you drove by 3 of them this morning or that it's impossible to update your website without using FTP.
    • by mgiuca ( 1040724 ) on Monday August 22, 2011 @10:14PM (#37174438)

      I've never really been a fan of the OSI model. The idea of the hierarchy is great; sandwiching it into discrete layers seems problematic.

      Wikipedia's definition of the OSI model [wikipedia.org] states that "there are seven layers, each generically known as an N layer. An N+1 entity requests services from the layer N entity." Makes sense. So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP, so it should be in the layer above IP, but it doesn't actually provide transport (or at least, isn't meant to). HTTP is in layer 7, but it can be sent directly on top of TCP, which is in layer 4, skipping over two layers. (Or it can be tunnelled over SSL, but still skipping layer 5.)

      I prefer to think of the IP stack being a directed acyclic graph of technologies, each depending on another, rather than an explicit linear division into layers.

      • by Pentium100 ( 1240090 ) on Monday August 22, 2011 @11:56PM (#37174874)

        Well, you can imagine a "null" layer that does nothing, just passes the data unmodified to the next layer.

        For example, HTTPS would be HTTP over SSL; SSL would be layer 6 (presentation). If you use HTTP without SSL, then layer 6 is empty, or uses the "null" protocol.

        ICMP is part of IP: while you could say that the ICMP packet is inside an IP packet, it is easier to imagine ICMP as just a part of IP, because it is used that way (for example, to signal that some other packet could not be delivered).

        Just because I can send the HTTP packet inside an Ethernet frame (without IP or TCP), does not mean that the model is broken, it's just that "null" is a valid protocol.

        • by mgiuca ( 1040724 )

          Good point about the null. I see that it works that way for non-SSL traffic, but I still don't see how the "session layer" sits in between HTTP and TCP (even if you consider it to be "null"). It seems like session layer protocols are an entirely different sort of connection.

          As for ICMP, I see what you mean that it's sort of part of the IP protocol (IP wouldn't work without ICMP), but it is syntactically formed inside an IP packet, and I do believe it is constructive to think of ICMP as being "on top of" IP

          • Well, in that case ICMP is a transport-layer protocol; I mean, you can stuff arbitrary data inside an echo request packet, so you can use it as a way to send HTTP requests (and the recipient replies with the same data, so you can check whether it arrived correctly).

            Well, another example - I take an HTTP packet and send it straight over the wire (let's say a serial or parallel port of a PC), now it only has two layers - physical and application, all others are null. Or if you want a network, try an I2C bus, i

            • by mgiuca ( 1040724 )

              Well that makes my point, though: you can arbitrarily nest protocols inside one another, so it doesn't make sense to talk about them strictly in layers. Rather than saying "HTTP can drop to a lower layer", why not throw away the concept of layers, and just have a more vague concept of "application level" versus "transport level" and so on, like the 4-level IP stack.

              • The OSI model is still useful to know in which order you want to do stuff.

                For example, take the application data, if you need to, convert it to something that the recipient can read (XML, some encoding), then encrypt it and/or use whatever session management protocols you want, after that put it in a transport protocol, then a network protocol and pass it down to data link which will send it over a physical connection.

                The fact that you can arbitrarily nest protocols inside one another is the result of the f

      • by lennier ( 44736 ) on Tuesday August 23, 2011 @12:46AM (#37175092) Homepage

        So, why are both ICMP and IP considered to be in layer 3?

        Because the Internet protocols are not in fact part of the OSI model, despite lots of teaching materials claiming this. The neat little OSI layer diagrams you see with all the layers filled in are mostly retcons invented long after OSI was dead.

        The actual Internet protocol suite is not part of the OSI model but the 4-layer Internet model [wikipedia.org] (Link, Internet, Transport, Application). Link is like OSI layers 1 and 2, Internet is like OSI Layer 3, Transport is like OSI Layer 4, Application is like OSI Layer 7, but there is no actual Internet equivalent of OSI's layers 5 and 6. Pretty much everything above 4 runs at Layer 7.

        In the Internet model, it makes perfect sense for DHCP, IP, ICMP, and routing protocols like RIP and OSPF to be at the internetworking level, because they all deal with datagram transmission between interconnected, disparate packet-switched services, while TCP and UDP are in the transport layer because they make dealing with raw datagrams somewhat more pleasant.

        It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

        • by mgiuca ( 1040724 )

          Thank you. Yes, the four-layer Internet Protocol Suite thing makes a lot more sense. Rather than trying to say "there are seven layers stacked on top of each other," it seems like here, the protocols are arranged into four logical "protocol groups" with clearly-defined roles, and no sense of "protocols in layer N run on top of those in layer N-1". In the IP suite, it seems valid for protocols in the same group to run on top of each other (e.g., HTTP runs over SSL; ICMP runs over IP).

        • It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

          Thinking of a fixed set of layers stops being useful as soon as you get moderately complex network setups because these days encapsulations tend to happen at all sorts of layers. Modern networks can probably be thought of more as a stack of protocols with the link layer at the bottom, application at the top and chopped up repetitive bits of the stack in the middle.

          Take, for example, a modern connection to a website; we probably see this kind of stack:
          HTTP
          SSL
          TCP
          IP
          PPP
          PPPoE
          Ethernet
          ATM VC-Mux
          ATM
          G.992.5 data link layer
          Physical ADSL

          And that's just for a plain home ADSL connection. In more complex networks it is common to encapsulate stuff further, for example using GRE tunnels or IPSEC tunnels, and it isn't uncommon to see something more like:

          HTTP
          SSL
          TCP
          IP
          IPSEC ESP
          IPSEC AH
          IP
          Ethernet
          GRE
          IP
          GRE
          IP
          PPP
          PPPoE
          Ethernet
          ATM VC-Mux
          ATM
          G.992.5 data link layer
          Physical ADSL

          And you can keep adding encapsulation layers at pretty much any point in the stack.
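          The arbitrary nesting falls out of exactly that: if you model each layer as a function that wraps whatever payload it's handed, encapsulation is just composition. A toy sketch (not real framing):

```python
def layer(name):
    # Each "protocol" wraps whatever payload it's handed; it never needs
    # to know what's inside, which is why nesting can be arbitrary.
    def wrap(payload):
        return f"[{name} {payload}]"
    return wrap

adsl_stack = ["HTTP", "SSL", "TCP", "IP", "PPP", "PPPoE", "Ethernet"]
frame = "data"
for proto in adsl_stack:  # the top of the stack wraps first
    frame = layer(proto)(frame)
# frame is now [Ethernet [PPPoE [PPP [IP [TCP [SSL [HTTP data]]]]]]]
```

Inserting a GRE or IPSEC step is just splicing another wrapper into the list, at any point.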

      • by Animats ( 122034 )

        So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP.

        The real answer to that is that it's a Berkeley UNIXism. Some early TCP/IP implementations, including the one I worked on, had ICMP at a layer above IP, in the same layer with TCP and UDP. The Berkeley UNIX kernel, like other UNIX versions of the period, had real trouble communicating upward within the kernel, because this was before threads, let alone kernel threads.

        To get around that kernel limitation, ICMP was crammed in with IP. This had some downsides, including the demise of ICMP Source Quench for

      • by sjames ( 1099 )

        And of course layer 8 is where they make you sit in the comfy chair.

  • by whoever57 ( 658626 ) on Monday August 22, 2011 @07:57PM (#37173710) Journal
    Surely this article should be modded "massive ignorance"! It's the simplicity of the middle layers that enables the development of the upper and lower levels. It also makes the middle layer much more immune to security issues.
  • Well, I know for myself a good swift "attack" on my "middle layer" does cause me to fall to the ground and writhe around for a while, so I guess the internet and I do have a lot in common, really vulnerable mid-sections.
  • by norpy ( 1277318 ) on Monday August 22, 2011 @08:05PM (#37173772)

    Not only did they combine the presentation and application layers from the OSI model, they completely misunderstand WHY the transport layer is less diverse in number of protocols.

    They propose that we should create new transport protocols that do not overlap with existing ones... The reason we have only a handful of them is that there are not many ways to differentiate a transport protocol.

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday August 22, 2011 @08:08PM (#37173790) Journal
    There seems to be an unstated (but vital to the asserted conclusion) assumption that competition actually makes protocols more secure, and that competition must occur at the protocol level rather than the implementation level. Without those assumptions holding, all this article really says is that people use TCP and UDP a lot. Yup. That they do.

    This seems like it might be true in the (not necessarily all that common) case of a protocol whose security is fucked-by-design competing with a protocol that isn't fundamentally flawed, in a marketplace with buyers who place a premium on security, rather than price, features, time-to-market, etc.

    Outside of that, though, much of the competition and security polishing seems to happen at the level of competing implementations of the same protocols (and, particularly in the case of very complex ones, the de-facto modification of the protocol by abandonment of its weirder historical features). It also often seems to be the case that (unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...
    • (unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...

      Careful that we do not open Pandora's box here... (You know exactly what I am talking about, heh)

      But on another note, you're exactly right. This article talks about how protocols "evolved", but that is just as useful as painting a picture of the internet:
      Time and time again I see models looking at a picture of the internet "all at once", but without knowing the what and why of each individual link, protocol, implementation, etc., this is a complete waste of time.

      As you have said in so many words ab

      • As best I can tell, after going back and reading the paper, TFA is a miserable hatchetjob that has almost nothing to do with the paper.

        The paper dealt with modeling the survival or culling of protocols at various layers, under various selection criteria, from a sort of evolutionary-biology standpoint. This did entail examining what conditions resulted in monoculture end states, and what conditions might result in stable multiple-protocols-at-each-layer end states; but all at the level of a fairly abstrac
  • by Anonymous Coward on Monday August 22, 2011 @08:08PM (#37173794)

    It's the very first Google hit, is still on a public server, and doesn't obviously distort the conclusions like TFSA in an effort to get more clicks. A+ for poorly crafted summaries, Slashdot.

    http://www.cc.gatech.edu/~sakhshab/evoarch.pdf [gatech.edu]

  • ... there is human error there will be weakness. Before innovation, there is caution and upkeep. Careless server admins just leave their gates open, a la Sony. A simple misconfiguration and the East goes dark, a la Amazon.

    But like all things founded on good democratic freedoms, we are free to be idiots. And unless we add socialized security, the internet will always be full of gaping weaknesses. And all of us, including those that serve responsibly, will suffer their consequences. A la the United States of

  • Evolution always seemed to be too like MS Outlook to me, this article just seems to confirm that, judging by the odd intelligible snippet I can make out from the overuse of metaphors and confused language of the summary. But fear not, mutt does not suffer these problems, and nor does Thunderbird if you need your middle layers of the internet client to have pretty icons.
  • by khallow ( 566160 ) on Monday August 22, 2011 @08:14PM (#37173828)

    security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.'

    Let's have a lot of protocols, right, but to prevent too much diversity (that is, stuff that doesn't work), we'll need to make sure these comply with one or two protocols that everyone will use...

    Hmmm, "Middle layer protocols whose functionality does not overlap"... does that mean that we prune the vast abundance of current protocols with sometimes overlapping functionality? I guess we could call that "diversification" though at this level of semantic mismatch, we could call it "Frank" with equal justification.

    I guess I'm not quite sold on the argument presented here.

  • by jhantin ( 252660 ) on Monday August 22, 2011 @08:29PM (#37173916)
    Evolution at the middle layers is also hampered by the proliferation of middleboxes [wikipedia.org]: monkeying with packet headers for policy-enforcement and profit. It's also pretty well de rigueur for IT departments to configure both middleboxes and "smart" switches to drop any unrecognized middle layer packets.
  • Let FTP die already. Clear text passwords suck.
    The only legitimate use of FTP is a way of transferring files over a LAN to something which doesn't have a good implementation of a CIFS or SSH server.

    • Let FTP die already. Clear text passwords suck.

      How do clear text passwords suck for anonymous FTP?

      • Anonymous runs an ftp server? Aren't they worried about the FBI?

      • FTP has more flaws than just clear text passwords. Requiring multiple connections, often in opposite ways, for one.

      • by Alioth ( 221270 )

        They don't on anonymous ftp, but ftp fundamentally sucks: it needs two ports, a fixed port and a random data port that gets opened and closed for each transfer or directory listing, meaning added firewall complexity (the packet filter now must understand and parse the FTP protocol to be able to punch the holes to allow the random port traffic to pass, then close them again afterwards).

        HTTP is far better for doing what anonymous FTP does. It requires only one port. For anything authenticated, sftp beats ftp.
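        The firewall pain comes from exactly that parsing: the packet filter has to fish the data host and port out of the control channel. Pulling them out of a PASV reply (RFC 959 format) looks roughly like:

```python
import re

def parse_pasv(reply):
    # RFC 959 PASV reply: "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)"
    # where the data port is p1 * 256 + p2.
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a PASV reply")
    nums = [int(x) for x in m.groups()]
    return ".".join(map(str, nums[:4])), nums[4] * 256 + nums[5]
```

A stateful firewall has to do this for every transfer just to know which ephemeral port to open, then close it again afterwards.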

        • by Viol8 ( 599362 )

          "HTTP is far better for doing what anonymous FTP does"

          Really? Try uploading to a directory using an HTTP server.

  • Forgive me, but nothing useful turned up on Google or Urban Dictionary. What does this word mean? (I am a native English speaker)
    • ...unless this is some strange reference to bone formation
    • by Livius ( 318358 )

      They're trying to say 'petrified' (in its figurative meaning) but they think it will sound more impressive if they incorrectly use a somewhat similar word.

    • by Alioth ( 221270 )

      I recommend WordReference:

      English definition: http://www.wordreference.com/definition/ossified

      Synonyms: http://www.wordreference.com/thesaurus/ossified

      (WordReference will also give you the definition in a variety of languages).

  • Everything is vulnerable to attack, especially if it's connected to a worldwide network.
  • Having skimmed the article, I am concerned that they seem to ignore the well-known network effect: the value of a network to those attached to it increases at a rate faster than linear as a function of the number of others attached. This property has generally meant that once a network-layer protocol is sufficiently well established, it is hard to displace; a winner-take-all situation. Telegraph network. Telephone network. In the data world, IP, ATM, and a handful of others slugged it out, and eventuall
  • More a matter of integrity, but the service-layer architecture is purely based on trust. It turns out that you can more readily do the most when you have trust, which partly explains the rapid growth of the Internet. However, a bunch of trusting souls makes an irresistible target for those who are willing to exploit their trust. I believe the only way to deal with them is to move faster than they can. FTP should have been enhanced to the point that few would use the older version, hence a smaller target. I don
