GA Tech: Internet's Mid-Layers Vulnerable To Attack
An anonymous reader writes "Evolution has ossified the middle layers of the Internet, leaving it vulnerable, but security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.' Extinction sucks, especially when it hits my favorite protocols, like FTP."
It's hard to take seriously... (Score:5, Insightful)
I understand that IP protocols predate the 7-layer ISO/OSI model, but that's what everything is mapped to in modern terms.
The article seems even more confused when it reverses the layers, claiming that "at layers five and six, where Ethernet and other data-link protocols such as PPP (Point-to-Point Protocol) communicate..."
What are they teaching at GA Tech? This is networking 101.
Re: (Score:2, Insightful)
It's pretty freshman-ish stuff. FTP hasn't been used in a long time. Glass-screen protocols went the way of the 386 long ago. I'm surprised these guys don't understand various secure protocols, key exchange methods, and so forth. Nice fluffy stuff, but very dated for the reality check. Show me someone using ftp and I'll show you a password theft followed by a crack. Ye gawds.
Re: (Score:3)
Variants of FTP are used widely in business-to-business transfers - sometimes secured with SSL, but often just by plaintext passwords, obscurity and/or IP whitelists. FTP is consistent across a large variety of platforms, and lots of sysadmins like the simplicity of scripting, for example, a nightly FTP file transfer.
Are there better solutions? Of course. But FTP is still very common - and lots of businesses still employ much more arcane tech than it. For a lot of businesses, terminal servers were a real
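For illustration, a minimal sketch of that kind of nightly job using Python's ftplib (the host, credentials and paths here are all hypothetical):

    # Nightly push of an export file over plain FTP -- note the password
    # travels in clear text, which is the whole complaint about FTP.
    from ftplib import FTP

    with FTP("ftp.partner.example.com") as ftp:
        ftp.login("batchuser", "s3cret")
        with open("/exports/nightly.csv", "rb") as f:
            ftp.storbinary("STOR incoming/nightly.csv", f)

A handful of lines, trivially cron-able, and identical on every platform - which is exactly why it refuses to die.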
Re: (Score:2)
Our customers demand FTP; no matter how much we educate about SFTP and show how easy it is, they still insist on using FTP.
If FTP goes down, that's likely to get complaints faster than HTTP being down. Loss of SSH access they barely even notice. oO;
Re: (Score:2)
SSH has severe performance issues and hardly anyone uses the high-performance patches [psc.edu]. (Hell, hardly anyone knows the high-performance patches exist!)
You'll notice the patches are not being funded any more and that none of the SSH enthusiasts are, well, enthused enough to have volunteered to help the maintainer. Don't get me wrong, I like SSH, but it doesn't write itself and I have very little sympathy for those who complain about under-utilization when doing nothing about helping to address the issues that
Re: (Score:2)
SFTP is one idea. SCP. Notice the S?
Which of these two protocols' client comes with Microsoft Windows brand operating systems? And which budget shared web host supports file uploads using such protocols?
Re: (Score:2)
Re: (Score:2)
Dreamhost [dreamhost.com]. Being able to SSH in and pull down something with their pipes using wget has come in handy a number of times as well.
The client thing, meh. If people are mucking around in command line FTP programs they're savvy enough to download one; if they're using a GUI an awful lot of them have SFTP support these days, including FileZilla (free/Free). I guess I could see an argument if they're just entering an FTP URL into the
Re: (Score:2)
Depending on the reason for FTP, they could just as well use Internet Explorer for FTP (or Firefox or whatever), because it comes standard. Or map the FTP to a drive on their PC (Windows supp
Re: (Score:2)
Re:It's hard to take seriously... (Score:5, Informative)
FTP (and FTPS) uses two ports: one fixed port number and the other random. You also have passive mode and "active" mode for FTP (but everyone these days uses passive, except one particularly backward vendor I had to deal with).
This causes firewall headaches because now the packet filter must understand FTP and selectively punch holes in the firewall for the data connection, and close them when the data connection finishes. Either the packet filter in the OS kernel must understand FTP, or you must use an FTP proxy that can dynamically modify your packet filter rules.
SFTP requires none of this. It works on a single port and this port doesn't change with each file you want to transfer or directory listing you want to see. You can also use the scp command which is much cleaner for scripting than writing FTP scripts. SFTP is a *lot* easier and cleaner to support, and the encryption is built right into the protocol, not added ad-hoc some time later.
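A sketch of that single-connection behavior, using the third-party paramiko library (host, user and paths hypothetical):

    # The same nightly transfer over SFTP: one TCP connection to port 22,
    # with the data riding inside it -- nothing extra to punch through a firewall.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect("files.example.org", username="batchuser")
    sftp = client.open_sftp()
    sftp.put("/exports/nightly.csv", "incoming/nightly.csv")
    sftp.close()
    client.close()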
Re: (Score:2)
SFTP is part of SSH; FTPS is FTP with encryption poorly stuck onto it. On top of that, very few FTPS software packages seem to be compatible with each other.
If you don't know what SSH is please look it up yourself. SSH is one of the things that makes POSIX systems awesome.
Re: (Score:2)
In fact, that is exactly how I posted this message. I am at work, with an SSH tunnel to my home network which acts as a SOCKS proxy to the internet for my work PC. Even my DNS queries go to my internal DNS server on my home LAN.
All my corporate overlords see is a fuck ton of SSH traffic to my home IP on some very unusual ports. All Slashdot sees is a normal web connection from my home
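The setup described above boils down to a single command; a sketch (host and port are hypothetical):

    # Open a local SOCKS proxy on port 1080, tunnelled over SSH to home.
    # -D does the dynamic forwarding, -N skips running a remote command.
    import subprocess

    subprocess.run(["ssh", "-D", "1080", "-N", "user@home.example.org"])
    # Point the browser's SOCKS settings at localhost:1080 and traffic --
    # DNS included, if the browser supports remote resolution -- exits at home.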
Re: (Score:2)
If you know what SSH is then why did you ask about SFTP? SFTP is just an FTP-esque environment to copy files over SSH, for when you don't want to do so one by one over SCP or you don't know all the remote paths or whatever. So you say you know what SSH is, but have you actually ever used it?
Re: (Score:2)
UDP 4500 would like a word with you. Mind you, if you have a VPN server and are NATing it, you may have a problem or at least require some registry hacks
Re: (Score:2)
Haha. One system I had to build and maintain at a previous employer, not that long ago (1999):
PC: .BAT job runs a Qualcomm application that dials up Qualcomm periodically to connect to their satellite truck monitoring system, captures the session into a file in a special directory.
PC: .BAT job looks periodically to see if a new file has come in; uses TFTP to transfer it to a Sun workstation - call it Sun-1.
Sun-1: shell script mails the file to a special email account on another workstation.
Sun-2: uses fetchmail and p
Re: (Score:2)
Of course that solution is kooky in its way, but also amazing, because each step actually makes sense and works with the other steps only because of the genius of the layered internet.
Re: (Score:2)
Re: (Score:2)
Yeah, cool. I'll tell clients and unrelated businesses what software they "can" use.
I was reporting on the situation. Sure there's better options, but that doesn't stop "FTP is very common" from being reality.
FTP over TLS (Score:2)
Show me someone using ftp and I'll show you a password theft followed by a crack.
Crack this: FTP over TLS [wikipedia.org].
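For reference, explicit FTP over TLS is a few lines with Python's ftplib (host and credentials hypothetical):

    # FTPS: credentials and, after prot_p(), the data channel are encrypted.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.org")
    ftps.login("user", "s3cret")   # sent over TLS, not in the clear
    ftps.prot_p()                  # encrypt the data connection too
    ftps.retrlines("LIST")
    ftps.quit()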
Re: (Score:2)
Because that makes a lot more sense than just using SFTP or SCP.
And something I noticed: files I transfer with SCP either fail outright or actually get done right. With FTP and others, I've lost count of the times files got corrupted in transit without any kind of warning.
That, added to the security concerns, should be enough to force the switch in an enterprise environment.
Shell account (Score:2)
Re: (Score:2)
Technically speaking, yes, SCP and SFTP need a shell to call the subsystem that provides the functions needed. You can install a package called "rssh" which will restrict a user to the SCP and SFTP subsystems, and prevent access to any other commands.
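A minimal sketch of that lockdown, assuming rssh's stock configuration file:

    # /etc/rssh.conf -- allow only the scp and sftp subsystems;
    # everything else (interactive shell, rsync, rdist, cvs) stays denied.
    allowscp
    allowsftp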
Re: (Score:2)
FTP over TLS will - by dint of TLS providing a reliable data stream - avoid corruption issues. Honestly.
SFTP isn't ubiquitous, FTP is. SCP is only useful if you have full filepaths to work with and is even rarer than SFTP.
Besides, since people like the convenience of single-sign-on, you're better off using Kerberos (the MIT version). SASLv2 is also nice.
Look, this is very simple. What "makes sense" doesn't matter. Betamax "made sense". The Transputer "made sense". Multicast "makes sense". IPv6 "makes sense".
Re: (Score:2)
You can shave five bits off. Whether other attack vectors will emerge is unknown. People use AES-128 then use the same salt over all of the base keys generated, too. That one takes a while.
Re: (Score:2)
Re: (Score:2)
Show me someone using ftp and I'll show you a password theft followed by a crack.
Does that include anonymous FTP? Or using FTP between two computers in my apartment?
Re: (Score:2)
No, it's a wired connection. And why is mentioning anonymous FTP being a smartass?
Re: (Score:2)
I wasn't going to pay any attention to that silliness, but I feel like saying that I use FTP all the time as well.
Not for server work (SSH protocols for that), but I use FTP between computers here. It's a fast and reliable way to transfer data. If it's a lot of small files I tar it up first though. (I would always want to archive that kind of stuff for any method of data transfer, though)
I still use FTP clients to download stuff where I can too. (e.g. kernel and other source tarballs, distro mirrors for ISO
Re: (Score:2)
There might be one practical use where you won't violate security ideals: anonymous ftp. Otherwise ftp is Swiss cheese. I'm not joking. So it sounded like you were finding the smart-ass narrow exception. But you may in fact not really know the dangers otherwise.
Re: (Score:2)
Anonymous ftp is not so rare as to be a narrow exception.
Re: (Score:2)
The problem isn't really the downloader. It's the fact that the host is vulnerable to iterative attacks until it cracks. Then it's hijacked. Ftp can be cracked like an egg in its Unix and GNU form.... and that's not the only problem.
Re: (Score:2)
Correction: FTP should not be used anymore. It is used. Widely. Why? Because it works, and because the person who could change it left the company years ago. But slowly.
Turn back the clock a decade. We're at the downturn after the dot.com bubble blew up, a lot of more or less sane IT people are out of a job (along with all the duds who got their jobs by spelling TCP/IP halfway correctly and knowing that it ain't the Chinese secret service), and all of them are looking for work; any kind of work will do. So the
Re: (Score:2)
Back then, people cared even less about security than they do today, what they wanted was an IT infrastructure that works.
Of course, I've seen ISP environments that used FTP heavily (as well as TFTP for a bunch of automated stuff). Why? Because when you're running an encrypted tunnel through another encrypted tunnel that runs between two trusted hosts on a segment of the network that does not allow incoming traffic from anywhere but the NOC it just seems silly to add another layer of encryption and the potential issues that could come with that for daily log transfers...
Re: (Score:2)
Something tells me you'll have a rude wakeup call if you get out of school and start working for some big business. FTP is still an extremely common way of transferring files in batch scripts and such.
Re: (Score:2)
Oh. Right. I was cleaning up DEC tape spews when you were a zygote. FTP may be common, but it's intensely insecure. Do your research. If you use it, you're irresponsible and endanger your organization.
Re: (Score:2)
Re: (Score:2)
You can only hope that a cogent argument, repeated until PHBs take it seriously, then think it's theirs, will do some good. Too many systems get p0wn3d because of stupid stuff, and ftp is old, and is just plainly irresponsible-- save places where a secure channel exists. Mostly, they don't; secure channels are another problem for a different day.
Re: (Score:2)
Re: (Score:2)
I would beg to differ. I spent a couple of hours last week setting up a regression test environment to run a patched version of our FTP connection layer through its paces with (the errors were actually in SFTP error handling, but we re-test everything). Some of the equipment our customers must collect data from supports no other method of retrieving it. Generally, the network is itself *very* secure, and our box is sitting inside of it. I guess the customers don't see it to be much of an issue, and will
Re: (Score:2)
Unencrypted FTP with Kerberos? Anonymous FTP? Plenty of ways you can use FTP without putting an account at risk.
As for your claim that "FTP hasn't been used in a long time" - it's clearly bogus. FTP is widely used. More web browsers support vanilla FTP than support FTP over SSH. If you want the Linux kernel sources, or a distro ISO image, the overheads of encryption aren't gaining you enough to make it worth the effort - the higher throughput and lower server loads win every time.
Web hosting sites usually d
Re: (Score:2)
I've been really surprised by all of the purported ftp use cited in this thread... tftp as well. Web hosting sites are in need of some updates. Using https would at least prevent part of the problem. Yet it's up to people that understand infrastructure to help educate those that don't understand the nature of hacks and cracks. Organizations get banged with hammers that most people aren't willing to understand. Yesterday, my primary web facing server was under attack from two different places trying to beat
Re: (Score:2)
In need of updates? I fully agree.
Exactly what those updates are, that's more debatable. tftp is excellent for bootstrapping a machine with an OS and is independent of machine architecture (ix86, MIPS, UltraSPARC) and BIOS (Corelis, Phoenix, UEFI, etc) - I really, really, really do NOT want to try implementing SCP in Forth for bootstrap purposes. I couldn't afford the psychiatric treatment afterwards.
Likewise, I would not consider using any other authentication mechanism in environments already using SASLv2
What are you talking about? (Score:4, Informative)
ARPANET predates the OSI model, and the current Internet Protocols came after the definition of the OSI stuff. (That's a little hard to see in the current wikipedia articles, but it's there.) The IETF in fact deliberately chose to combine two of the OSI layers.
The article does have some issues. I'm not sure whether the author actually doesn't understand the paper he or she is trying to summarize; maybe the intent was to make it easier for the lay person to understand. But there is some creativity going on, and parts of the summary don't really reflect the paper.
The paper itself is offering a framework of analysis of the evolution of the Internet Protocols. It might have been interesting to see a bit more analysis of ARPANET and some of the other protocols the IP protocols eventually replaced. It might have been interesting to see them address the OSI model a bit more, but the OSI model never was really implemented fully, and might be considered not part of the evolution.
I see that they take IPv6 up as a competitor of IPv4 instead of the heir apparent, which is probably a useful thing to do if we want to understand why so many IT managers are still failing to move in a timely manner.
I'm not sure I understand their work well enough to either agree or disagree, but I think it offers food for thought, including the idea that IPv4/6 doesn't actually have to be the only protocol existing at that layer.
The original paper (Score:2)
Hit post without thinking again. This AC post [slashdot.org] down the way a bit links the original paper.
Re: (Score:2)
You're absolutely right that it doesn't have to be the only protocol at that layer. The X protocols from Europe cover the full spectrum of the OSI model, including layers 3 and 4. The TUBA protocol (one of the candidates for IPv6) could perfectly well be implemented, again sitting at that layer. Infiniband has its own layers 2, 3 and 4. Other IP protocols exist - albeit in experimental form for the most part. (IPv0 could be said to still exist.)
Re: (Score:2)
Not really. IPX/SPX were really only designed for LANs. They don't deal with latency, lost packets, etc. anywhere near as gracefully as TCP/IP does.
Re: (Score:2)
Re:It's hard to take seriously... (Score:4, Informative)
I've never really been a fan of the OSI model. The idea of the hierarchy is great; sandwiching it into discrete layers seems problematic.
Wikipedia's definition of the OSI model [wikipedia.org] states that "there are seven layers, each generically known as an N layer. An N+1 entity requests services from the layer N entity." Makes sense. So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP, so it should be in the layer above IP, but it doesn't actually provide transport (or at least, isn't meant to). HTTP is in layer 7, but it can be sent directly on top of TCP, which is in layer 4, skipping over two layers. (Or it can be tunnelled over SSL, but still skipping layer 5.)
I prefer to think of the IP stack being a directed acyclic graph of technologies, each depending on another, rather than an explicit linear division into layers.
Re:It's hard to take seriously... (Score:5, Informative)
Well, you can imagine a "null" layer that does nothing, just passes the data unmodified to the next layer.
For example, HTTPS would be HTTP over SSL; SSL would be level 6 (presentation). If you use HTTP without SSL, then level 6 is empty or uses the "null" protocol.
ICMP is part of IP; while you could say that the ICMP packet is inside an IP packet, it is easier to imagine ICMP as just a part of IP, because it is used that way (for example, to signal that some other packet could not be delivered).
Just because I can send the HTTP packet inside an Ethernet frame (without IP or TCP), does not mean that the model is broken, it's just that "null" is a valid protocol.
Re: (Score:2)
Good point about the null. I see that it works that way for non-SSL traffic, but I still don't see how the "session layer" sits in between HTTP and TCP (even if you consider it to be "null"). It seems like session layer protocols are an entirely different sort of connection.
As for ICMP, I see what you mean that it's sort of part of the IP protocol (IP wouldn't work without ICMP), but it is syntactically formed inside an IP packet, and I do believe it is constructive to think of ICMP as being "on top of" IP
Re: (Score:2)
Well, in that case ICMP is a transport layer protocol. I mean, you can stuff arbitrary data inside an echo request packet, so you can use it as a way to send HTTP requests (and the recipient replies with the same data, so you can check whether it arrived correctly).
Well, another example - I take an HTTP packet and send it straight over the wire (let's say a serial or parallel port of a PC), now it only has two layers - physical and application, all others are null. Or if you want a network, try an I2C bus, i
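As a sketch of the echo-payload trick above (destination hypothetical; scapy and root privileges required):

    # Stuff arbitrary data into an ICMP echo request; a well-behaved
    # recipient echoes the same payload back in its reply.
    from scapy.all import IP, ICMP, Raw, sr1

    reply = sr1(IP(dst="192.0.2.1") / ICMP() / Raw(load=b"GET / HTTP/1.0\r\n\r\n"),
                timeout=2)
    if reply is not None:
        reply.show()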
Re: (Score:2)
Well that makes my point, though: you can arbitrarily nest protocols inside one another, so it doesn't make sense to talk about them strictly in layers. Rather than saying "HTTP can drop to a lower layer", why not throw away the concept of layers, and just have a more vague concept of "application level" versus "transport level" and so on, like the 4-level IP stack.
Re: (Score:2)
The OSI model is still useful for knowing in which order you want to do stuff.
For example: take the application data; if you need to, convert it to something that the recipient can read (XML, some encoding); then encrypt it and/or use whatever session management protocols you want; after that, put it in a transport protocol, then a network protocol, and pass it down to data link, which will send it over a physical connection.
The fact that you can arbitrarily nest protocols inside one another is the result of the f
Re:It's hard to take seriously... (Score:5, Informative)
So, why are both ICMP and IP considered to be in layer 3?
Because the Internet protocols are not in fact part of the OSI model, despite lots of teaching materials claiming this. The neat little OSI layer diagrams you see with all the layers filled in are mostly retcons invented long after OSI was dead.
The actual Internet protocol suite is not part of the OSI model but the 4-layer Internet model [wikipedia.org] (Link, Internet, Transport, Application). Link is like OSI layers 1 and 2, Internet is like OSI Layer 3, Transport is like OSI Layer 4, Application is like OSI Layer 7, but there is no actual Internet equivalent of OSI's layers 5 and 6. Pretty much everything above 4 runs at Layer 7.
In the Internet model, it makes perfect sense for DHCP, IP and ICMP and routing protocols like RIP and OSPF to be at the Internetworking level because they are both protocols dealing with datagram transmission between interconnecting disparate packet-switched services, while TCP and UDP are in the Transport layer because they make dealing with raw datagrams somewhat more pleasant.
It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.
Re: (Score:2)
Thank you. Yes, the four-layer Internet Protocol Suite thing makes a lot more sense. Rather than trying to say "there are seven layers stacked on top of each other," it seems like here, the protocols are arranged into four logical "protocol groups" with clearly-defined roles, and no sense of "protocols in layer N run on top of those in layer N-1". In the IP suite, it seems valid for protocols in the same group to run on top of each other (e.g., HTTP runs over SSL; ICMP runs over IP).
Re: (Score:2)
While the 4-layer model may make sense from the upper layers' POV, I do prefer separating the link layers, and not mixing the media used w/ the switching layers.
I think the key with TCP/IP is that you have two layers that are actually part of TCP/IP. Above those layers you have an application, and below them you have a "link". The application and the link may themselves be divided into multiple layers, but that is outside the scope of TCP/IP. You may even have some layers occurring more than once in the stack.
Re: (Score:2)
TCP/IP was not designed to fit the OSI model, therefore any attempt at mapping TCP/IP onto the OSI model will be imperfect.
TCP sits above IP and is conventionally considered to be at OSI level four (though according to wikipedia it implements some functionality that is in OSI level 5). UDP also sits above IP and therefore is also conventionally considered to be at OSI level four (though it implements hardly any of the functionality OSI associates with that layer).
The rest of the functionality that OSI places
Re:It's hard to take seriously... (Score:5, Informative)
It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.
Thinking of a fixed set of layers stops being useful as soon as you get moderately complex network setups because these days encapsulations tend to happen at all sorts of layers. Modern networks can probably be thought of more as a stack of protocols with the link layer at the bottom, application at the top and chopped up repetitive bits of the stack in the middle.
Take, for example, a modern connection to a website, and we probably see this kind of stack:
HTTP
SSL
TCP
IP
PPP
PPPoE
Ethernet
ATM VC-Mux
ATM
G.992.5 data link layer
Physical ADSL
And that's just for a plain home ADSL connection. In more complex networks it is common to encapsulate stuff further, for example using GRE tunnels or IPSEC tunnels, and it isn't uncommon to see something more like:
HTTP
SSL
TCP
IP
IPSEC ESP
IPSEC AH
IP
Ethernet
GRE
IP
GRE
IP
PPP
PPPoE
Ethernet
ATM VC-Mux
ATM
G.992.5 data link layer
Physical ADSL
And you can keep adding encapsulation layers at pretty much any point in the stack.
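A sketch of how arbitrary that nesting is in practice, using scapy (all addresses hypothetical):

    # Each "/" wraps another layer around the payload, mirroring the
    # GRE-in-IP stack listed above.
    from scapy.all import Ether, IP, GRE, TCP, Raw

    pkt = (Ether()
           / IP(src="192.0.2.1", dst="198.51.100.1")   # outer delivery header
           / GRE()                                      # tunnel header
           / IP(src="10.0.0.1", dst="10.0.0.2")         # inner, tunnelled IP
           / TCP(dport=443)
           / Raw(load=b"application data"))
    pkt.show()   # prints the layered structure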
Re: (Score:3)
So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP.
The real answer to that is that it's a Berkeley UNIXism. Some early TCP/IP implementations, including the one I worked on, had ICMP at a layer above IP, in the same layer with TCP and UDP. The Berkeley UNIX kernel, like other UNIX versions of the period, had real trouble communicating upward within the kernel, because this was before threads, let alone kernel threads.
To get around that kernel limitation, ICMP was crammed in with IP. This had some downsides, including the demise of ICMP Source Quench for
Re: (Score:2)
And of course layer 8 is where they make you sit in the comfy chair.
Re: (Score:2)
Yeah - that protocol layer has a name too: PEBKAC
Re: (Score:2)
Zombie bot? Maybe, but it's probably some third-world peon doing this for pennies an hour.
But maybe that's what you meant by zombie bot.
Re: (Score:2)
Same thing.
Re: (Score:2)
People still use FTP? I exclusively use SFTP and/or SCP these days. I can't remember when I last used FTPS, let alone plain FTP.
Re: (Score:2)
A network tap doesn't involve the layers sitting on top of it.
How to mod article? (Score:4)
Link to the original paper (Score:2)
Courtesy of this AC post [slashdot.org] down the page a bit.
So the internet is just like a human being then? (Score:5, Funny)
Re: (Score:2)
How did this article make it? (Score:4, Insightful)
Not only did they combine the presentation and application layers from the OSI model, they completely misunderstand WHY the transport layer is less diverse in its number of protocols.
They propose that we should create new transport protocols that do not overlap with existing ones... The reason we only have a handful of them is that there are not many ways to differentiate a transport protocol.
Unstated, and important, assumptions? (Score:5, Insightful)
This seems like it might be true in the (not necessarily all that common) case of a protocol whose security is fucked-by-design competing with a protocol that isn't fundamentally flawed, in a marketplace with buyers who place a premium on security, rather than price, features, time-to-market, etc.
Outside of that, though, much of the competition and security polishing seems to be at the level of competing implementations of the same protocols (and, particularly in the case of very complex ones, the de-facto modification of the protocol by abandonment of its weirder historical features). It also often seems to be the case that (unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...
Re: (Score:2)
(unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...
Careful that we do not open Pandora's box here... (You know exactly what I am talking about, heh)
But on another note, you're exactly right. This article seems to talk about how protocols "evolved", but this is just as useful as painting a picture of the internet:
Time and time again I will see models looking at a picture of the internet "all at once", but without knowing what and why with each individual link, protocol, implementation, etc... this is a complete waste of time.
As you have said in so many words ab
Re: (Score:3)
The paper dealt with modeling the survival or culling of protocols at various layers, under various selection criteria, from a sort of evolutionary-biology standpoint. This did entail examining what conditions resulted in monoculture end states, and what conditions might result in stable multiple-protocols-at-each-layer end states; but all at the level of a fairly abstrac
Really? Why not link to the original paper? (Score:5, Informative)
It's the very first Google hit, is still on a public server, and doesn't obviously distort the conclusions like TFSA in an effort to get more clicks. A+ for poorly crafted summaries, Slashdot.
http://www.cc.gatech.edu/~sakhshab/evoarch.pdf [gatech.edu]
As long as... (Score:2)
... there is human error there will be weakness. Before innovation, there is caution and upkeep. Careless server admins just leave their gates open, a la Sony. A simple misconfiguration and the East goes dark, a la Amazon.
But like all things founded on good democratic freedoms, we are free to be idiots. And unless we add socialized security, the internet will always be full of gaping weaknesses. And all of us, including those that serve responsibly, will suffer their consequences. A la the United States of
one man's weakness (Score:2)
is another man's freedom?
Re: (Score:2)
Use mutt (Score:2)
Alrighty (Score:3)
security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.'
So, let's have a lot of protocols, but to prevent too much diversity (that is, stuff that doesn't work) we'll need to make sure these comply with one or two protocols that everyone will use...
Hmmm, "Middle layer protocols whose functionality does not overlap"... does that mean that we prune the vast abundance of current protocols with sometimes overlapping functionality? I guess we could call that "diversification" though at this level of semantic mismatch, we could call it "Frank" with equal justification.
I guess I'm not quite sold on the argument presented here.
Other things hampering evolution (Score:3)
Let FTP die already (Score:2)
Let FTP die already. Clear text passwords suck.
The only legitimate use of FTP is a way of transferring files over a LAN to something which doesn't have a good implementation of a CIFS or SSH server.
Re: (Score:3)
Let FTP die already. Clear text passwords suck.
How do clear text passwords suck for anonymous FTP?
Re: (Score:2)
Anonymous runs an ftp server? Aren't they worried about the FBI?
Re: (Score:3)
Re: (Score:2)
Oh, fuck. That's probably why work has been calling me.
Re: (Score:2)
Don't worry, it's just this kid called "Joshua".
Re: (Score:2)
FTP has more flaws than just clear text passwords. Requiring multiple connections, often in opposite directions, for one.
Re: (Score:2)
Re: (Score:2)
They don't on anonymous ftp, but ftp fundamentally sucks: it needs two ports, a fixed port and a random data port that gets opened and closed for each transfer or directory listing, meaning added firewall complexity (the packet filter now must understand and parse the FTP protocol to be able to punch the holes to allow the random port traffic to pass, then close them again afterwards).
HTTP is far better for doing what anonymous FTP does. It requires only one port. For anything authenticated, sftp beats ftp.
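To make the two-port dance concrete, a short ftplib session exposing the separate data connection (server hypothetical, anonymous login):

    # The control connection sits on port 21; each listing or transfer
    # opens a second connection to a random high port the server picks.
    from ftplib import FTP

    ftp = FTP("ftp.example.org")
    ftp.login()                        # anonymous
    conn = ftp.transfercmd("LIST")     # the separate data connection
    print(conn.getpeername())          # (server_ip, random_high_port)
    while conn.recv(4096):
        pass                           # drain the listing
    conn.close()
    ftp.voidresp()                     # collect the end-of-transfer reply
    ftp.quit()

That random high port is the thing your packet filter has to understand FTP to open and close on the fly.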
Re: (Score:2)
"HTTP is far better for doing what anonymous FTP does"
Really? Try uploading to a directory using an HTTP server.
ossified? (Score:2)
Re: (Score:2)
Re: (Score:2)
They're trying to say 'petrified' (in its figurative meaning) but they think it will sound more impressive if they incorrectly use a somewhat similar word.
Re:ossified? (Score:5, Informative)
No - the figurative sense of ossified is correct and common. Petrified is usually used figuratively to mean something like "scared stiff". Ossified, in common figurative use, means that something has become stiff and inflexible (often through disuse or rot) - like tissue that has become bone.
If you check a reasonable dictionary (eg. http://dictionary.cambridge.org/dictionary/british/ossify_1?q=ossified [cambridge.org]) you'll find this definition.
Re: (Score:2)
I recommend WordReference:
English definition: http://www.wordreference.com/definition/ossified
Synonyms: http://www.wordreference.com/thesaurus/ossified
(WordReference will also give you the definition in a variety of languages).
This just in (Score:2)
Network effect? (Score:2)
It really wasn't designed for security. (Score:2)
Re:What we need is a P2P (Score:4, Informative)
There are plenty of those already. NetBIOS is an example of a non-TCP/IP peer-to-peer filesharing protocol (I'm talking LANMAN style NetBIOS, not NetBIOS over TCP/IP). It doesn't route outside your local network though. There's the good ol' IPX/SPX, which can actually be routed if your router supports them - while not filesharing protocols in themselves, they do support some very well-established filesharing protocols. You could probably adapt bittorrent to work on IPX/SPX.
The problem is we can't even get IPv6 routed on the internet, much less some obscure non-IP protocol. Hell, we never even really got all of IPv4 - multicast would have been great for streaming video if anyone had bothered to set up their routers for it.
That being said, you don't need to use TCP and UDP. You can create new protocols to run over IP, and the internet will generally pass them (your local firewall might be a different story). They'll stick out like a sore thumb to anyone searching for them, though.
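As a sketch of that last point (requires root; IP protocol number 253 is reserved for experimentation per RFC 3692, and the destination is hypothetical):

    # Send a datagram whose IP protocol field carries a number that is
    # neither TCP (6) nor UDP (17); routers will generally forward it anyway.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, 253)
    s.sendto(b"hello over a homemade protocol", ("192.0.2.1", 0))
    s.close()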