Study Shows Many Sites Still Failing Basic Security Measures
Orome1 writes with a summary of a large survey of web applications by Veracode. From the article: "Considered 'low hanging fruit' because of their prevalence in software applications, XSS and SQL Injection are two of the most frequently exploited vulnerabilities, often providing a gateway to customer data and intellectual property. When applying the new analysis criteria, Veracode reports eight out of 10 applications fail to meet acceptable levels of security, marking a significant decline from past reports. Specifically for web applications, the report showed a high concentration of XSS and SQL Injection vulnerabilities, with XSS present in 68 percent of all web applications and SQL Injection present in 32 percent of all web applications."
Citicorp Hack (Score:5, Interesting)
Then there is the Citicorp hack, where they don't even bother hashing the account numbers in the URL...
Re:Citicorp Hack (Score:4, Insightful)
Re: (Score:2)
It might be interesting to compare the total losses from bank robbery and this sort of hacking to the amount pocketed by execs in the bailout.
Re:Citicorp Hack (Score:4, Informative)
Total loot: $7,820,347.96 in cash, $298.88 in cheques. So far, they've gotten back $1,801,073.18, for a net loss of $6,019,573.66
Extrapolated to an entire year, that would still be under $25 million net. A rounding error compared to all the US bank bail-outs.
200 (Score:5, Insightful)
I wonder how they test. Some sites that I manage return the user to the homepage on a hack attempt or unrecoverable error, resulting in a 200 return. Would they consider such a system hacked, since they got a 200 OK back, or not?
Re:200 (Score:5, Interesting)
Re: (Score:2)
That's awesome. They should open source that component.
Re: (Score:2)
Hmmm, how about
a) Have a secondary instance running with dummy/fake data
b) Have a wrapper around queries that checks for attempted injections (perhaps a pre/post sanitization check); if the query is an injection attempt, grab data from the fake DB instead (a rough sketch of this is below)
c) Watch for people using data from the fake DB; attempts to use a fake (but realistic enough to pass a smell test) CC# get flagged to Visa as fraud attempts...
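A rough sketch of what (b) could look like in practice; the regex, table names, and database files are my own illustration, not anything from the thread:

# Hypothetical sketch of idea (b): a thin wrapper that routes suspicious
# queries to a honeypot database full of fake data. The injection check,
# table names, and connection details are all illustrative assumptions.
import re
import sqlite3

INJECTION_PATTERN = re.compile(r"(--|;|'|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

class HoneypotDB:
    def __init__(self, real_db="prod.db", fake_db="honeypot.db"):
        self.real = sqlite3.connect(real_db)
        self.fake = sqlite3.connect(fake_db)

    def lookup_user(self, raw_user_input):
        # If the input smells like an injection attempt, serve fake data
        # and (per idea (c)) the fake records can be watched for later use.
        db = self.fake if INJECTION_PATTERN.search(raw_user_input) else self.real
        # Parameterized anyway, as defense in depth.
        cur = db.execute("SELECT * FROM users WHERE name = ?", (raw_user_input,))
        return cur.fetchall()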
Re: (Score:2)
Why wouldn't you return a more appropriate code (something from 4xx or 5xx) in those cases? Since you can always send whatever content you want along with (almost) any code, might as well give standards-compliant HTTP feedback.
Re: (Score:2)
I should add that appropriate error codes can help drive off traffic from automated scanners of various sorts, looking for open proxies and other problems. Things like your 404 or 401 pages should definitely not return a 200 OK, for that reason if no other.
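For what it's worth, serving a friendly page while still sending an honest status line is a one-liner in most frameworks. A minimal sketch, assuming Flask (my choice of example, not something anyone in the thread mentioned):

from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def not_found(error):
    # Full custom body, but the status line is still an honest 404,
    # so scanners and crawlers get standards-compliant feedback.
    return "<h1>Nothing here. Please stop probing.</h1>", 404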
Re: (Score:1)
Because I have no control over the specifications.
Re: (Score:2)
Haha, yeah, there's always that I suppose.
Re: (Score:3)
Aside from feel-good "adhering to the standards" crap, it makes your site look less inviting to attackers (a 4xx page returning a 200 OK looks, and is, sloppy as hell) which isn't a bad thing. It discourages automatic scanners from marking vulnerabilities that don't actually exist, which can get your site on all sorts of lists that can drive even more (often automated) traffic your way, wasting cycles and bandwidth. I'd much rather Chinese proxy and Wordpress installation scanners get a 4xx FUCK OFF than
Re: (Score:1)
Incidentally, we do need a 400-range "FUCK OFF" status code.
I'll admit: I wanted to write something snarky. So, I went to RFC 2616 looking for a preexisting code that would say something like, "The request was understood, but the manner of presentation raised suspicions. The client SHOULD NOT repeat the request." There is none, and you're dead-on right about the need for a 400-range FUCK OFF status code.
The specs assume that the server will always transmit information unless the request is malformed or the resource is protected/missing: there needs to be an *erro
Re: (Score:2)
Looks like nginx uses a non-standard code 444 No Response for that purpose. May have to modify my Apache config to start using that, if I can...
Re:200 (Score:5, Informative)
Why _would_ you [send valid content with a 4xx or 5xx code]? Is there incentive to be standards-compliant, friendly, and heterogenous-mix-of-clients interoperative with attackers?
Perhaps because you know that the "attacks" are coming from sites that don't know they're attacking you, but are merely asking for content.
The specific cases I'm thinking of are some sites that I'm responsible for, which can deliver the "content" information in a list of different formats such as HTML, PS, EPS, PDF, RTF, GIF, PNG (and even plain text ;-). The request pages list the formats that are available; a client clicks on the one(s) they want and presses the "Send" button, and gets back the information in the requested format(s). The data is stored in a database, of course, and converted on the fly to whatever format is requested. Things like PS and PDF are huge in comparison, so we don't save them. The required disk space would be exorbitantly expensive.
There is a real problem with such an approach: The search sites' bots tend to hit your site with requests for all of your data in all of your formats. Some of them do this from several addresses simultaneously, hitting the poor little server with large numbers of conversion requests per second, bringing the server to its knees. Converting plain text to all the above formats can be quite expensive.
How I handled this was, first (as an emergency measure), to simply drop requests from an "attacker" IP address. This gave breathing space while I implemented the rest. What's in place now is code that honors single requests, but if it sees multiple such requests in the same second coming from a single address or a known search-site address block, it replies to just one of them and sends the rest an HTML page explaining why their request was rejected.
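A back-of-the-envelope sketch of that kind of per-address, per-second throttle; the function names, the 429 status, and the one-second window are my own illustrative assumptions, not the poster's actual code:

import time
from collections import defaultdict

# Timestamp of the last request actually served, per client address.
last_served = defaultdict(float)
REJECTION_PAGE = "<html><body>Too many simultaneous requests; please slow down.</body></html>"

def handle_request(client_addr, serve_content):
    now = time.time()
    if now - last_served[client_addr] < 1.0:
        # Multiple requests within the same second from one address:
        # answer only the first, explain the rejection to the rest.
        return 429, REJECTION_PAGE
    last_served[client_addr] = now
    return 200, serve_content()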
Over time, this tends to get the message through to the guys behind the search bots, and they add code on their side to be nicer to smaller sites like ours.
I've also used this approach to explain to search-site developers why they should honor a nofollow attribute. After all, they get no information from the expensive formats like PS, PDF or PNG that's not in the plain-text or HTML file, so there's no real reason for a search site to request them.
Note that, in this case, we do actually refer to such misbehaved search bots as "attackers". They're clearly DOSing us, for no good reason. But the people responsible aren't actually malevolent; they just don't realize what they're doing to small sites. If you can defuse their attacks gently, with human-readable explanations, they'll usually relent and become better neighbors. This helps their site, too, since they no longer waste disk space and CPU time dealing with duplicate information in formats that are expensive to decode and that eat disk space.
It's yet another case where the usual simplistic approach to "security" doesn't match well with reality.
(It should be noted that the above code also has a blacklist, which lists addresses that are simply blocked, because the code at that site either doesn't relent, or attempts things like XSS or SQL attacks, which are recognized during the input-parsing phase. Those sites simply get a 404. But those are a minority of our rejections. We don't mind being in the search sites' indexes; we just don't like being DOS'd by their search bots.)
Re: (Score:1)
How about using Robots.txt crawl delay?
User-agent: *
Crawl-delay: 10
See http://en.wikipedia.org/wiki/Robots_exclusion_standard#Crawl-delay_directive
Re: (Score:2)
#1 solution: use a link for your main format that you want the search engines to read (HTML), then instead of links for the other versions use forms. You can still use GET, and you can style the submit button to look like a link. Sure, it's a bit more HTML than a simple link, but as a solution it is simple and effective.
I work at Veracode; here's how we test. (Score:5, Informative)
I work at Veracode, and can share how we test. I'll be brief and technical here, as there's lots of marketing material available other places. In short, we scan web sites and web applications that our customers pay us to scan for them; the "State of Software Security" report is the aggregate sanitized data from all of our customers. We provide two distinct kinds of scans: dynamic and static.
With dynamic scans, we perform a deep, wide array of "simulated attacks" (e.g. SQL Injection, XSS, etc.) on the customer's site, looking for places where the site appears to respond in a vulnerable way. For example, if the customer's site has a form field, our dynamic scanner might try to send some javascript in that field and then detect whether the javascript is executed. If so, that's an XSS vulnerability. As you might imagine, the scanner can try literally hundreds of different attack approaches for each potentially vulnerable point on the site.
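As a toy illustration of the reflected-XSS part of that, here is roughly what a single probe might look like; the URL, parameter, and marker payload are hypothetical, and a real scanner obviously does far more (encoding contexts, stored XSS, DOM-based cases, etc.):

# A toy illustration of the kind of probe a dynamic scanner might send.
import requests

MARKER = "<script>alert('xss-probe-12345')</script>"

def probe_reflected_xss(url, param):
    resp = requests.get(url, params={param: MARKER})
    # If the payload comes back unescaped, the page is likely reflecting
    # user input into HTML without encoding it -- a candidate XSS finding.
    return MARKER in resp.text

# probe_reflected_xss("https://example.com/search", "q")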
The static scans are a little fancier. The customer uploads to Veracode a copy of the executable binary build of their application (C/C++, Java, .NET, iPhone app, and a couple of other platforms). From the executable binary, the Veracode systems then create a complete, in-depth model of the program, including control flow, data flow, program structure, stack and heap memory analysis, etc. This model is then scanned for patterns of vulnerability, which are then reported back to the customer. For example, if the program accepts data from an incoming HTTP request, and any portion of that data can somehow find its way into a database query without being cleansed of SQL escape characters, then the application is vulnerable to SQL Injection attacks. There are hundreds of other scans, including buffer overflows, etc.
Personally, I think what we do at Veracode is pretty amazing, particularly the static binary scans. I mean: you upload your executable, and you get back a report telling you where the flaws are and what you need to fix. The technical gee-whiz factor is pretty high, even for a jaded old-timer like me.
Re:I work at Veracode; here's how we test. (Score:4, Informative)
Re: (Score:2)
Thanks for posting, Mark. I'm curious, though: how do you check for stupid mistakes like that in languages that allow first-class functions? For instance, in Python I could write something like:
Your scanner would have to determine that 1) call_func_with_args executes the passed-in function, and 2) there's som
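The code example itself got cut off in the comment, but the idea presumably looked something like the following; call_func_with_args is the only name taken from the post, everything else is my reconstruction:

import sqlite3

def call_func_with_args(func, *args):
    # A trivial higher-order helper: the scanner has to work out that
    # whatever is passed in here actually gets executed.
    return func(*args)

def lookup(conn, user_input):
    # Tainted data reaches the query through an indirect call, with
    # string formatting instead of placeholders -- the classic mistake.
    return conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
call_func_with_args(lookup, conn, "anything' OR '1'='1")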
Re: (Score:2)
How do you know whether the data will ever be sent to the DB? That's the problem. You can't simply mock the DB connection and watch for bad inbound queries, because there might be an unsafe query that only gets executed once every 10,000,000 page views. The hard part is telling for sure whether any given piece of data can possibly get passed to a given function, especially when you can pass functions around as arguments to other functions.
At any rate, no, you don't ever have to check the data for single quote
Re:I work at Veracode; here's how we test. (Score:5, Informative)
Re: (Score:2)
Re: (Score:3)
Another related problem I've had is that XSS seems to have a wide range of definitions, and is such a vaguely-defined concept that it applies to a lot of valid web applications.
I've seen a number of definitions of XSS that include all cases where a CGI program gets a URL for a third site, and sends an HTTP request there. I have a number of sites whose CGI software is designed to work exactly this way. The data is distributed across several hundred other sites, only a few of them mine. My main sites ha
what do you expect? (Score:4, Insightful)
This is capitalism/corporations. It's all about profit, and spending extra on IT cuts into the bottom line.
Economy is bad, so companies make cuts. Personnel, IT, Security, and everything but the CEO's bonuses get cut.
Re:what do you expect? (Score:4, Interesting)
Re: (Score:2, Insightful)
If I gave you enough time to do development right, the competition would beat us to market, drive us out of business, and you would be out of a job.
Don't think it is any different working for one of our competitors, they will overwork you just as hard for fear of US beating THEM to the market.
The market has shown a surprisingly high tolerance for bugs and security gaps, so we simply can't afford to proactively fix those.
And if you don't like my high bonus....go start your own company. After realizing just
Re: (Score:2)
Re: (Score:2)
Development is almost always the last link in the chain and, as such, always the department under constant crunch time.
In my experience, QA is the last link in the chain; however, it is the Build team that gets crunched when development overruns. (And, as you pointed out, it's not always development's fault that they overrun.)
Hey, that's the company I work for .. (Score:2)
Strange, and I thought I knew all the software developers working at the company.
Re: (Score:2)
I can make wild-eyed inaccuracies too. I mean, it couldn't have anything to do with laws ensuring that failing at data security means less than a slap on the wrist. Wait, it means exactly that: you can cut everything and then simply offer an apology. This of course won't really change until either the laws or case law catches up with the theft of consumer data.
Re: (Score:3, Insightful)
I am sure your point is a part of the problem, but in my (many years of) experience, this has a lot more to do with a myriad of factors, none of which really outweighs the others by much.
I am an independent developer who works on projects with security in mind from the ground up. Time/budget be damned, as it's my reputation on the line. If they can't pay for what it is worth, I tell them to find another developer.
They tend to learn the hard way — it was a better option to stick with a security minded
Re: (Score:2)
I've seen comments that to a lot of management, the IT department is conceptually similar to the janitorial department, except that the latter keeps the physical facilities clean while the former keeps the data clean (and does a poorer job at its task ;-). Both are pure operational costs that bring in no income, so their cost should be minimized.
It's funny that I've seen this attitude even when the company's products depends in large part on their software people. But the people who build the softwa
That's why we outsourced our IT to the Cloud (Score:4, Funny)
Now it's not my problem, it's my Cloud provider's problem.
Re: (Score:1)
Nothing new here (Score:3, Interesting)
Re: (Score:1)
If you do, that gives new light to the name "Anonymous tipster".
Re: (Score:1)
If you do, that gives new light to the name "Anonymous tipster".
Not only new light, also a Slashdot nick, an email address, a homepage, a picture and a pretty good estimate of your nationality. All stored in one of the world's most privacy conscious companies. Oh the irony...
Re:Nothing new here (Score:5, Insightful)
Re: (Score:1)
THANK YOU!!!
I can't believe companies aren't held responsible for their (lack of) actions as regards security!!! It makes me mad!!!!
It seems like we just make the people that find and exploit the security hole out to be the bad guys, even though it was the company's fault in the first place for having the security hole! We are in a cyber world now, and web security should be a higher priority, especially if you save personal information (credit card numbers come to mind).
Now maybe LulzSec and Anonymous aren
Re:Nothing new here (Score:4, Interesting)
the media never seem to hold the businesses who left the door open to account.
To a point, I understand their logic: you don't blame the victim. But a company still shipping SQL injection holes in 2011 should be dragged through the mud and humiliated. Maybe someone needs to start a newsroom consulting company where reporters call for technical clarification:
Reporter: Hey, Amalgamated Bookends got hacked by someone who replaced the BIOS on their RAID cards with a webserver. Who's in the wrong?
Consultant: Wow! That's a pretty ingenious trick. I hope they catch that hacker!
Reporter: Hey, Shortcake, LTD got hacked by someone who added "?admin=true" to their website's URL. Is that bad?
Consultant: See if Shortcake's sysadmin is somehow related to the owner. I bet it's his nephew.
Reporter: Hey, Sony...
Consultant: LOL dumbasses
Uh huh (Score:5, Insightful)
Security auditing company produces report that conveniently shows that their services are desperately needed. News at eleven.
Re: (Score:2)
Just because they're biased doesn't mean the report is untrue--it just means there's bias.
Re: (Score:2)
Re: (Score:2)
Yeah but in this case they're probably right... (Score:5, Interesting)
Where I work, every time we get told to put our details into some new provider system for expenses, business travel or whatever (happens regularly with corporate changes), we see who can hack it first. We're developers, it's our personal data, why wouldn't we check?
The fraction that are hacked in minutes is probably near 50%, and 32% for SQL injection is probably about right.
I'm not sure which is more depressing - the state of the sites or that even though we have a "security" consultancy practice in house, we get corporate edicts to put our data into sites that we haven't even bothered to audit to the extent of sticking a single quote in a couple of form fields or changing the userid in the url...
Re: (Score:2)
Just wanted to clarify with my sibling posts that I'm not even saying that the report is wrong, just that it's incredibly biased. As a professional web developer, I'm quite certain there are many sites with XSS / CSRF / SQL injection issues.
Re: (Score:2)
It's the Same Everywhere (Score:5, Interesting)
You have to realize that somewhere on the net there's a surveillance camera forum with guys saying 'businesses are too cheap to invest in multiple cam setups to cover exploitable deadzones'... and there's a locksmith forum with guys saying 'These companies are still relying on double bolt slide locks, when everyone knows they can be bypassed with a simple Krasner tool!'...and there's a car autosecurity forum wondering why companies still use basic Lo-jack instead of the new XYZ system.. and don't forget the personnel consulting forum where everyone complains that companies don't invest enough in training to recognize grifting attempts on employees.
It's a never ending list, and to expect everyone to be on top of all of them at all times isn't realistic.
Re: (Score:2)
It's a valid point, but on the other hand you can't routinely try breaking into random houses or cars with little chance of getting caught, and then use them undetected for your personal gains. Your crappy lock will do unless someone from your neighbourhood personally targets your house. With computer security there is a constant global crime spree trying all the locks all the time. This is why I think that computer security needs to be handled with extra care.
Re: (Score:1)
Re: (Score:2)
Why isn't private data also a calculated risk, same as physical goods? Both have a cost associated with securing them. Both have a cost associated with losing them. Security is, always has been, and always will be a cost/benefit analysis. If losing data costs less than securing it, then why bother? It's cheaper to clean up the mess than prevent it. Until losing data has a higher cost than security, you aren't going to see it treated well. This idea that virtual things are somehow different from real thi
Re: (Score:1)
Re: (Score:2)
Then we need to hold them accountable for losing it. We should not expect other people to safeguard our things out of the goodness of their hearts. When you give your physical goods to another party for safekeeping, you sign a contract stating what they are and are not responsible for. When you give packages to UPS, UPS accepts a certain amount of liability if they should damage or lose the package. When you place things in storage, the storage company accepts a certain amount of liability. Before you
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
This would provide some interesting metrics (number of failures, severity, dumbness,
Re: (Score:2)
It is. It's just a question of what value you put on your 'goods', physical or information. I lock my place with a regular dead bolt when I leave, the building is secure, and there is a concierge/security. On the other hand, Fort Knox [wikipedia.org] has steel and concrete walls and an entire army base around it, guarding it. It's a question of what level of security you need. Make the calculation. Most people figure information is far more important since quite often you can lose mor
Re: (Score:1)
And you then are a sane, rational person. As more people begin making that same choice, companies will adjust their risk models and we will get better security. Unfortunately, it's a slow migration. Look at the number of people still giving information to Sony.
Security ratings would be useful. Pretty much everything else has some kind of consumer rating nowadays.
Re: (Score:2)
Well I think so. But I'm sure there are a number of people around here who would argue with you about that. :) But thanks none-the-less. ;)
Re: (Score:2)
Re: (Score:2)
I'm interested to hear more about this Krasner tool..... (I have a friend who picks locks as his party piece and it sounds like the perfect Xmas present ;)
Re: (Score:2)
Little Bobby [xkcd.com] does not expect you to be on top of everything. The basic stuff (lock your car doors, use placeholders in SQL statements) should be a reasonable expectation.
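For anyone wondering, a minimal sketch of what "use placeholders" means, assuming Python's sqlite3 (any DB API with parameter binding works the same way):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"   # Little Bobby Tables

# Bad: string formatting splices attacker-controlled text into the SQL.
# conn.execute("INSERT INTO students (name) VALUES ('%s')" % name)

# Good: the driver treats the value as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))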
Re: sensational (Score:2)
However, they are most certainly included in this report to make it more "sensational".
Ehm, no, they are included because it is hard to tell what the program is doing. Not all things can be resolved with rules, e.g. a chain of regex replaces. And most of the time you cannot brute-force it by checking all inputs either.
All you can do then is determine the possible outputs by some rules, so a false positive is reported whenever your rules are not exact.
What's the real story? (Score:3)
The precipitous drop in the "pass" rate for applications was caused by the introduction of new, tougher grading guidelines, including a "zero tolerance" policy on common errors like SQL injection and cross site scripting holes in applications, Veracode said.
Is the story that SQL Injection and XSS are still a problem or that Veracode just recently took a "zero tolerance" stance on SQL Injection and XSS in the applications they test?
Re: (Score:1)
Our cofounders (I'm director of product management at Veracode) helped to coauthor the responsible disclosure standard, and it's linked on our web site [veracode.com]. Short version: we don't disclose details about customer findings.
68% isn't hard (Score:2)
no one cares about security (Score:1)