Security IT

Study Shows Many Sites Still Failing Basic Security Measures

Orome1 writes with a summary of a large survey of web applications by Veracode. From the article: "Considered 'low hanging fruit' because of their prevalence in software applications, XSS and SQL Injection are two of the most frequently exploited vulnerabilities, often providing a gateway to customer data and intellectual property. When applying the new analysis criteria, Veracode reports eight out of 10 applications fail to meet acceptable levels of security, marking a significant decline from past reports. Specifically for web applications, the report showed a high concentration of XSS and SQL Injection vulnerabilities, with XSS present in 68 percent of all web applications and SQL Injection present in 32 percent of all web applications."
This discussion has been archived. No new comments can be posted.

  • Citicorp Hack (Score:5, Interesting)

    by Anonymous Coward on Wednesday December 07, 2011 @12:43PM (#38292106)

    Then there is the Citicorp hack, where they don't even bother hashing the account numbers in the URL...

  • 200 (Score:5, Insightful)

    by badran ( 973386 ) on Wednesday December 07, 2011 @12:46PM (#38292152)

    I wonder how they test. Some sites that I manage return the user to the homepage on a hack attempt or unrecoverable error, resulting in a 200 return. Would they consider such a system hacked because they got a 200 OK response, or not?

    • Re:200 (Score:5, Interesting)

      by slazzy ( 864185 ) on Wednesday December 07, 2011 @01:08PM (#38292418) Homepage Journal
      One of the sites at a company I worked for returns fake data when people attempt SQL injection, sort of a honeypot to keep hackers interested long enough to track them down.
      • That's awesome. They should open source that component.

        • by phorm ( 591458 )

          Hmmm, how about
          a) Have a secondary instance running with dummy/fake data
          b) Have a wrapper around queries that checks for attempted injections (perhaps a pre/post sanitization check); if the query looks like an injection attempt, grab data from the fake DB
          c) Watch for people using data from the fake DB; attempts to use a fake (but realistic enough to pass a smell test) CC# get flagged to Visa as fraud attempts... (rough sketch below)
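
          Something like this in Python, purely as a sketch (the database handles, the regex, and log_and_flag are made-up placeholders, and parameterized queries are still the real defense):

          # Rough sketch: route suspicious-looking queries to a decoy database.
          import re
          import sqlite3

          real_db = sqlite3.connect("real.db")
          decoy_db = sqlite3.connect("decoy.db")   # same schema, fake data

          SUSPICIOUS = re.compile(r"('|--|;|\bunion\b|\bor\s+1=1\b)", re.IGNORECASE)

          def log_and_flag(params):
              # Hypothetical alerting hook: record the attempt for follow-up.
              print("possible injection attempt:", params)

          def guarded_query(sql, params=()):
              # If any bound parameter looks like an injection attempt, answer
              # from the decoy DB and flag it; otherwise use the real data.
              if any(SUSPICIOUS.search(str(p)) for p in params):
                  log_and_flag(params)
                  return decoy_db.execute(sql, params).fetchall()
              return real_db.execute(sql, params).fetchall()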

    • Why wouldn't you return a more appropriate code (something from 4xx or 5xx) in those cases? Since you can always send whatever content you want along with (almost) any code, might as well give standards-compliant HTTP feedback.

      • I should add that appropriate error codes can help drive off traffic from automated scanners of various sorts, looking for open proxies and other problems. Things like your 404 or 401 pages should definitely not return a 200 OK, for that reason if no other.
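
        For what it's worth, you can do both at once: serve the same friendly content the homepage redirect would have shown, just with an honest status code. A minimal sketch with Flask (a made-up app, not anyone's real setup):

        # Minimal sketch: friendly error pages that still report honest status codes.
        from flask import Flask

        app = Flask(__name__)

        FRIENDLY = "<html><body>Sorry, something went wrong. <a href='/'>Home</a></body></html>"

        @app.errorhandler(404)
        def not_found(error):
            # Same content a homepage redirect would show, but scanners,
            # proxies, and well-behaved clients still see a 404, not a 200.
            return FRIENDLY, 404

        @app.errorhandler(400)
        def bad_request(error):
            return FRIENDLY, 400

        if __name__ == "__main__":
            app.run()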

      • by badran ( 973386 )

        Because I have no control over the specifications.

    • by Anonymous Coward on Wednesday December 07, 2011 @02:41PM (#38293540)

      I work at Veracode, and can share how we test. I'll be brief and technical here, as there's lots of marketing material available elsewhere. In short, we scan web sites and web applications that our customers pay us to scan for them; the "State of Software Security" report is the aggregate sanitized data from all of our customers. We provide two distinct kinds of scans: dynamic and static.

      With dynamic scans, we perform a deep, wide array of "simulated attacks" (e.g. SQL Injection, XSS, etc.) on the customer's site, looking for places where the site appears to respond in a vulnerable way. For example, if the customer's site has a form field, then our dynamic scanner might try to send some JavaScript in that field, and then detect whether the JavaScript is executed. If so, that's an XSS vulnerability. As you might imagine, the scanner can try literally hundreds of different attack approaches for each potentially vulnerable point on the site.
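
      (In caricature, a reflected-XSS probe is not much more than the toy below; the URL and field name are invented, and the real engine handles encodings, injection contexts, and stored XSS, but it shows the shape of the check.)

      # Toy illustration of a reflected-XSS probe; the target and parameter are
      # invented, and a real scanner tracks rendering context, not just echoes.
      import requests

      TARGET = "https://example.com/search"        # placeholder URL
      PAYLOAD = "<script>alert('xss-probe-12345')</script>"

      resp = requests.get(TARGET, params={"q": PAYLOAD})
      if PAYLOAD in resp.text:
          # The payload came back unescaped, so it would execute in a browser.
          print("possible reflected XSS at", resp.url)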

      The static scans are a little fancier. The customer uploads to Veracode a copy of the executable binary build of their application (C/C++, Java, .NET, iPhone app, and a couple of other platforms). From the executable binary, the Veracode systems then create a complete, in-depth model of the program, including control flow, data flow, program structure, stack and heap memory analysis, etc. This model is then scanned for patterns of vulnerability, which are then reported back to the customer. For example, if the program accepts data from an incoming HTTP request, and any portion of that data can somehow find its way into a database query without being cleansed of SQL escape characters, then the application is vulnerable to SQL Injection attacks. There are hundreds of other scans, including buffer overflows, etc.
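
      (In source form, the tainted data flow I just described looks roughly like the toy example below; the real analysis works on the binary, but the pattern is the same.)

      # Toy example of the SQL injection data-flow pattern, in source form.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE users (name TEXT, email TEXT)")

      def lookup_vulnerable(request_param):
          # Tainted: request data is concatenated straight into the query,
          # so SQL metacharacters in the input reach the database parser.
          return db.execute(
              "SELECT email FROM users WHERE name = '%s'" % request_param
          ).fetchall()

      def lookup_safe(request_param):
          # Parameterized: the driver binds the value separately, so the
          # tainted data never becomes part of the SQL grammar.
          return db.execute(
              "SELECT email FROM users WHERE name = ?", (request_param,)
          ).fetchall()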

      Personally, I think what we do at Veracode is pretty amazing, particularly the static binary scans. I mean: you upload your executable, and you get back a report telling you where the flaws are and what you need to fix. The technical gee-whiz factor is pretty high, even for a jaded old-timer like me.

      • by kriegsman ( 55737 ) on Wednesday December 07, 2011 @02:44PM (#38293596) Homepage
        Oops, I wasn't logged in. The above comment is from me, Mark Kriegsman, Director of Engineering at Veracode.
        • Thanks for posting, Mark. I'm curious, though: how do you check for stupid mistakes like that in languages that allow first-class functions? For instance, in Python I could write something like:

          >>> def foo(x): print x
          ...
          >>> arguments = ['hello, world']
          >>> def call_func_with_args(func, args): func(*args)
          ...
          >>> call_func_with_args(foo, arguments)
          hello, world

          Your scanner would have to determine that 1) call_func_with_args executes the passed-in function, and 2) there's som

          • by kriegsman ( 55737 ) on Wednesday December 07, 2011 @05:51PM (#38295894) Homepage
            That is a GREAT question, and the full answer is complicated and partially proprietary. But basically, you've touched on the problem of indirect control flow, which exists in C (call through a function pointer), C++ (virtual function calls), and in Java, .NET, ObjC, etc. The general approach is that at each indirect call site, you "solve for" what the actual targets of the call could possibly be, and take it from there. The specific example you gave is actually trivially solved, since there's only one possible answer in the program; in large scale applications it is what we call "hard." And yes, in some cases we (necessarily) lose the trail; see "halting problem" as noted. But we do a remarkably good job on most real world application code. I've been working with this team on this static binary analysis business for eight or nine years, and we still haven't run out of interesting problems to work on, and this is definitely one of them.
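
            (To make "solve for the targets" concrete, here is a deliberately tiny Python illustration, nothing like real customer code: with two entries in the table, the analysis can enumerate both possible callees at the indirect call and follow the data into each; scale that up to thousands of candidates spread across a whole program and you can see where "hard" comes from.)

            # Tiny illustration of indirect control flow and target resolution.
            def render(template):          # harmless sink
                print(template)

            def run_query(sql):            # security-relevant sink
                print("executing:", sql)

            handlers = {"page": render, "report": run_query}

            def dispatch(kind, payload):
                # Indirect call: the analysis has to work out that `func` can
                # only be render or run_query, then follow data into each one.
                func = handlers[kind]
                func(payload)

            dispatch("report", "SELECT * FROM users WHERE id = 1 OR 1=1")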
            • Sounds like you're doing some really cool stuff, and I admit that I'm kind of jealous because it seems like a lot of fun. Thanks again for the information!
    • by jc42 ( 318812 )

      Another related problem I've had is that XSS seems to have a wide range of definitions; it's a vague enough concept that it gets applied to a lot of valid web applications.

      I've seen a number of definitions of XSS that include all cases where a CGI program gets a URL for a third site, and sends an HTTP request there. I have a number of sites whose CGI software is designed to work exactly this way. The data is distributed across several hundred other sites, only a few of them mine. My main sites ha

  • by Nyder ( 754090 ) on Wednesday December 07, 2011 @12:59PM (#38292308) Journal

    This is capitalism/corporations. It's all about profit, and spending extra on IT cuts into the bottom line.

    The economy is bad, so companies make cuts. Personnel, IT, Security, and everything but the CEO's bonuses get cut.

    • by Anonymous Coward on Wednesday December 07, 2011 @01:11PM (#38292454)
      It also seems to come down to ridiculous timescales. A project is declared, a release date is set in stone. The client overruns their allotted time to come up with requirements/content, the release date stays in stone. The legal teams take forever to draw up and agree on contracts, the release date stays in stone. The IA/UX people miss their deadlines for producing the wireframes, the release date stays in stone. The design team go through a million iterations of whether the drop shadow on the footer text should be mauve or fuchsia and overrun their deadline, the release date stays in stone. The client pops up again with dozens of last-minute change requests, the release date stays in stone. Then it hits development's desk and suddenly the three-month project has to be done in two weeks. Development is almost always the last link in the chain and, as such, always the department under constant crunch time. Developing a complex site with vague specs across half a dozen minds isn't easy, but unlike all the other parts of the chain leading up to this point, it's the part where the client can be most punished if it's not done right, yet nobody ever sees the benefit of allowing sufficient time (and doing sufficient testing).
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        If I gave you enough time to do development right, the competition would beat us to market, drive us out of business, and you would be out of a job.

        Don't think it is any different working for one of our competitors; they will overwork you just as hard for fear of US beating THEM to the market.

        The market has shown a surprisingly high tolerance for bugs and security gaps, so we simply can't afford to proactively fix those.

        And if you don't like my high bonus....go start your own company. After realizing just

        • Not all markets show that tolerance. Video game markets, for example. A lot of them have those "set in stone" release dates, and the games don't come out very well. (Of course, my gaming taste is stuck in the '90s where it belongs, so don't take my word for it.)
      • by Thing 1 ( 178996 )

        Development is almost always the last link in the chain and, as such, always the department under constant crunch time.

        In my experience, QA is the last link in the chain; however, it is the Build team that gets crunched when development overruns. (And, as you pointed out, it's not always development's fault that they overrun.)

      • Strange, and I thought I knew all the software developers working at the company.

    • by Mashiki ( 184564 )

      I can make wild-eyed inaccuracies too. I mean, it couldn't have anything to do with laws ensuring that failing at data security means less than a slap on the wrist. Wait, it means exactly that: you can cut everything and then simply offer an apology. This of course won't really change until either the laws or case law catches up to the theft of consumer data.

    • Re: (Score:3, Insightful)

      by Ramley ( 1168049 )

      I am sure your point is part of the problem, but in my many years of experience, this has a lot more to do with a myriad of factors, none of which really outweighs the others by much.

      I am an independent developer who works on projects with security in mind from the ground up. Time/budget be damned, as it's my reputation on the line. If they can't pay for what it is worth, I tell them to find another developer.

      They tend to learn the hard way — it was a better option to stick with a security minded

    • by jc42 ( 318812 )

      I've seen comments that to a lot of management, the IT department is conceptually similar to the janitorial department, except that the latter keeps the physical facilities clean while the former keeps the data clean (and does a poorer job at its task ;-). Both are pure operational costs that bring in no income, so their cost should be minimized.

      It's funny that I've seen this attitude even when the company's products depend in large part on their software people. But the people who build the softwa

  • by Anonymous Coward on Wednesday December 07, 2011 @01:01PM (#38292336)

    Now it's not my problem, it's my Cloud provider's problem.

    • by Anonymous Coward
      Not sure if you're serious.... but if the cloud provider drops the ball you're the one losing clients.
  • Nothing new here (Score:3, Interesting)

    by vikingpower ( 768921 ) on Wednesday December 07, 2011 @01:02PM (#38292354) Homepage Journal
    I am on a project for (smoke-testing) the core app of a major European airport. Same problems there. Management, after having been informed, said: "Not a priority". I guess only their bonuses are "a priority"? I am thinking seriously of giving pointers to the whole project to Anonymous.
    • by Anonymous Coward

      If you do, that gives new light to the name "Anonymous tipster".

      • by Anonymous Coward

        If you do, that gives new light to the name "Anonymous tipster".

        Not only new light, also a Slashdot nick, an email address, a homepage, a picture and a pretty good estimate of your nationality. All stored in one of the world's most privacy-conscious companies. Oh the irony...

    • by delinear ( 991444 ) on Wednesday December 07, 2011 @01:17PM (#38292554)
      The problem is that the media seem to be in the pocket of big corporations, so when Anonymous inevitably find one of these exploits and steal a bunch of data, the media never seem to hold the businesses who left the door open to account. The lack of security should be a massive topic of debate right now, but instead, outside of certain circles, it's a complete non-issue. During the coverage over here of the various exploits of Anonymous, I don't think I once heard any searching questions asked of the global corporations who allowed a bunch of teenagers to make their security look like the equivalent of a balsa wood door on Fort Knox (and that includes the BBC, who should be the least biased since they're not privately owned, but still either don't want to offend the PR departments of companies who feed them half of their content or just believe the company line and don't bother digging deeper for the real stories).
      • THANK YOU!!!

        I can't believe companies aren't held responsible for their (lack of) actions as it regards security!!! It makes me mad!!!!

        It seems like we just make the people who find and exploit the security hole out to be the bad guys, even though it was the company's fault in the first place for having the security hole! We are in a cyber world now, and web security should be a higher priority, especially if you store personal information (credit card numbers come to mind).

        Now maybe LulzSec and Anonymous aren

      • Re:Nothing new here (Score:4, Interesting)

        by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday December 07, 2011 @04:26PM (#38294834) Homepage Journal

        the media never seem to hold the businesses who left the door open to account.

        To a point, I understand their logic: you don't blame the victim. But a company shipping SQL injection holes in 2011 should be dragged through the mud and humiliated. Maybe someone needs to start a newsroom consulting company where reporters call for technical clarification:

        Reporter: Hey, Amalgamated Bookends got hacked by someone who replaced the BIOS on their RAID cards with a webserver. Who's in the wrong?
        Consultant: Wow! That's a pretty ingenious trick. I hope they catch that hacker!

        Reporter: Hey, Shortcake, LTD got hacked by someone who added "?admin=true" to their website's URL. Is that bad?
        Consultant: See if Shortcake's sysadmin is somehow related to the owner. I bet it's his nephew.

        Reporter: Hey, Sony...
        Consultant: LOL dumbasses

  • Uh huh (Score:5, Insightful)

    by TheSpoom ( 715771 ) <slashdot&uberm00,net> on Wednesday December 07, 2011 @01:03PM (#38292360) Homepage Journal

    Security auditing company produces report that conveniently shows that their services are desperately needed. News at eleven.

    • Just because they're biased doesn't mean the report is untrue; it just means there's bias.

    • The report seems suspect to me, but the other way. I deal with security at my job, and most applications of any complexity are likely to be open to SQL injection and XSS, especially in PHP, which dominates the web right now. So, if anything, their numbers seem low unless a large share of the sites they're scanning are static HTML.
    • I've seen this response before on other articles. But who else other than a security auditing company is going to do security audits? A company's internal IT may do this, and I say MAY do this, but they're certainly not going to publish their results to the public. Should we discredit companies that do automobile crash tests because they find that cars are inherently unsafe and need crash testing done to make them safer?
    • by ray-auch ( 454705 ) on Wednesday December 07, 2011 @01:38PM (#38292788)

      Where I work, every time we get told to put our details into some new provider system for expenses, business travel or whatever (happens regularly with corporate changes), we see who can hack it first. We're developers, it's our personal data, why wouldn't we check?

      The fraction that are hacked in minutes is probably near 50%, and 32% for SQL injection is probably about right.

      I'm not sure which is more depressing - the state of the sites, or that even though we have a "security" consultancy practice in-house, we get corporate edicts to put our data into sites that we haven't even bothered to audit to the extent of sticking a single quote in a couple of form fields or changing the userid in the URL...

    • Just wanted to clarify with my sibling posts that I'm not even saying that the report is wrong, just that it's incredibly biased. As a professional web developer, I'm quite certain there are many sites with XSS / CSRF / SQL injection issues.

    • by Jaime2 ( 824950 )
      Yup. However, having just had one of my applications scanned by one of these tools, I can say that if you fail one of these scans, your app is worse than it says it is. I got a mostly clean bill of health, but the feedback I got was ridiculous. For example, the security department says that all pages of all publicly facing web apps should use SSL. Fine. But the scan dinged me for caching pages delivered by SSL. So, do I violate the mandate to use SSL on trivial data? Do I violate the common sense a
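
      (The usual compromise, for what it's worth, is per-response cache headers rather than an all-or-nothing rule; a purely illustrative sketch with Flask, routes made up:)

      # Illustrative only: everything over SSL, but trivial pages stay cacheable
      # while sensitive pages are marked no-store.
      from flask import Flask, make_response

      app = Flask(__name__)

      @app.route("/help")
      def help_page():
          resp = make_response("public help text")
          resp.headers["Cache-Control"] = "public, max-age=3600"
          return resp

      @app.route("/account")
      def account_page():
          resp = make_response("sensitive account data")
          resp.headers["Cache-Control"] = "no-store"
          return resp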
  • by derrickh ( 157646 ) on Wednesday December 07, 2011 @01:14PM (#38292504) Homepage

    You have to realize that somewhere on the net there's a surveillance camera forum with guys saying 'businesses are too cheap to invest in multiple cam setups to cover exploitable dead zones'... and there's a locksmith forum with guys saying 'These companies are still relying on double bolt slide locks, when everyone knows they can be bypassed with a simple Krasner tool!'... and there's a car security forum wondering why companies still use basic LoJack instead of the new XYZ system... and don't forget the personnel consulting forum where everyone complains that companies don't invest enough in training to recognize grifting attempts on employees.

    It's a never-ending list, and to expect everyone to be on top of all of them at all times isn't realistic.

    D

    • It's a valid point, but on the other hand, you can't routinely try breaking into random houses or cars with little chance of getting caught and then use them undetected for your personal gain. Your crappy lock will do unless someone from your neighbourhood personally targets your house. With computer security there is a constant global crime spree trying all the locks all the time. This is why I think that computer security needs to be handled with extra care.

    • It's one thing when physical goods are at stake, but another entirely when private data of customers is at stake. The former is a calculated risk; the latter should be considered sacred. Furthermore, it's not like you can just automate walking down the street and trying to open every lock, but the same thing can be and is easily automated on a computer. Take a look at your firewall logs; chances are you have a fair bit of attempted "break-ins" that are just bots scanning an IP range for vulnerabilities. I'd be wi
      • by mjr167 ( 2477430 )

        Why isn't private data also a calculated risk, same as physical goods? Both have a cost associated with securing them. Both have a cost associated with losing them. Security is, always has been, and always will be a cost/benefit analysis. If losing data costs less than securing it, then why bother? It's cheaper to clean up the mess than prevent it. Until losing data has a higher cost than security, you aren't going to see it treated well. This idea that virtual things are somehow different from real thi

          • In regard to things like trade secrets and company information, I agree it is a calculated risk. But for companies with customer data (and doubly so for companies storing financial data), they aren't just losing property; they are losing property that really isn't theirs to lose.
          • by mjr167 ( 2477430 )

            Then we need to hold them accountable for losing it. We should not expect other people to safeguard our things out of the goodness of their hearts. When you give your physical goods to another party for safekeeping, you sign a contract stating what they are and are not responsible for. When you give packages to UPS, UPS accepts a certain amount of liability if they should damage or lose the package. When you place things in storage, the storage company accepts a certain amount of liability. Before you

              • I think you hit the nail on the head with your first sentence. Obviously, companies aren't securing data out of the goodness of their hearts. I'm really not one for adding more laws, but it seems to me there need to be legal repercussions for negligence in regard to customer data. Of course, the issue is far deeper than a merely technical one. The US government isn't exactly known for holding corporations accountable; they much prefer to hold an individual's feet to the fire. So hold the whole damn company
              • by cusco ( 717999 )
                Finland had to do something similar to deal with traffic ticket scofflaws. People would get a ton of tickets for running red lights, speeding, etc. and since they had money they just paid them and kept doing the same thing. Now a speeding ticket for a person who makes $1 million/year is 20x the cost of the same ticket for someone who makes $50,000/year. Of course if we did that here in the States the Bush twins would have bankrupted their families in no time (not necessarily a bad thing).
                • by mjr167 ( 2477430 )
                  That is an interesting proposal. I have always viewed most traffic tickets as fund raising by the police. Do you think the rest of the states should follow Virginia's example and start adding heavy ($1000+) fines and jail time to certain traffic violations?
                  • by cusco ( 717999 )
                    I'd prefer to see it applied to certain traffic VIOLATORS (congresscritters in particular). For an awful lot of people a $1000 fine would tip them over the edge to financial ruin, especially if coupled with days of work lost sitting in jail. I guess my answer would be: if the violator makes more than $X, apply the new fine; if the violator makes less than $X, keep the current fine.
            • by azalin ( 67640 )
              In my personal happy place we would have an organization to which people could report security vulnerabilities. The responsible company would be contacted and given some fixed period of time (e.g. three weeks, plus maybe a bonus week if they provide a good reason) to respond and fix it. After that, the information is published and the company faces charges of gross negligence if bad things happen to them and their data.

              This would provide some interesting metrics (number of failures, severity, dumbness,
        • Why isn't private data also a calculated risk

          It is. It's just a question of what value you put on your 'goods', physical or informational. I lock my place with a regular dead bolt when I leave, the building is secure, and there is a concierge/security. On the other hand, Fort Knox [wikipedia.org] has steel and concrete walls and an entire army base around it, guarding it. It's a question of what level of security you need. Make the calculation. Most people figure information is far more important since quite often you can lose mor

          • by mjr167 ( 2477430 )

            And you then are a sane, rational person. As more people begin making that same choice, companies will adjust their risk models and we will get better security. Unfortunately, it's a slow migration. Look at the number of people still giving information to Sony.

            Security ratings would be useful. Pretty much everything else has some kind of consumer rating nowadays.

            • And you then are a sane, rational person.

              Well I think so. But I'm sure there are a number of people around here who would argue with you about that. :) But thanks nonetheless. ;)

          • by cusco ( 717999 )
            The other day there was a thread about someone who had tested the security of the online payment system they had signed up for and found it disastrously bad, something which I do myself. About a quarter of the posts in the thread were people pounding on him for 'trespassing' and the like, which made no sense to me at all. If I'm going to give them my personal information and/or credit card number then their web site had better be able to handle the very, very basic attack vectors that I know. A surprisin
    • by oPless ( 63249 )

      I'm interested to hear more about this Krasner tool... (I have a friend who picks locks as his party piece and it sounds like the perfect Xmas present ;)

    • Little Bobby [xkcd.com] does not expect you to be on top of everything. The basic stuff (lock your car doors, use placeholders in SQL statements) should be a reasonable expectation.

  • by dreemernj ( 859414 ) on Wednesday December 07, 2011 @01:43PM (#38292834) Homepage Journal

    The precipitous drop in the "pass" rate for applications was caused by the introduction of new, tougher grading guidelines, including a "zero tolerance" policy on common errors like SQL injection and cross site scripting holes in applications, Veracode said.

    Is the story that SQL Injection and XSS are still a problem or that Veracode just recently took a "zero tolerance" stance on SQL Injection and XSS in the applications they test?

  • since the definition of XSS is ridiculously broad. It took me a while to wrap my head around it when I was starting out, because when you're looking up how to avoid XSS attacks on your page you come across some books that talk about preventing code injection on your forums and others talking about code running in the wrong security context.
  • Everyone would rather have their site cheap and delivered straight away than secure. It's no surprise lots of sites are insecure.
