Bad Software Runs the World 349

whitroth tips a story at The Atlantic by James Kwak, who bemoans the poor quality of software underpinning so many important industries. He points out that while user-facing software is often well-polished, the code running supply chains, production lines, and financial markets is rarely so refined. From the article: "The underlying problem here is that most software is not very good. Writing good software is hard. There are thousands of opportunities to make mistakes. More importantly, it's difficult if not impossible to anticipate all the situations that a software program will be faced with, especially when — as was the case for both UBS and Knight — it is interacting with other software programs that are not under your control. It's difficult to test software properly if you don't know all the use cases that it's going to have to support. There are solutions to these problems, but they are neither easy nor cheap. You need to start with very good, very motivated developers. You need to have development processes that are oriented toward quality, not some arbitrary measure of output."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday August 08, 2012 @03:18PM (#40921365)

    50% of all software is of below-average quality.

    • by K. S. Kyosuke ( 729550 ) on Wednesday August 08, 2012 @03:23PM (#40921437)
      That would be "below-median quality", I suppose.
    • That's just mean!
    • by pympdaddyc ( 586298 ) on Wednesday August 08, 2012 @03:28PM (#40921513)

      That depends on your definition of average; mathematically speaking, that's not true. What percent of numbers are below average in this set: {1, 1, 1, 1, 1000}?

      This isn't pedantry; it's a meaningful distinction: I expect good software is vastly outnumbered by bad, and even good software developers can be forced into kludges by time pressures, bad team culture, etc. I don't see any reason to think that code quality globally resembles a normal curve.

      • Re: (Score:2, Insightful)

        None are below the median and mode; 80% are below the mean.
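
        A quick sanity check of that arithmetic, as a minimal Python sketch (the skewed set is the one from the parent post):

          from statistics import mean, median, mode

          quality = [1, 1, 1, 1, 1000]  # the parent post's skewed "quality" scores

          avg = mean(quality)    # 200.8
          med = median(quality)  # 1
          mod = mode(quality)    # 1

          below_mean = sum(q < avg for q in quality) / len(quality)    # 0.8 -> 80% below the mean
          below_median = sum(q < med for q in quality) / len(quality)  # 0.0 -> none below the median (or mode)
          print(avg, med, mod, below_mean, below_median)
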
      • That depends on your definition of average; mathematically speaking, that's not true. What percent of numbers are below average in this set: {1, 1, 1, 1, 1000}?

        This isn't pedantry; it's a meaningful distinction: I expect good software is vastly outnumbered by bad, and even good software developers can be forced into kludges by time pressures, bad team culture, etc. I don't see any reason to think that code quality globally resembles a normal curve.

        Indeed. I would actually think that global software quality resembles an exponential curve with x representing how craptacularly bad the software is. Very few systems are near the origin (and thus are well written.)

      • by ceoyoyo ( 59147 )

        It depends on your definition of "average" AND on the underlying distribution. If the distribution of software quality is symmetric then 50% of values are below the mean, median and mode (the three most common measures of central tendency or "average").

      • Why wouldn't it? Code quality is something driven by the uncoordinated decisions of millions of individuals around the world. Such things are usually normally distributed [wikipedia.org].

    • Is there a real definition for quality?
      • by gorzek ( 647352 )

        Yes, actually. You can measure it in terms of raw numbers of defects found. You can also determine the number of defects produced per some measure of effort (e.g. 1000 man-hours.)

        You need quality control, which is about ensuring good practices that are documented, repeatable, and measurable, and quality assurance, in which the results of your process are analyzed for their overall quality.

        Good QC should make QA's job a lot easier. (Or harder, depending on how you look at it.)

        There is, of course, no number t
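
        For what it's worth, the "defects per some measure of effort" idea usually boils down to a simple density metric; a rough Python sketch with entirely made-up numbers:

          def defect_density(defects_found, kloc):
              """Defects per thousand lines of code -- one common normalization."""
              return defects_found / kloc

          def defects_per_effort(defects_found, man_hours):
              """Defects produced per 1,000 man-hours of development effort."""
              return defects_found / (man_hours / 1000.0)

          # Hypothetical project: 42 defects found in 15 KLOC built with 2,500 man-hours.
          print(defect_density(42, 15))        # 2.8 defects/KLOC
          print(defects_per_effort(42, 2500))  # 16.8 defects per 1,000 man-hours
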

        • by bluefoxlucid ( 723572 ) on Wednesday August 08, 2012 @04:05PM (#40922009) Homepage Journal

          Raw defect counts don't indicate quality. A defect where the system occasionally has to stop and replay a data write-out because of some hokey disk driver is not a great problem: the disk driver is buggy, and the system is using a shitty hack to work around it. By contrast, a much better written driver with a very obscure corner-case race condition that, 1/100th as often, simply destroys a ton of data has a huge problem.

          Linux is like that. If a hard disk drive starts to not respond, it'll send it a reset command and continue. It'll mount the filesystem read-only without special options; in some conditions that's important, because the OS view of the FS might be completely different due to undetected write failures. In any case, it's still up and you can get information out of the kernel. I've had the system hose itself so bad I couldn't actually read the logs or run dmesg, but if your boot process copies a few utilities into a ramdisk and sets tty1-5=login tty6='chroot /recovery login' you should be able to switch to that tty and run. Bonus points for statically linking chroot on boot (i.e. the boot process copies everything in from installed fs, then uses ld to statically link chroot to all its dependencies), so in a barely-functional active ssh session you can '/recovery/bin/chroot /recovery /bin/sh'

          A high-quality system that fails 1/10000 of the time and destroys everything is worse than a low-quality system that fails 1/100 of the time without cause for concern. Yet the low-quality system is clearly shitty.

          • by gorzek ( 647352 )

            I agree with you. Severity is certainly relative (and I should have said as much when talking about raw defects.)

          • Not only that, but you've identified another problem with judging quality: software usually does not stand on its own; it's part of a larger system. What if a piece of software is well-written, but the libraries it links to are shit? A programmer may not have much choice if he's required to use system libraries, or some special vendor-provided libraries. He can add in workarounds for some of the bugs in the libraries, but that's it.

        • by Smauler ( 915644 )

          All defects are not created equal. A known defect that hits in a one-in-a-billion UI off-chance and just throws up an error, nothing more, is still a defect, but 99.9% of people would not fix it if it required significant work. Why would they - it's a waste of time. A defect that dumps all keyboard input onto the internet from a secure system every time is also just one defect - it's just a bigger one.

        • by lennier ( 44736 )

          You can measure it in terms of raw numbers of defects found.

          That's a pretty bad metric. Defects found is most certainly not the same as defects existing, or we wouldn't have the security situation we have today.

          Granted, yes, we know those defects are in there because those defects are eventually found... by someone... but the huge elephant in the room is that 80% of security defects are not found by the company which wrote the code [sans.edu]. In any other industry that kind of failure rate would be not just criminal but verging on hostile military action.

          What we need is posit

      • by luis_a_espinal ( 1810296 ) on Wednesday August 08, 2012 @04:23PM (#40922289)

        Is there a real definition for quality?

        When it comes to software, yes. We have had it for decades: quality entails

        1. architecture, design and implementation decisions that minimize the cost of change,
        2. that does not deteriorate with each change (or at least deteriorates linearly with each change),
        3. that exhibits strong cohesion and loose coupling (see the sketch below),
        4. that permits reasonable maintainability and configurability,
        5. with a relatively small bug count per whatever metric one picks (FP, SLOCs, etc.),
        6. that is amenable to testing,
        7. with architecture and design that are understandable (tied with #1)

        Just to mention a few. There might not be a single universal definition of software quality, but there are common desirable attributes that have been known for decades. It's when people code without knowing these attributes (or not giving a shit about them) that we get the crappy stuff we get.

        Software doesn't have to be perfect. It simply needs to be economical due to its qualities (again, qualities well known for decades.)
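
        To make #3 and #6 concrete, a small sketch in Python (the class and method names are hypothetical; it only illustrates depending on an abstraction instead of a concrete vendor class):

          from typing import Protocol

          class PaymentGateway(Protocol):
              def charge(self, account_id: str, cents: int) -> bool: ...

          class OrderService:
              """Depends on an abstraction, not a vendor SDK (loose coupling), and does one job (cohesion)."""
              def __init__(self, gateway: PaymentGateway):
                  self.gateway = gateway

              def place_order(self, account_id: str, cents: int) -> str:
                  return "placed" if self.gateway.charge(account_id, cents) else "payment_failed"

          # Amenable to testing (#6): a trivial fake stands in for the real gateway.
          class AlwaysApproves:
              def charge(self, account_id: str, cents: int) -> bool:
                  return True

          assert OrderService(AlwaysApproves()).place_order("acct-1", 999) == "placed"
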

    • Your one-liner has a bug. It's Open Source though, so it's a shallow bug.

    • Is the distribution curve symmetric?
  • by Anonymous Coward

    "Do you not know, my son, with how little wisdom the world is governed?" -- Axel Oxenstierna

  • Nothing New (Score:5, Insightful)

    by Herkum01 ( 592704 ) on Wednesday August 08, 2012 @03:21PM (#40921415)

    That is because corporate infrastructure software does not generate revenue. Why spend money that does not directly impact the bottom line?

    Maybe when you get people who actually understand the underlying business, rather than an MBA graduate, that will change.

    • Fixed (Score:5, Insightful)

      by SuperKendall ( 25149 ) on Wednesday August 08, 2012 @03:24PM (#40921453)

      That is because corporate infrastructure software does not obviously generate revenue, and losses of opportunity are invisible.

      Fixed it for you.

      Basically just supporting the last half of what you said.

    • That is because corporate infrastructure software does not generate revenue...

      Hi. Corporate infrastructure supports the creation of whatever the corporation sells. Thus, creates wealth.

      Where any specific bit of the enterprise fits in the revenue generating chain is an arbitrary organizational decision. Ultimately the entire entity is there to give the sales force something to sell and the ability to accept purchases and support customers.

      In other words, the pointy end of the stick is useless without the rest of the stick. It's just prick lying on the forest floor under a pile of bear

    • I think software engineers share just as much of the blame. Even if they write good clean code, they are usually terrible at making something user friendly and their solutions can be rather wonky.

      The challenge of making good software is getting business people and engineers working together. If one side has too much power, the likely result will be crap.

    • That is because corporate infrastructure software does not generate revenue. Why spend money that does not directly impact the bottom line?

      It does impact the bottom line; it's just harder to see and measure. When lots of employees are wasting time rebooting after crashes, or repeatedly navigating a slow and/or suboptimal user interface, that's wasted time that costs productivity and money. Just because you aren't measuring it doesn't mean it isn't happening.
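
      A back-of-the-envelope sketch of how fast that hidden cost adds up (every number here is invented purely for illustration):

        employees = 2000
        minutes_lost_per_day = 10     # reboots, slow screens, workarounds
        loaded_cost_per_hour = 60.0   # hypothetical fully loaded labor cost, $/hour
        work_days_per_year = 230

        annual_cost = employees * (minutes_lost_per_day / 60) * loaded_cost_per_hour * work_days_per_year
        print(f"${annual_cost:,.0f} per year")  # $4,600,000 per year
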

    • Re:Nothing New (Score:4, Insightful)

      by frisket ( 149522 ) <peter@sil[ ]il.ie ['mar' in gap]> on Wednesday August 08, 2012 @04:42PM (#40922535) Homepage

      That is because corporate infrastructure software does not generate revenue. Why spend money that does not directly impact the bottom line?

      Marketing can always get fat funding to have designers polish the turds on the web site, but the backend people don't have access to that kind of money.

      Maybe when you get people who actually understand the underlying business rather than a MBA graduate, that will change.

      If payroll is threatened, you may get some action, but anything else usually gets a Band-Aid.

    • That is because corporate infrastructure software does not generate revenue.

      Says who? That's like saying a cleaning crew or the electric system does not generate revenue for a supermarket. Revenue is not just a function of what you sell. You need a lot of things to generate revenue. A business that uses corporate infrastructure uses it to generate revenue. The problem in most unorganized enterprises is that they have poor accounting for tracking the cost and revenue of corporate infrastructure. Added to that is that most software developers have no clue about the cost and ROI of th

  • The old adage (Score:5, Insightful)

    by killmenow ( 184444 ) on Wednesday August 08, 2012 @03:21PM (#40921419)
    Good, Fast, Cheap...Pick Two.
    • Re:The old adage (Score:5, Insightful)

      by dkleinsc ( 563838 ) on Wednesday August 08, 2012 @03:24PM (#40921445) Homepage

      Another rule here is out-of-sight-out-of-mind: If management can't actually see the effects of what's going on, they don't care how good it is, which is why UIs can be fantastic while the backend completely sucks.

    • Good, Fast, Cheap...Pick Two.

      Provided neither of them is "Good".
      "Good" is the Pick One choice...

    • by kbolino ( 920292 )

      Good, Fast, Cheap...Pick at most Two.

      FTFY

      I've seen plenty of software in the "none of the above" category.

  • by IflyRC ( 956454 ) on Wednesday August 08, 2012 @03:22PM (#40921425)
    True, most software is badly written, and there are entire jobs dedicated to maintaining legacy and even current systems. Some software is so badly written that it requires a team to prop it up during peak usage times, or war rooms to work out fixes. Managers usually only care about meeting a deadline and push for that. Young guys don't care whether something is correctly written - just that "it works" at that instant in time. Being a good developer requires being enabled to be a good developer by your team.
  • It's a big world (Score:5, Insightful)

    by MozeeToby ( 1163751 ) on Wednesday August 08, 2012 @03:24PM (#40921439)

    There aren't enough 'good' coders in the world to implement all the software that needs to be written, let alone 'very good' ones. Not to mention good architects, designers, requirements analysts, etc., etc., etc. And even if there were, software that needs to work together isn't always designed to do so. Hacks, kludges, and jury-rigged solutions are what hold the tech world together; no amount of wishful thinking is going to change that.

    • by SirGeek ( 120712 )
      Yeah, there are. Companies need to figure out how to assign realistic values to the software ("What will this save me in 1 yr, 2 yr, etc.?") vs. what is cheapest to do now.
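
      As a toy illustration of that kind of valuation (the figures and the 8% discount rate are made up):

        def npv(cash_flows, rate=0.08):
            """Net present value of yearly cash flows, year 0 first."""
            return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

        # Hypothetical: the cheap build costs 100k now and saves nothing later;
        # the good build costs 250k now but saves 60k/year in maintenance for 5 years.
        cheap = npv([-100_000, 0, 0, 0, 0, 0])                          # about -100,000
        good = npv([-250_000, 60_000, 60_000, 60_000, 60_000, 60_000])  # about -10,400
        print(round(cheap), round(good))  # the "good" build is far cheaper in present-value terms
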
      • The implication of your statement is that companies need to sometimes say "this solution would save me money in 5 years vs this solution which costs half as much". There are times, quite possibly the majority of times, where the 'correct' decision is the cheaper one, and what you'll end up with is a world full of 'bad' software. What if the company simply doesn't have the money or time to invest in what you would consider the 'right' solution? What if implementing it the 'right' way is going to make you

    • Ah yes, the mythical good coder who writes it "right" the first time. Yeah...haven't met one yet. I don't think they exist.
      • Furthermore, I'm not even sure there is a right way to write it.
      • Usually, it's not so much that one guy does it "right" the first time vs. others who don't, but more like one guy does it acceptably, and (ideally) flexibly, while another guy does it horribly wrong.

        Sometimes the most valuable skill a developer can have is a good instinct for which business requirements are most likely to change over time.

        • by Duhavid ( 677874 )

          "Sometimes the most valuable skill a developer can have is a good instinct for which business requirements are most likely to change over time"

          Amen!

      • by cduffy ( 652 )

        Ah yes, the mythical good coder who writes it "right" the first time. Yeah...haven't met one yet. I don't think they exist.

        Don Knuth comes pretty close to being a living existence proof.

        Granted, there's only one of him.

      • Ah yes, the mythical good coder who writes it "right" the first time.

        That's not really the work of a good coder. Anyone could get lucky, and no-one writes correct code all the time.

        A good coder though can structure code in such a way that problems do not cascade, that incoming issues are limited in scope in terms of affecting the rest of the codebase. A good coder can make a huge system where you can replace a part of it without magic or too many tears.

        Perhaps it's nothing more than the ability to think a

  • by dgharmon ( 2564621 ) on Wednesday August 08, 2012 @03:29PM (#40921531) Homepage
    "James Kwak .. points out that while user-facing software is often well-polished, the code running supply chains, production lines, and financial markets is rarely so refined"

    I disagree: while the GUI may be well polished, the underlying code is of poor quality, as it has most probably been written by some contractor on an hourly rate. Quality control works like this: if it compiles, ship it and fix all the bugs in the next version ...
  • by bobs_lounge ( 2703799 ) on Wednesday August 08, 2012 @03:31PM (#40921555)
    and this article is absolutely correct. For the most part, we do regression testing, but a lot of code (a whole lot) is never unit tested, it's not written so that it can be unit tested, and there are configuration holes all over the place. Each time there is a Jerome Kerviel or Nick Leeson, a generation of auditors will come through, find systems faults, and put in reasonably effective controls, but that is not the same as programmatic correctness. Programmatic correctness often has to be baked into the code from the start (same with effective unit testing), and by and large, this is not an investment bank's highest priority (as an earlier poster wrote, code that is not directly involved in revenue generation does not get funded).
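
    For readers who have never seen the distinction drawn, a minimal sketch of what "written so it can be unit tested" looks like (hypothetical names, pytest-style):

      def position_delta(orders):
          """Pure function: trivial to unit test because it touches no database, file, or network."""
          return sum(qty if side == "BUY" else -qty for side, qty in orders)

      def test_position_delta():
          assert position_delta([("BUY", 100), ("SELL", 40)]) == 60
          assert position_delta([]) == 0
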
  • by neves ( 324086 ) on Wednesday August 08, 2012 @03:31PM (#40921561) Homepage
    It isn't cost effective to build good software... for a few users. I develop some internal systems. They are very complex, and each of them has 40 users at most. The ROI of Apple polishing every tiny bit of their software is great: if each of their 100,000 users spends one second less, that's 100,000 seconds saved - more than a full day. Human beings are very intelligent. They can learn to play a musical instrument, drive a car, operate a machine, and use shitty software.
  • "You need to have development processes that are oriented toward quality, not some arbitrary measure of stupid."

  • I wouldn't say it's all bad software - I'm sure a lot of it is - but some of it is purpose-driven software that has been repurposed as if it were off-the-shelf software. Dev houses build a piece of software for a specific need for a specific customer, then that customer refers them to others and they all want the same thing. They don't rewrite the software to be off-the-shelf, they just repurpose what they have, shoehorn it in, and make it work (well, it works with MS SQL, but this company uses Sybase, so l
  • ... it is interacting with other software programs that are not under your control. It's difficult to test software properly if you don't know all the use cases that it's going to have to support...

    You define the use cases it will support, and reject anything outside of those defined cases. If your software acts upon cases that it does not know how to handle, then it is your problem, and only your problem.
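
    A minimal sketch of that "reject anything you don't recognize" stance (the message format and type names are hypothetical):

      SUPPORTED_TYPES = {"NEW_ORDER", "CANCEL"}

      def handle(message: dict) -> dict:
          """Act only on messages we explicitly support; refuse everything else loudly."""
          msg_type = message.get("type")
          if msg_type not in SUPPORTED_TYPES:
              # An explicit rejection beats guessing: the failure is visible and contained.
              return {"status": "REJECTED", "reason": f"unsupported message type: {msg_type!r}"}
          return {"status": "ACCEPTED", "type": msg_type}

      print(handle({"type": "NEW_ORDER"}))  # accepted
      print(handle({"type": "AMEND"}))      # rejected, not silently mangled
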

    • by asc99c ( 938635 )

      It doesn't matter to the client whether your software segfaulted or replied 'sorry Dave I'm afraid I can't do that'. Either way, it hasn't completed a use-case that it is meant to do. And that fact may well mean that a load of downstream activities happen differently, and you quite possibly have gained nothing by rejecting it.

  • http://jameskwak.net/about/ [jameskwak.net]

    So he has some actual experience with seeing the kind of effort it takes to do software with high quality.

  • Indeed (Score:5, Funny)

    by Ancient_Hacker ( 751168 ) on Wednesday August 08, 2012 @03:44PM (#40921743)

    Indeed. I've been parachuted in to several companies with major software issues.

    Three had avoided even starting a migration from hardware and databases that hadn't been supported in a decade or more.

    Another place had no concept of file locking or threading, or QA, and was using 8 different programming languages on just one project.

    Two companies that handled 80,000 to 300,000 transactions a day did not have any way of simulating input or comparing the input to output.

    One company that depended on several million TCP/IP connections a day had no idea that TCP/IP data might not all arrive in one packet.
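
    That TCP point is the classic trap: one recv() does not equal one message. A minimal sketch of length-prefixed framing in Python (the 4-byte prefix is just one common convention, not the only way to frame):

      import socket
      import struct

      def recv_exactly(sock: socket.socket, n: int) -> bytes:
          """Keep calling recv() until n bytes have arrived; a single recv() may return a partial message."""
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("peer closed the connection mid-message")
              buf += chunk
          return buf

      def recv_message(sock: socket.socket) -> bytes:
          # 4-byte big-endian length prefix, then the payload.
          (length,) = struct.unpack("!I", recv_exactly(sock, 4))
          return recv_exactly(sock, length)
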

    Another place whose business was dependent on several custom fonts would not believe the veracity of both the Postscript and TrueType font verifiers when they said "your font has 488 serious errors".

    About 3/4 of the places had not a clue what SQL injection was and how they were vulnerable.

    The quality of the stuff out there is just horrible.

    • Re:Indeed (Score:5, Insightful)

      by MrSenile ( 759314 ) on Wednesday August 08, 2012 @05:28PM (#40923105)

      The quality of the stuff out there is just horrible.

      Having worked under several hats, the latest being as a system architect, I can tell you exactly why this happens.

      We start with some upper management who have this 'nifty idea' that they must have for the business. Ok, fine... now let's get the ball rolling!

      First, you have the budgetary committee. Without any input whatsoever from the technical groups that build the technology and know what is or isn't possible, they work with vendors on a parachute budget for the project.

      Secondly, with this locked-in budget in hand, they introduce it to the system architects and project management. The project managers are given timetables saying 'we need this done by this time, no exceptions'. They then pass that timetable, as well as the budget, to the aforementioned system architect.

      Introduce stroke-approaching WTF moment for the architect....

      Third. The architect goes back to the project manager saying 'We can't build the specs for the money in the time allowed'. The manager goes 'oh, right'. They go to the budgetary committee and bring this up, and once they realize the bottom figure is wayyyy out in left field, they come back with 'that's impossible, we need this done, with the results of this, for the money you originally quoted us'. So... head back to the system architect...

      Fourth, the architect, to un-bury himself from the absolute disaster staring him in the face, tells the project manager what will be required to minimally meet the goal. This generally requires a ton of overseas consultants, paid to grind the wheel 24/7 at the lowest dollar, to get it to work on outdated hardware within the core/CPU/memory limits and still 'work'.

      Fifth, the consultants are hired; you're lucky if they understand English well enough to follow the nuances of day-to-day communication. They also take shortcuts, because they either don't know the right way, don't want to spend time on the right way, or are told that doing it the 'right way' is not time-efficient for the cost. So now we have crappy code being tossed in, usually undocumented.

      Sixth, the dev work is slammed in, marginally tested, and quick-shotted through QA because the upper management are in a time crunch and don't have the time to deal with all that 'quality assurance nonsense'. So this work is now fast-tracked to production in a non-fully tested workflow.

      Seventh, it's been live for a while, things break randomly without reason, but it's ok, a restart of the application always 'fixes' it. So what if you have to bounce the app every 2-3 weeks to free up that memory leak. It works, in the budget... well, maybe for a few hundred thousand more... but it's done, it provides the 'nifty feature' that the share-holders were promised by the upper management, and the things that don't work are being pushed on...

      three guesses...

      The management who pushed the idea? Nope.

      The budgetary committee who gave the low-end figures out of their butt? Nope.

      The project manager who gave the tight time-frame for the project without major input from the technical people? Nope.

      I know... the IT professionals who are still at the company like the system architect, network team, dba team, san management team, and the security group who are left holding the bag of the big pile of steaming crap? Yup.

      Soo... when things eventually break badly enough to be noticed, management are told that to 'fix' it will take more money, more time, and more hardware. Shock, awe, and bafflement are shown, bonuses/raises are crushed because the IT professionals obviously can't do their jobs right, and maybe a few heads roll, while management have their golden parachutes and are not held to blame for the project they started up in the first place.

      That. Is why the 'stuff out there is just horrible'.

      It's not any one thing, it's how business runs, because these yahoos frank

  • by RabidReindeer ( 2625839 ) on Wednesday August 08, 2012 @03:45PM (#40921765)

    Software isn't hard. Everyone constantly tells me so. "It's simple! All You Have To Do Is...".

    "Oh, those preliminary mockup screens look almost perfect". So you'll have the entire system ready for production deployment next Tuesday, right?"

    "Just git 'er DUN!"

  • That Lawyers and bloggers know about bad software.
  • by gestalt_n_pepper ( 991155 ) on Wednesday August 08, 2012 @03:49PM (#40921803)

    Bean counters and management do. They don't care how much the staff struggles with lousy software (e.g. Oracle server on Linux). They care about saving a few bucks, getting their bonuses, reorganizing to hide the bodies, and moving on to the next job. Hence, there will always be a market for crappy software. Capitalism fails at the interface level. If the engineers and low-level end users made the purchasing decisions, you can bet quality would improve in a hurry.

  • The way the article is written, it hints that low quality = more implementation flaws.

    Let's not forget that software can have design flaws, too, and careful programming might still lead to low quality software.

    In the case of Knight, the defects might not have even been a function of the software per se. I'm sure a good bit of probability and machine learning go into HFT; these algorithms may have been the source of the errors, and the flawed algorithms might not even be due to the software engineers.

  • while user-facing software is often well-polished

    I think the distinction is that most user-facing software doesn't lose $400 million due to a bug.

  • by florescent_beige ( 608235 ) on Wednesday August 08, 2012 @04:07PM (#40922043) Journal

    I have used this program my entire career. For the last 20 years (since MSC bought PDA), it has not changed apart from the odd user-generated macro getting included in. The windowing interface has had the same bugs (e.g. scroll bars that are 1 pixel high) since I was a wee lad. Half the stuff in it that isn't used regularly doesn't work, never has, never will and yet it is the standard for FEM pre/post in aerospace. Staring at this broken-ass POS year after year has filled me with ennui.

    May there be a special circle in hell just for MacNeal-Schwendler.

    Thank you. That is all.

  • >> It's difficult if not impossible to anticipate all the situations that a software program will be faced with, especially when — as was the case for both UBS and Knight — it is interacting with other software programs that are not under your control. It's difficult to test software properly if you don't know all the use cases that it's going to have to support.

    That's why so many industries continue to use FTP (also FTPS and SFTP) to punt files over the wall when real-time response is not

  • by logicassasin ( 318009 ) on Wednesday August 08, 2012 @05:15PM (#40922929)

    why is this news to anyone? Software is -always- shipped full of issues to meet a PM's deadline in order to say "See!!! We got it done on time!" to justify their salary and existence at the company. "Ship it now and fix it after the fact" (if at all) has been the mantra of in-house and commercial software for 20+ years.

  • by n7ytd ( 230708 ) on Wednesday August 08, 2012 @05:44PM (#40923333)

    Remember: Broken gets fixed. Shoddy lasts forever.

  • Bad software also runs unimportant websites [slashdot.org].
  • by aXis100 ( 690904 ) on Wednesday August 08, 2012 @08:12PM (#40925177)

    I'm not trolling, it's a reality check.

    Most big software projects I've seen fail hard, like millions and tens of millions of wasted dollars hard. By comparison, you just don't see that very often in big electrical/mechanical/civil projects, which can be equally complex (e.g. refineries, cruise liners, etc.).

    There are software developers with all sorts of fancy titles - architects, analysts, engineers - and yet they can't get the code right. Usually the root cause is an inadequate requirements spec and failure to manage the customer's expectations, but that's no excuse; there are usually numerous people employed in the project process specifically to get those parts right.

    Software engineering is still playing catch-up, in the sense that most developers and development companies I've seen still don't follow a formal enough process for it really to be called engineering. Usually it's a bunch of computer science graduates having a wild stab at it, and the good ones are closer to artisans than engineers.

    Until the entire software industry gets off its high horse and admits this to itself - and more importantly admits this to the customers - we are going to continue to be disappointed with the quality.
