Ask About Proprietary vs. Open Source Code Quality

Scott Trappe is CEO of Reasoning, a company that has gained a certain amount of notoriety (and a Slashdot mention) by running its Ilumna automated inspection service on several versions of TCP/IP -- and concluding that the Linux version has fewer bugs than most proprietary ones. Why is this? Let's ask Scott, and also ask him any other question you can think of about software quality and how to achieve it, since, after all, that's his business. We'll send him 10 of the highest-moderated questions and post his answers when we get them back.
This discussion has been archived. No new comments can be posted.

  • Well ... (Score:3, Interesting)

    by B3ryllium ( 571199 ) on Thursday March 06, 2003 @01:21PM (#5450017) Homepage
    What kind of open source code do you prefer? GPL, BSD, or the "here, take it" license?
  • by jkusar ( 585161 ) on Thursday March 06, 2003 @01:25PM (#5450050) Homepage
    What is your opinion on this topic? In your experience, does having the source closed make it any harder to find bugs and security flaws?
  • by burgburgburg ( 574866 ) <splisken06@@@email...com> on Thursday March 06, 2003 @01:26PM (#5450060)
    Who has made reference to your research? Whose inclusion of your work most impressed you? Most confused you? Most disturbed you? And was there anyone who referenced your work but totally misunderstood what you were saying/doing, so that their conclusions were way off?
  • by Anonymous Coward
    Would a volunteer house builder make better houses? Would a volunteer fireman put out fires better than a paid fireman? How about volunteer surgeons? I can see one certain contradiction - volunteer web editors choose much dumber stories than their paid counterparts!
    • Compare it to any group of people working on something together for everyone to share - especially professionals doing something they like. For example, a group of carpenters building a summer house together because they love doing it, and therefore do it pretty damn well, and are intent on sharing it. Except that with open source everybody gets their own copy of it, instead of, e.g., everyone getting to spend X weeks of their vacation there. And in addition to them getting a copy, many others do too - and they thus get their eternal admiration and gratitude :)
    • Having seen the mess that a bunch of so-called "professional" house builders managed to make of our house, I have to say HELL YES!!!

      Grab.
    • Think about it, some projects just aren't economical. Open source removes the economic factor, people fix bugs, make improvements etc. to get credit and prove their talents to the community. With open source you get commercial companies working on the code as well as individuals (see kernel changelogs).
  • by tim_maroney ( 239442 ) on Thursday March 06, 2003 @01:27PM (#5450075) Homepage
    How can any conclusions about the relative virtues of two development methodologies with a universe in the millions of components be drawn from a single sample, and one as small and atypical as a TCP/IP stack?
  • by $$$$$exyGal ( 638164 ) on Thursday March 06, 2003 @01:27PM (#5450076) Homepage Journal
    Where, in your opinion, do most products fail when it comes to attaining quality in software?:
    1. Planning (specifications)
    2. Development
    3. Post-development testing
    4. Or anything else? (or a mixture, etc)
    • Where, in your opinion, do most products fail when it comes to attaining quality in software?:
      1. Planning (specifications)
      2. Development
      3. Post-development testing
      4. Or anything else? (or a mixture, etc)


      I don't know what Scott's opinion is on this, but I know I've found the specifications to be the biggest point of failure. I can't tell you how many times I, or someone I know, have written the perfect program that nobody wants because it didn't follow what the customer actually wanted. --Jason
      • Sometimes this is a problem with the spec and other times it is a problem with the customer. Some customers have a habit of changing what it is they really wanted after they have already told the developers and a spec has been produced. At least that has been my experience.
        • That is why the business analyst and project manager are there. In an ideal world, they would be the ones that pick the customer's brain and determine exactly what needs to be done in order for the customer's needs to be fulfilled. I have found that getting a quality BA and PM on the job reduces the common miscommunications between customers and developers. The problem is that there are not too many quality BAs and PMs out there that are not already overworked!

          Oh, and having the sales dolts sell a product that does not exist to the customer is not a good thing either. Not even a good project manager can help how the code ends up when the development team is asked to do stuff that is out of the scope of the current projects!
    • Keeping this short, I've seen the most products fail in planning. There is not enough effort put into requirements gathering, analysis, and creating meaningful system requirements and software specifications. Not to mention I see a general lack of putting enough time into architecture and design.

      oh what is that old yarn? "You never plan to fail, you fail to plan"
  • by stratjakt ( 596332 ) on Thursday March 06, 2003 @01:29PM (#5450088) Journal
    "Reasoning declined to disclose which operating systems it compared with Linux, but said two of the three general-purpose operating systems were versions of Unix"

    So did you go cherry picking to find OS's that had more bugs than linux, or was it random or what?

    Too often the Open vs Closed argument turns into linux vs windows, and then the criteria are arbitrarily picked. Since the two OS's are designed largely for very different purposes, the comparison is by definition never fair, no matter who conducts it.

    Saying that one product is better doesn't necessarily mean that the way it was created is inherently superior.

    Implementing properly documented standards is something the OS community is great at, since they're all on the same page. Creating from scratch is different.

    Hence, TCP/IP is rock solid in linux, yet development on the desktop crawls along in 100 different directions at once, gaining little ground.
    • Yes, but the issue is the TCP/IP stack, something common to all internet-enabled OS's. You'd think that a large company (MS, SCO, Sun, Linux) would make damned sure to get that right.

      And as for implementing properly documented standards, I've seen a lot of programs which are RFC compliant, yet are incompatible between systems (Kerberos). I've also seen different implementations of an RFC compliant program (MS's DHCP server) take down a seemingly rock solid OS (SCO), rendering its TCP/IP capabilities null. Unless it is explicitly spelled out in an RFC how something is to be done, you're going to have failures on some things, unless everyone uses the same hardware, the same software, and the RFCs are actually code snippets for the only programming language in use.

      So implementing properly documented standards is not as easy as you put it.
    • by sjames ( 1099 ) on Thursday March 06, 2003 @02:20PM (#5450626) Homepage Journal

      Hence, TCP/IP is rock solid in linux, yet development on the desktop crawls along in 100 different directions at once, gaining little ground.

      Actually, the Linux desktop has gained a lot of ground, as have the distro installers. If anything, public perception lags far behind the reality. That's not too surprising given that the OS community isn't pumping millions of dollars into marketing.

    • Since the two OS's are designed largely for very different purposes

      and what would those be? linux was designed as a desktop unix originally. that it is a great server platform is testament to its quality, etc. but it was designed to run on top of x86 hardware, same with windows.

      oh wait, i know what you mean. one was designed to enslave and control you...gee i wonder which one, (those pesky finns)
      • Actually, Windows was originally designed to run on an Intel 8088 and Linux on an Intel 386 (or higher?). Since the 8088 has no hardware memory protection, Linux as we know it could never run on it.

        Having to maintain backward compatibility with software written for pre-386 processors accounts for a lot of the stability problems of pre-NT/2000/XP Windows. This is an issue that Linux never had to contend with.
  • Proprietary v Open (Score:5, Interesting)

    by Oculus Habent ( 562837 ) <oculus.habent@gm ... Nom minus author> on Thursday March 06, 2003 @01:29PM (#5450095) Journal
    Some proprietary products like Microsoft Office partially maintain their dominance by not disclosing the details of the file format, or by modifying standard formats to reduce compatibility. Do you think that competitive free products would be more widely accepted if the file formats were open/standardized, or would the dominance and familiarity of the current packages maintain their market control?
    • It is true that MS does not pro-actively disclose the details of the file formats they introduce, and it is also true that they modify standard formats (expand), but to say that they do that with the sole purpose of reducing compatibility and maintaining their market share is something that should not be generalized.

      As is frequently pointed out, in some cases their software is just overall better than others.
    • OpenOffice [openoffice.org] and other OSS with M$ compatibility. How many commercial apps are a true replacement for Office? StarOffice, a fork of OpenOffice, doesn't count.

      We just need a talking penguin that harasses you while you're trying to work and the market will be ours!
  • Code quality (Score:3, Interesting)

    by Alcohol Fueled ( 603402 ) on Thursday March 06, 2003 @01:30PM (#5450101) Homepage
    What is the best piece of code you've seen? What about the worst? Did the best and/or worst come from open source code, or proprietary? I'm just curious.
  • by davidmcn ( 606752 ) <dmcnelis.gmail@com> on Thursday March 06, 2003 @01:30PM (#5450104) Homepage
    Because there are obvious differences in the cultural environment of developing open source versus proprietary software, what is your opinion on what factors affect the quality of code produced in these two environments, and how?
    • by BoogerWoman ( 630874 ) on Thursday March 06, 2003 @01:54PM (#5450349) Homepage
      And, in addition to the environment, what motivations within the different environments affect how bugs get found, and what reward there is for finding them?

      For example:

      • Do you think (your opinion, of course) that open source original authors are more motivated to create fewer bugs because their code will be read by others?
      • Or perhaps that open source finds an academic (or similar) niche and one can obtain academic (or similar) stardom by finding bugs?
      • Finally, given these motivations, how do you think motivation itself could improve proprietary software's ability to fix bugs? (Provided motivation is indeed the problem, and proprietary software does, indeed, tend to have more errors.)
  • Fine and dandy. (Score:5, Interesting)

    by grub ( 11606 ) <slashdot@grub.net> on Thursday March 06, 2003 @01:31PM (#5450120) Homepage Journal

    OK "Your TCP stack is cleaner than theirs" but what metrics are being used? How do we know bugs in their testing software doesn't skew the numbers?
  • The right tool (Score:5, Interesting)

    by Vollernurd ( 232458 ) on Thursday March 06, 2003 @01:32PM (#5450128)
    Would it be fair to ask if you are an advocate of any particular type of software, or you merely promote use of the "right tool for the right job"?
  • by uiil ( 413131 ) on Thursday March 06, 2003 @01:32PM (#5450139)
    Is there a certain percentage of bugs that results from the interaction of two or more otherwise bug-free components?
  • by argmanah ( 616458 ) <argmanah AT yahoo DOT com> on Thursday March 06, 2003 @01:34PM (#5450167)
    If open source has such a direct correlation to better quality, why do you feel companies are still keeping their source proprietary? Do you think that we should try and convince them to open source their code in every case, and if so, what do you think needs to happen before they can be convinced to change their minds?
    • If open source has such a direct correlation to better quality, why do you feel companies are still keeping their source proprietary?

      What gave you the idea that companies care about better quality?
  • by Anonymous Coward on Thursday March 06, 2003 @01:36PM (#5450176)
    In my book, quality is broken down as:

    50% Stability, efficiency
    33% Form, structure
    17% Ease of build

    Stability and efficiency, of course, are the most important. Does the code work? How well does it cover all cases? Does it do it efficiently? Does it make 10 copies of a string just to return a substring?

    Form and structure are important too. This is key for maintainability. Is the code broken down into logical modules? Or is the entire 50000 line code base contained all in one monster if/else function? Does the code itself follow sensible, consistent conventions? Or did the developer purposely obfuscate it to prove how smart he is? Or did the developer hack the whole thing due to a failure to understand the actual problem to be solved? How well documented is the code?

    Ease of build - how many #define's (or their analogues in other langs) do I need to get the thing to compile? Does it come with a makefile or build script? Do I need to install a 100MB SDK because the author decided to use 1 small function he could have written himself?

    These are the factors I use to measure the quality of source.
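    (Illustration: the "10 copies of a string" point above is the kind of thing that shows up even in tiny routines. A minimal, hypothetical C sketch -- the substr_* helpers are made up for illustration, not taken from any code discussed here -- contrasting a wasteful substring helper with a direct one:)

      #include <stdlib.h>
      #include <string.h>

      /* Wasteful: duplicates the whole source string before trimming it,
         so every call copies far more bytes than the caller asked for. */
      char *substr_wasteful(const char *s, size_t start, size_t len)
      {
          char *whole = strdup(s);          /* copy #1: the entire string */
          char *out = malloc(len + 1);
          if (whole == NULL || out == NULL) { free(whole); free(out); return NULL; }
          memcpy(out, whole + start, len);  /* copy #2: the part we wanted */
          out[len] = '\0';
          free(whole);
          return out;
      }

      /* Direct: copies only the bytes the caller asked for, once.
         (Caller must ensure start + len stays within the string.) */
      char *substr_direct(const char *s, size_t start, size_t len)
      {
          char *out = malloc(len + 1);
          if (out == NULL) return NULL;
          memcpy(out, s + start, len);
          out[len] = '\0';
          return out;
      }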

    • But what about the other 5%?

      </humor> (hopefully)

    • In my book, quality is broken down as:

      50% Stability, efficiency
      33% Form, structure
      17% Ease of build

      You must be a clueless programmer:
      Usability should be number one. If the code is ugly but the user is happy, the program is a success. If the code is beautiful but hard to use or functionless, then the user will look for alternative software. My list would be:

      60% Usability
      25% Stability
      15% Form, Structure, Design, etc...

      It is not that stability and the rest are not good and important, but programs are meant to help make life easier for users. Computers/programs are tools (and entertainment) - if the user does not get what they want, you are not doing your job right, no matter how well designed and debugged your programs are.
  • by ikoleverhate ( 607286 ) on Thursday March 06, 2003 @01:36PM (#5450177)
    Do you believe the 'evolutionary' pressures that led to the Linux tcp/ip implementation being better are at work in other areas of open source activity? I can see the tcp/ip implementation getting a lot of attention from coders, as linux is primarily a server platform, but are less obviously important areas performing similarly?

    If so, which areas do you think are benefitting and which need more community action / peer pressure to excel?

    Are there any areas to which you think this phenomenon will never apply? (eg areas in which proprietary code will always be better)
  • What about BSD? (Score:5, Interesting)

    by kidlinux ( 2550 ) <<ten.xobecaps> <ta> <ekud>> on Thursday March 06, 2003 @01:39PM (#5450203) Homepage
    Sometimes I think Linux takes more than its fair share of the limelight. Generally when I see some aspect of Linux compared to other OSes, I'm interested in seeing how BSD fares as well. Not so much to decide which is better, but it's interesting to see how the two do against each other, given that they're both open source projects. It seems to me that they both have many different and similar goals, and take different approaches to doing things. I'd like to see how it all adds up.

    So, did you take a look at the BSD tcp/ip implementation? If so, how did it compare to the rest?
    If you didn't, why not?
    • Re:What about BSD? (Score:2, Informative)

      by b0r1s ( 170449 )
      For questions like this, watch the FreeBSD lists.

      People like Terry Lambert pop up often with quasi-benchmarks taken from personal experience.

      Check out http://news.gw.com/freebsd.arch/9169 [gw.com] for a detailed way to get 1.6 million simultaneous connections in FreeBSD, a number that Linux simply can't match.

      Check out http://linuxpr.com/releases/5611.html [linuxpr.com] for IBM's simultaneous connection limit:
      In a critical measure of secure Web serving performance, a 4-way eServer p630 set an industry record for entry level (4-way) systems supporting 1,988 simultaneous connections, far outpacing the 568 simultaneous connections achieved by the 4-way Sun Fire V480 on the SPECweb99_SSL performance measure.[2]

      The eServer p630 set an additional 4-way Web serving record when the system processed 6,895 simultaneous connections, offering greater than 50 percent more performance than a 4-way Sun Fire V480 with 4,500 simultaneous connections.[3]


      1.6 million compared to 6,900. To be fair, one is excessively tuned, but despite that, it's a huge difference.

      • Are you sure that those connections talked about are the same types of connections?

      • I think you are comparing apples with oranges. SPECweb99_SSL [specbench.org] is a benchmark that shows the limit on the number of simultaneous connections for a web server with SSL. Terry Lambert's tests are much simpler.
        • Number of connections with SPECweb99: 250,000/500,000.

          If you read the posting he referenced, you can see the calculations, and how you can get useful work done. It all boils down to transmit buffer usage (mbufs).

          Remember that for most HTTP traffic, you have very small requests, and it's the responses that are larger, so the mbuf usage is asymmetric between inbound and outbound data.

          The product this was for was a reverse proxy cache, and so if you didn't care about a lot of content, just getting it out fast, you could compromise between connections and cache size, and operate with 500,000 simultaneous client connections.

          This was back in the days when there was an mbuf required per connection for the tcp_template structure. The thing that let me push it to 1.6M was that I shrank the size of that from 256b to 64b. But as of FreeBSD 4.5, the structure went away; a FreeBSD 4.5 based port of the same changes could probably gain another 150,000 connections, which would move the number up to 1.75M. The number of useful connections would (based on cache size) have moved up to 300,000 (or 600,000) as a result.

          Practically, the cache was a special case, because it was possible to share mbuf chains containing cached content between connections.

          -- Terry
  • Now your organization could have caught only the most obvious bugs (unless you guys have some special ability that no other programmer on earth has); isn't it possible that these obvious bugs are more easily caught by the many-eyes advantage of open source, while deeper bugs may in fact be better found by the more structured methods of proprietary, closed source software engineering?
    • >> more structured methods of proprietary,
      >> closed source software engineering

      Please pass the crack pipe.
  • by arvindn ( 542080 ) on Thursday March 06, 2003 @01:40PM (#5450221) Homepage Journal
    The parallelizability of bug-fixing is quite clearly very effective for high-visibility projects such as the linux kernel and apache. However, considering that most open-source projects have only between 1 and 5 developers, how popular do you think a project needs to be for it to significantly benefit from people looking at the source code?
  • by gosand ( 234100 ) on Thursday March 06, 2003 @01:42PM (#5450241)
    I work in software Quality Assurance, and have for going on 10 years now. My experience tells me that true software bugs are only part of the quality of software. So much can get lost in the software development lifecycle. An unclear requirement can travel through the lifecycle and come out the other end as a bug to the customer. Usability is another part of quality. It could be bug-free, but if it is really difficult to use or doesn't fit the needs of the customer, it may not matter.

    It sounds like your company focuses on analyzing the code bugs, and not necessarily the perceived bugs. What are your opinions on this? I know that locating and eliminating the bugs *is* a critical part of software QA, but do you feel that bug-free ensures true quality? A bug-free Open Source project may still be too difficult to use or confusing for the non-technically inclined.

  • Irony (Score:4, Interesting)

    by hafree ( 307412 ) on Thursday March 06, 2003 @01:44PM (#5450257) Homepage
    Ironically, the reason most companies will opt for closed source solutions is because they have large companies behind them: Microsoft, Sun, IBM. Although this gives the illusion of having someone to hold responsible, the EULA usually contains a clause absolving the vendor of any real responsibility or culpability. Whereas with open source software, you have no legal recourse if the latest release of sendmail or bind has an exploit, but rest assured that within 24 hours a fix will be released. Compare that with response times from commercial closed source vendors...
    • Whereas with open source software, you have no legal recourse if the latest release of sendmail or bind has an exploit, but rest assured that within 24 hours a fix will be released. Compare that with response times from commercial closed source vendors...

      Sure, because it's well known that commercial software vendors never fix serious vulnerabilities as fast as the open source community. Particularly ones like Apple, for example, who have fixed several vulnerabilities in MacOS X way before the equivalent Linux patches were released. Since you like sendmail so much, I suggest you check how fast the major commercial *nix vendors released their patches compared to the open source world, and get back to us.

      Now please pick up your ill-informed pro-OS FUD and go away.

  • by arvindn ( 542080 ) on Thursday March 06, 2003 @01:46PM (#5450276) Homepage Journal
    It is natural to expect the number of bugs to go down when more people look at the source. However, the downside to being open source, from the security viewpoint, is that it possibly makes it easier for the bad guys to find bugs. Have you measured the effect of this? Is it actually easier for crackers to find bugs when they have access to the code? If so, do you think the lower frequency of bugs adequately compensates for their increased exploitability?
  • by binaryDigit ( 557647 ) on Thursday March 06, 2003 @01:47PM (#5450288)
    The tcp/ip findings were interesting, but were they really relevant? Let's say that one of the tcp/ip stacks that had more "bugs" was Solaris. Now I assume that the Solaris tcp/ip stack has not had significant changes in a very long while, and also that if these "bugs" actually negatively impacted the working of the code, they would have been repaired. So my question is, how does your application mark a bug? And from the original /. article, it would appear that you treated all "bugs" the same. Is a failure to check for a null pointer "a bug"? Having a more explicit rundown of the type and scope of bugs found would be much more meaningful.
  • by scotpurl ( 28825 ) on Thursday March 06, 2003 @01:48PM (#5450296)
    It says they found so many bugs per 1,000 lines, but did they submit the errors, or fixes, to CVS, or to the vendor?
  • by pro-mpd ( 412123 ) on Thursday March 06, 2003 @01:48PM (#5450297) Homepage
    Do you find that the quality of the programming depends upon the geographic location of the programmers? So, for instance, an open source program will be troubleshot and combed over by people from potentially a dozen different countries. Closed source software is checked by people where it is written. Since, as a general rule, education varies in quality and areas of emphasis around the world, does it help having people attacking a program from many different angles (i.e., open source, checked world wide) rather than simply drawing from a set of people who may share many of the same abilities, backgrounds, etc.?
  • by arvindn ( 542080 ) on Thursday March 06, 2003 @01:52PM (#5450328) Homepage Journal
    What do you mean by "defect rate"? Is it a measure of bugs your group found for the first time or were you looking at already discovered and documented bugs? In either case how do you ensure that you have enumerated all the defects in the code?
  • by James Chamberlain ( 653054 ) on Thursday March 06, 2003 @01:52PM (#5450333) Homepage
    Why did you choose to look at the TCP/IP code over any other particular subsystem? Do you have plans to review any other portions of the code? For instance, I think it would be very interesting to see a similar comparison which examined the code for file systems or virtual memory. Have you reported the bugs you found back to the authors of the code?
  • by Carnage4Life ( 106069 ) on Thursday March 06, 2003 @01:53PM (#5450338) Homepage Journal
    I assume some of this information may be "company secrets" but I'm very interested in learning what metrics are used to determine which source code is "buggier" than others. Is this something as simple as running lint + "gcc -Wall -ansi -pedantic" then piping the output to "wc -l" ?

    Are there checks for use of unsafe functions like gets and the str* family of functions in C? Are there more complex data flow analysis algorithms at play here, like those used in Stanford's Meta-level compilation [stanford.edu] techniques?

    Inquiring minds want to know. A pronouncement that OS foo has more/fewer bugs than OS bar is meaningless without a definition of what having more/fewer bugs means.
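    (Illustration: the "unsafe functions" class of check mentioned above is the easiest to picture. A small hypothetical C fragment -- made-up code, not from any audited stack -- showing the patterns such checkers flag, next to bounded equivalents:)

      #include <stdio.h>
      #include <string.h>

      #define BUF 64

      /* Patterns a simple checker flags on sight.  (gets() was removed in
         C11; it appears here only as the classic flagged pattern.) */
      void risky(const char *src)
      {
          char name[BUF];
          char copy[BUF];

          gets(name);              /* no bounds check at all */
          strcpy(copy, src);       /* overflows 'copy' if src is >= 64 bytes */
          printf(copy);            /* format string taken from data */
      }

      /* Bounded equivalents the same checker would accept. */
      void safer(const char *src)
      {
          char name[BUF];
          char copy[BUF];

          if (fgets(name, sizeof name, stdin) == NULL)
              return;
          strncpy(copy, src, sizeof copy - 1);
          copy[sizeof copy - 1] = '\0';
          printf("%s", copy);
      }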
  • by Tekmage ( 17375 ) on Thursday March 06, 2003 @01:53PM (#5450343) Homepage
    One of the bigger challenges facing open source projects as compared to their proprietary equivalents is how to manage confidentiality of test cases. With companies such as Red Hat and Ximian involved, it's certainly less of an issue for their core products and the projects they oversee, but there will always be cases where there is friction when the best/only person who can fix a particular problem is on the outside, unable to work with the test cases in question.

    What are your thoughts on this trade-off between test case management and confidentiality as it relates to proprietary v.s. open source code development?
  • by Anonymous Coward on Thursday March 06, 2003 @02:01PM (#5450414)
    Please compare and contrast the services your company offers with those offered by Checker [stanford.edu] or Smatch [sf.net]. They seem pretty similar. In fact, do you use Checker or Smatch internally? It would seem logical.
    - Dan Kegel (dank@kegel.com)
  • by spakka ( 606417 ) on Thursday March 06, 2003 @02:01PM (#5450419)
    Since you are a small team rather than the entire open source community, don't your own conclusions imply that you are likely to be detecting only a small fraction of the defects, invalidating the study?
  • What this company Reasoning does sounds pretty much like what Rational's Purify product does. IMO, all software should be tested with a system like this before going through the QA cycle. I use Purify quite often, just to check that there are no such memory errors.
  • by arvindn ( 542080 ) on Thursday March 06, 2003 @02:11PM (#5450513) Homepage Journal
    Given that on more than one occasion "independent institutions" which conducted similar studies (and concluded that closed source is superior) were revealed to have been sponsored by the other side [microsoft.com], how do you convince other people of your neutrality? Since you are selling a service [reasoning.com], not a product, I would guess that the confidence of your customers in your independence is pretty important from a business perspective. How do you win and keep that confidence? The article notes that you agree with ESR's pro open-source reasoning. Wouldn't the perception of your having an OSS bias be something you'd want to avoid?
  • Peer review (Score:5, Interesting)

    by ralico ( 446325 ) on Thursday March 06, 2003 @02:25PM (#5450679) Homepage Journal
    How much of a role do you think peer review plays in software quality?
    In proprietary source systems, there is generally formal peer review, as per CMMI [cmu.edu]. But I have seen this done rarely (almost exclusively for CMMI level 3+ projects). There seems to be a disincentive to do formal peer review. There seem to be various reasons for this: cost, workplace environment, and group dynamics. Which do you think are most significant?

    Whereas in open source projects, there is not formal peer review, but rather what seems like mass informal peer review. This seems to foster an environment of besting each other, trying to find the most (and most obscure) bugs.
    What do you say?
    • In a formal setting, peer review of code is an expensive process. It takes a team of people, who aren't necessarily experts in or even familiar with the whats and whys of the code, to inspect the code before reviewing it. Management sees this time as empty or lost. Developers who have other deadlines would rather be doing their own work than commenting on others' work.

      For formal peer review to work, it must be scheduled in and implemented with the blessing of management. The surest way to fail at code reviews is to announce one day that code reviews are mandated but never provide the time or framework to execute them.

      As mentioned in the parent, Open Source has a more informal review structure. Before you implement new features you inspect the code and ask the author questions, which can lead to improved and more robust designs even without implementing new features. Either the author gets sick of answering questions or of seeing comments about their weak design and implements a new one, or a newcomer goes ahead and does it. It's a win-win.
  • by Anonymous Coward on Thursday March 06, 2003 @02:25PM (#5450688)
    Do programming methodologies actually increase code quality, in your opinion?

    What I mean is this: over the years there have been numerous methodologies that to some extent all claim to make programmers write better code in less time. eXtreme Programming is a recent and - imho - fairly impressive example. All of them boil down to a slightly different approach to the task of programming.

    So if you find fewer programming defects in the Linux IP stack, would you think that this indicates that there is something that works well about the way open-source programmers approach programming? Or could it be simply that people willing to donate their time to a project tend to be talented?
  • by anthony_dipierro ( 543308 ) on Thursday March 06, 2003 @02:31PM (#5450757) Journal
    Where can I get the source code to these automated inspection tools?
  • by monkeyboy87 ( 619098 ) on Thursday March 06, 2003 @02:31PM (#5450762)
    Is this really just a problem of the resources that can be brought to bear on producing the final product? Does the quality simply come from the sheer number of people, playing on the law of averages/big numbers? I.e., the open source project got 10,000 hours of development time vs 2,000 hours in a closed source environment restricted by cost/budget/time, etc. If the resources for producing the "product" were the same, would the quality be any different?
  • In your humble opinion, when dealing with large software vendors whose closed-source TCP/IP stack supposedly has more bugs than an open-source one, could too many cooks at the large vendor be spoiling the broth? In many large places, I notice that "teams" don't talk with one another.

    Do you think that Linux, with its "benevolent dictator", is a better model than having "teams" make every development decision by committee?

  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday March 06, 2003 @03:10PM (#5451197) Homepage
    Something that interests me is the trend for code quality analysis tools to pick the Linux kernel or other well-known free program as an example. So researchers developing the tool get to boast 'we found 47 bugs in Linux', which is a statistic people can understand (even if it may not always be strictly true), while Linux benefits from some extra bug reports.

    Strictly speaking, static analysis tools measure what is called kwalitee, a property which isn't the same as code quality but is usually closely correlated with it. In other words the tools do make mistakes, but most of the time they are on the right track.

    It would also be possible to have a big online 'databank' of C source from many projects - the top thousand on Sourceforge plus the GNU programs, or something like that - and make this a standard 'corpus' for code analysis tools.

    Hmm, I have to get a question out of this. Do you think that code analysis tools like Splint could improve free software quality further? What sort of infrastructure could be created for doing code kwalitee checks across a whole Linux or BSD distribution?
  • by phamlen ( 304054 ) <phamlen&mail,com> on Thursday March 06, 2003 @03:10PM (#5451198) Homepage
    According to the article, it appears that you look for buffer overflows, freeing memory early, and other memory issues.

    What errors are currently hard to detect automatically but which you would really like to be able to find?
    What is the next category of errors that you're trying to detect with automatic code inspection?

    To give you some ideas, what about:
    • "unrefactored" code - code which has a lot of duplication and should be cleaned up
    • "untested" code - code (or branches in the code) that are currently untested by unit tests?
    • "programmer intention" errors - code which doesn't do what the programmer intends
  • Language Choice (Score:4, Interesting)

    by gregfortune ( 313889 ) on Thursday March 06, 2003 @03:32PM (#5451418)
    Does the choice of the implementation language affect the number and/or severity of bugs found? Obviously, the skill of the programmer will affect the quality of the code, but perhaps a study like this can lend credibility to the idea that the choice of a language can predict a certain level of quality in the code. ie, maybe it's easier to write bug free code in some languages. Any data that would suggest this?
  • Test first (Score:5, Interesting)

    by neurojab ( 15737 ) on Thursday March 06, 2003 @03:56PM (#5451667)
    What do you think about the new "test first" software development methodology? For those that haven't heard of it, it's a method wherein the test cases for a program are written first, and no code is written except to make a failing test case pass. All test cases are automated and run after every code change. Would you advocate this in an open-source project? This would mean every contributor would write test cases for each new feature and add them to a project's common test case repository... What do you think?
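    (Illustration: a minimal, hypothetical C sketch of the "test first" flow described above -- count_char() is a made-up example; the assert-based tests exist before the function they exercise, and code is written only to make them pass:)

      #include <assert.h>
      #include <stddef.h>

      /* Production code, written only because the tests below demanded it. */
      size_t count_char(const char *s, char c)
      {
          size_t n = 0;
          for (; *s != '\0'; s++)
              if (*s == c)
                  n++;
          return n;
      }

      /* Test cases, written before count_char() existed; the build stays
         "red" until the implementation makes every assertion pass. */
      int main(void)
      {
          assert(count_char("", 'x') == 0);
          assert(count_char("banana", 'a') == 3);
          assert(count_char("banana", 'z') == 0);
          return 0;   /* all tests pass: the change can be committed */
      }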
  • by poot_rootbeer ( 188613 ) on Thursday March 06, 2003 @03:57PM (#5451679)

    Do you think part of the difference in resulting code quality is due to the developers' motivation for working on the project -- that perhaps closed-source programmers are more likely to be doing it just to earn a salary, while open-source programmers are more interested in the art of coding itself?
  • It is possible to have a proprietary model and to have code reviews required (and documented) done by competent system architects and security experts. It is also possible for proprietary developers to do no reviews and to lack the skill and experience and coding standards and automation to produce reliable code.

    It is possible to have an open source model and have the code reviewed by no one but the original coder. Or to have 15 reviewers of varying competence looking at every line and debating it vigorously.

    It is possible in the same OS to have source files or code fragments from various sources with various development and review methodologies. Some can be as extreme as using/requiring automated tools to find potential errors and requiring skilled reviewers. Some as lax as no review by anybody or anything.

    Given this diversity, how can the terms open and proprietary be used to usefully describe software quality? Doesn't it depend not on open/closed but on the skill of the coder, the automation of the review, and the experience of the reviewers? And isn't that independent of open/proprietary?

  • What opinion do you have about this?
  • by Lodragandraoidh ( 639696 ) on Thursday March 06, 2003 @04:45PM (#5452120) Journal
    Do you study the makeup and practices of the development team as part of your analysis? Would you find it useful to know if a team favored one lifecycle methodology over another - and are there any correlations you have seen along these lines?
  • I, Code (Score:3, Funny)

    by AvantLegion ( 595806 ) on Thursday March 06, 2003 @04:55PM (#5452197) Journal
    Is code aware of its own open/closed source nature? Does this knowledge affect its performance? Does closed source code feel isolated? Is open source code kinda slutty?

  • by iabervon ( 1971 ) on Thursday March 06, 2003 @05:08PM (#5452330) Homepage Journal
    There seems to be a lot of effort in automated testing which goes into trying to determine what the program is supposed to do. An automated tool can never find all of the bugs, since some of them are cases where the program doesn't crash or anything, but fails to follow the specification, which isn't given to the automated tool.

    How would you extend C/C++ to include information about the intended behavior of programs, so that programmers can tell the tool directly what is supposed to happen?
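    (Illustration: one minimal form of this already exists in plain C: assertions state intended behavior in a way a checker or a test run can verify, and tools like Splint, mentioned further up the thread, extend the idea with source annotations. A small hypothetical sketch, with find_sorted() made up for illustration:)

      #include <assert.h>
      #include <stddef.h>

      /* Intended behavior stated in the code itself: the caller must pass a
         non-NULL, sorted array, and the result is a valid index or -1.
         A checker can use these statements instead of guessing the intent. */
      int find_sorted(const int *a, size_t n, int key)
      {
          assert(a != NULL);                     /* precondition */
          for (size_t i = 1; i < n; i++)
              assert(a[i - 1] <= a[i]);          /* precondition: sorted input */

          size_t lo = 0, hi = n;
          while (lo < hi) {
              size_t mid = lo + (hi - lo) / 2;
              if (a[mid] < key)
                  lo = mid + 1;
              else
                  hi = mid;
          }
          int result = (lo < n && a[lo] == key) ? (int)lo : -1;

          assert(result == -1 || (size_t)result < n);   /* postcondition */
          return result;
      }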
  • by andymac ( 82298 ) on Thursday March 06, 2003 @05:28PM (#5452501) Homepage
    I would like to move my company towards creating s/w that will operate on Linux, both as a client and as embedded (target) s/w. Our clients are the large primes for militaries around the world.
    Most primes and militaries are moving towards COTS products to reduce costs and improve reliability and support. If we were to port our product s/w to run on Linux, how on earth can we achieve the value and benefits of COTS-like s/w, s/w like Wind River's Tornado, which has great robustness, standard (purchasable) support, and carries the perception (remember: perception is reality here) of greater security?
    For those of you who think support is not important, market data has shown that for larger organizations, the number one "care about" is support. And since Sept 11, security is moving to the top of the list of care abouts for the militaries and primes.
  • by swordgeek ( 112599 ) on Thursday March 06, 2003 @06:22PM (#5453185) Journal
    Bug-free software is obviously an ideal goal, but it's not the only thing that measures code quality, in my mind.

    Do you foresee any metrics in the (near) future to measure other aspects of code quality? Performance is obviously important, but what about things like code style, modularity, and 'cleanliness?'
  • by Anonymous Coward
    Maybe it is Slashdot that distorted your results, but I couldn't understand how open source translates into fewer bugs in the software.

    This is because, although being open makes it possible to involve many more people, it is not necessarily true that many people will look at your code. Coding is not an easy task; it takes time. In general, many open source projects are maintained by few people, which is actually worse than with commercial applications, since commercial companies can hire the top people in their area, and they can hire as many as needed to compete with any other product, including open source applications. So being "open" does not translate into anything by itself. It is the number of people, their quality, and the time they can dedicate to the project that matter, not the license of the product.

    I am partially an open source advocate, and I really appreciate people working on open source. But there are big problems associated with it, and I think instead of trying to trick people into using open source, we need to focus on the problems of open source itself. Otherwise it will be a hobby for people, geeks, but nothing more.


    In short, can you explain your logic behind this conclusion, because it just seems to me either you or Slashdot is making it up.

  • by oldCoder ( 172195 ) on Friday March 07, 2003 @02:35AM (#5456806)
    The company's bug-scan software looked at TCP/IP stacks from different OSes. Presumably they implemented the same functionality. The statistics given are not for the stacks as a whole, but are given in "Defects per 1000 lines of code".

    Think about that.

    If Stack A is 3 times as large (bloated code) but has only 2 times the bugs as stack B, then stack A (worse in all respects) gets a better grade!!!

    You can halve your defect count by doubling the number of lines of code in your module. What a rip! How could so many people read and write about this and not see the problem?
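    (Illustration: a quick worked version of the normalization problem above, with made-up numbers for the two hypothetical stacks:)

      #include <stdio.h>

      /* Stack B: 10,000 lines with 20 defects.  Stack A: three times the
         code with only twice the defects -- worse in absolute terms, yet it
         scores a better-looking defects-per-KLOC figure. */
      int main(void)
      {
          double b_lines = 10000, b_defects = 20;
          double a_lines = 3 * b_lines, a_defects = 2 * b_defects;

          printf("Stack B: %.2f defects/KLOC\n", b_defects / (b_lines / 1000));  /* 2.00 */
          printf("Stack A: %.2f defects/KLOC\n", a_defects / (a_lines / 1000));  /* 1.33 */
          return 0;
      }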

  • I wonder if there were marked similarities among the bugs in the proprietary code bases compared?

    Were these similarities found in FOSS code that was looked at, or did the dendritic peer review process handle that to some degree?

    Were bugs found in the proprietary code that were already (verifiably) marked as things to be fixed, and if so, what was the average lag time (bug turnover)? Do these companies keep track of their bug turnover periods, and what is the empirical comparison with that of FOSS?

    Was there pro-active debugging done in the FOSS code that were results of known bugs in the proprietary code base, and if so, were these bugs addressed in the proprietary code?

    Was there a verifiable process for maintenance in the proprietary companies that had changed in the 3-6 months prior to the testing?

    I think that will do for now. Plenty more where that came from. :)

    Taran
  • ... in your opinion, for one piece of code to be of higher quality than another?

    easier to understand? (and how do you evaluate this)

    more compact code? (i.e., fewer LOC)

    more evidence of encapsulation and data hiding?

    more comments (to better explain what the code is doing)

    fewer comments (the code stands on its own)

    rigorously standardized naming conventions?

    choice of language?
    All too often, one man's notion of quality is another's nightmare.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...