Open-Source != Security; PGP Provides Cautionary Tale

Porthop points out this "interesting developer.com story regarding the security of open source software, with regard to theories that many eyes looking at the source will alleviate security problems." It ain't necessarily so, emphasis on necessarily. Last week it was discovered that, in some (uncommon) cases, a really stupid brainfart bug makes PGP5 key generation not very random. The bug lived for a year in open-source code before being found. If you generated a key pair non-interactively with PGP5 on a Unix machine, don't panic, but read carefully; you may want to invalidate your key. Update, next day: several people have pointed out that although PGP5's code is available (crypto requires code review), it can't be used in any product without permission. The incentive for code review is therefore lower than for other projects of its importance, and I really shouldn't have called PGP "open-source." Mea culpa.
  • by Anonymous Coward on Sunday May 28, 2000 @06:25PM (#1041156)
    Would this "bug" have been discovered if the source was closed?
  • by PollMastah ( 174649 ) on Sunday May 28, 2000 @06:26PM (#1041157) Homepage

    Umm... since when is Open Source = security?? Somebody already posted this link [acm.org] on a previous story. It describes a kind of trojan that not even source code auditing can prevent.

    But of course, seeing that slashdotters never bother to do their research (in spite of habitually telling newbies to RTFM), here comes my obligatory Slashdotter response poll :-P

    Poll: Most typical response to this article:

    1. See? It's right in your face and you still won't admit that Open Source is flawed! M$ forever!
    2. What?? Open-source != security? Oh no!!! My world... collapsing!!
    3. PGP is eVil! Down with PGP! Everybody use GnuPG! We all know that the GPL makes it secure! (huh?)
    4. *ahem* *cough* umm..., yeah, IIRC, IANAL AFAIK, but *ahem* yeah, this doesn't prove anything, you see, open source is always right, *ahem* this is just a special case, blah blah *ahem* ok please gimme my daily dose of karma.
    5. For your information, Signal11 ... (hmm, anyone know if the moron who posts this to every other article is a spam-bot?)
  • I'm actually really grateful to see something like this happen.
    Not because I'm anti-open-source, or anti-PGP, but because I think that open-source has led to a few bad habits:
    1) It's 'good' software. By this I mean most people (including myself) think that the software, while looking like it works, does exactly what you think it's doing. Oh, some other programmer has checked it, I'm sure. Unfortunately I don't think that's the case anymore, after releasing a few things myself and receiving one piece of feedback for about 1000 downloads.
    2) Constant upgrading. I do it. You do it. Everyone does it. I'm not saying that constant upgrades are a bad thing, but it does seem that releases (aside from the more major projects) aren't tested at any deep level. This is more a bad habit of programmers (once again I raise my hand; I suck at QA). I'd love to see some open source QA people inside a project. I've yet to see an internal release get passed around before going up. I know that's what the x.x.1 version is for, but a lot of bugs shouldn't even be in there; they're from 4am coffee splurges and should be checked by friends or whatnot.
    3) Ripping code that isn't tested with that setup. *cough* This part really bit me once with some network stuff. Ooh, they did it this way; I want it that way too! Not the best approach, in my experience. It's great to re-use code, but check it out first. I've seen snippets of other people's code that are both broken and misused, and of course cause small bugs to show up in the app.
    K, that's my rant. My 3 bad habits anyway.
  • by fluxrad ( 125130 ) on Sunday May 28, 2000 @06:27PM (#1041159)
    I think the principle that people are missing is that, all things being equal, a bug/security hole is going to be found a LOT quicker by examining the source than by simply using the program.

    I used to think that security through obscurity was a valid security model, reasoning that so long as no one knew how or why something was built, at least in source terms, then it would be better for everyone. A person can't exploit something they don't know is there. The largest problem with the obscurity model is the fact that there *are* people who just look for exploits. They get home from work/school and hack away at these utilities. By not allowing the source to be released and scrutinized, you're going to see bug-fixes arrive later than they should, and you're going to see exploits that go for months or years completely unpatched. This makes for all-around buggier programs and, by inference, more exploitable programs.

    Open source is by no means the best practice in some specific situations (at least right now). There are factors other than just bugginess and exploitability that software manufacturers take into account. But in *general*, the open source model is much more efficient and robust than the *alternative*.


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • by Anonymous Coward
    From what I've noticed over the past few years, most projects have a small circle of core developers who understand enough of the source to actually catch subtle bugs. The 'many eyes' concept tends to be another one of those 'in theory it should work' conclusions. Not to knock open source projects, because many are absolutely amazing in their accomplishments, yet they tend to suffer from inner-circle syndrome.
  • Meaning that...

    pgp5i will eat out of /dev/random when it is used non-interactively.

    In Linux, with entropy-based randomness (take the time between IRQ requests into a 'pool', then feed it into the randomness generator as seed), it is just fine.

    It's the other Unices that are broken, not pgp5i.

  • Read this article [earthweb.com] by John Viega, one of the authors of Mailman. He talks about how open-source software does not necessarily mean security, how many eyes do not mean anyone is actually looking for loopholes, and why. He makes other interesting points too.
  • The article's main objection is that going open source will give hackers the ability to find weaknesses in security.

    This is a complete joke: a person who wants to find a security hole in a program doesn't care one bit whether their copy of the source code was obtained legally; they will just get it any way they can, whether it be downloaded from an illegal site or decompiled themselves.

    The friendly programmers, however, do care about the legality of their source code, and are the ones who will gain access through open source.

    So quite simply, open source means little increase in hackers finding flaws to exploit, but gives a huge increase in the number of programmers solving the problems.

  • by aardvaark ( 19793 ) on Sunday May 28, 2000 @06:32PM (#1041164) Homepage
    If it were proprietary, would anybody have even found it? This isn't exactly a fair comparison, as I'm sure you could find a bunch of bugs like this in proprietary code if you could just look at it. OSS will always be more open to criticism because the source code is actually there to criticize!
  • The article mentions two very important issues for open source projects: security audits and obfuscated code. I know that I hate looking at spaghetti code, and the world of open source community development can be a great spaghetti code generator. It works but can be very difficult to debug. Under strict development guidelines this can be reduced (e.g. NASA space shuttle code). Unfortunately, it's going to be difficult to dictate a strict programming model when the programs are developed on the programmers' own time. Plus, security audits by outside companies are something that OSS can't afford (unless a company like Red Hat pays for them). Yes, I know that many security experts examine the Linux OS and its code. While I still believe that Linux is more secure than most OSes out there (especially Windows 98/2000), we should not be zealots. Commercial development houses can enforce standards and pay to have extensive external audits.
  • GnuPG [gnupg.org] is okay, right?
  • by Anonymous Coward
    "This problem was found by Germano Caronni , and verified by Thomas Roessler and Marcel Waldvogel . "

    I.e., if it wasn't open-source, the problem would *STILL* be unfound. PGP 5i isn't a bazaar-developed app, so it still has all the 'benefits' of commercial development. It was OSS that found that bug.

    OSS makes it more secure. The article that claims otherwise is irresponsible journalism.
  • Closed source systems have security bugs too

    Try hitting escape in a Win 9x Password dialog.

  • Having many eyes looking at source code almost invariably leads to bugs being found. However, the popular misconception is that any piece of open source software actually IS continuously looked at by alert people.

    The reality of the issue is, at least in the few projects I'm involved with, that just distributing software in source format doesn't mean it will be looked at. Not by the end users (this is obvious), but not that much by developers either -- even the core developers usually divide the workload by assigning module owners and such, and as a result, code in someone else's module rarely gets properly reviewed. Sure, someone might keep an eye on the commit logs, but that's hardly a decent way to review evolving code.

    So, with regards to security issues, I think things boil down to this: unlike proprietary, binary-distributed software, open source as a distribution mechanism isn't explicitly designed to prevent code review.

    If the opportunity for peer review has been left unutilized in a single project, others can use the example to learn. Open source isn't about automatic benefits in software quality -- it's about making work towards better software possible.

  • by Anonymous Coward
    PGP 5i doesn't meet the DFSG (or the OSD), so it's not open source. In particular it fails the "okay for for-profit use/distribution" test. Lots of people refuse to even bother with the new PGPs, especially with GPG out now.
  • Professional programmers, like the guys at Microsoft or Apple do this stuff for a living and thus have to get it right or they're out of a job

    Does this mean that Microsoft now employs about 5 staff worldwide? So far I have yet to see Microsoft get it "right". Yes, opening up code to a million eyes does mean that more idiots see the code, but it also means that more veteran programmers see it. When was the last time you took a look at any Windows source code?

    So a bug was discovered in open source software; big deal. It'll get fixed and people will move on. To fix a bug in Windows, you first have to beat Microsoft over the head severely with it; then, when they deny it exists, you have to create some program that illegally demonstrates their bug. Only then will they admit that there was an unplanned "feature" (read: bug) and promptly proceed to shut your program/site/self down permanently... oh, and if they get some time... maybe... they might fix the bug (in Service Pack 13).

  • by Effugas ( 2378 ) on Sunday May 28, 2000 @06:38PM (#1041172) Homepage
    Background: I've been auditing GPG lately for using it as a high-throughput non-interactive key generator. So I have some right to talk about this.

    Everybody, generating keys non-interactively is ridiculously difficult, because to be honest there's a very small amount of entropy in your system. Clock differentials and specific CPU traces are pretty good, but everything else either derives from the network (and is therefore remotely attackable) or traces itself back to PRNGs (various memory assignment algorithms, etc.).
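    (A toy illustration of what I mean by clock differentials; this is a sketch, not production entropy gathering, and you must credit far fewer bits than you sample:)

        #include <sys/time.h>

        struct timeval a, b;
        unsigned jitter;

        gettimeofday(&a, NULL);
        /* ... wait on some interrupt-driven event, e.g. a disk read ... */
        gettimeofday(&b, NULL);

        /* keep only the noisy low-order bits of the interval */
        jitter = (unsigned)(b.tv_usec - a.tv_usec) & 0x7;
        /* 'jitter' is worth ~3 bits at best, and still has to be pooled and hashed */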

    That's not to say that this isn't a problematic bug, and that it doesn't need correcting. But non-interactive keygen just isn't that common (yet; I'm working on that), so the exposure is thankfully smaller than it otherwise might be.

    As for Microsoft, to be honest I have very little confidence that the RNGs in any web browser would survive an audit by Counterpane Labs. MS does very good stuff; crypto generally isn't among it (though any of us would be a fool not to note that they're shipping 128-bit SSL by default).

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Never mind the non-randomness: It's a buffer overrun!

    static unsigned
    pgpDevRandomAccum(int fd, unsigned count)
    {
        char RandBuf;
        unsigned short i = 0;

        pgpAssert(count);
        pgpAssert(fd >= 0);

        for (i = 0; i <= count; ++i) {
            RandBuf = read(fd, &RandBuf, count);
            pgpRandomAddBytes(&pgpRandomPool, (byte *)&RandBuf, sizeof(RandBuf));
            pgpRandPoolAddEntropy(256);
        }
        return (i);
    }
    If count is anything over 1, that call to read() is gonna stomp on the stack.

  • While no one argues that open source provides perfect security, it isn't fair to say that open source is insecure. Bottom line: what would the likelihood of discovering this bug and getting a fix out be if the source were closed?

  • by Chalst ( 57653 ) on Sunday May 28, 2000 @06:39PM (#1041175) Homepage Journal
    I'm reminded of Bill Joy's retort to the idea that many eyes make bugs shallow from the recent Salon article [salon.com]:

    • "Most people are bad programmers," says Joy. "The honest truth
      is that having a lot of people staring at the code does not find the
      really nasty bugs. The really nasty bugs are found by a couple of
      really smart people who just kill themselves. Most people looking at
      the code won't see anything ... You can't have thousands of people
      contributing and achieve a high standard."
  • by Anonymous Coward
    PGP is NOT opensource-developed. It's OSS-released... i.e., cloistered programmers code away, then a 'final' product springs forth from their loins fully formed.

    The bug was found by outsiders, so OSS made it more secure.
  • by VAXman ( 96870 ) on Sunday May 28, 2000 @06:43PM (#1041177)
    The Windows password dialog is not meant as a secure log-in; it is meant to provide different options to different users who share a computer. Windows doesn't even have file permissions; this is not a bug, but a consequence of the fact that its file system has been backwards compatible since the original release of DOS. Windows NT is highly secure.

    Slashdot is almost as insecure as Windows, and delivers only bare-minimum security.

    I challenge you to find a security bug in any version of VMS past 4. This is one of the most closed, proprietary operating systems in production, and also one of the most secure (it even attained B2 - when is an open source OS going to get a security rating?)

  • Just because it is open-source doesn't mean all bugs are surely spotted... but at least there won't be any freakin' NetscapeEngineersAreWeenies backdoors!

    make some money [slashdot.org]

  • by KFury ( 19522 ) on Sunday May 28, 2000 @06:45PM (#1041179) Homepage
    Open-source is more secure in the long run, but is less secure immediately.

    The idea is that security through obscurity is perfect until someone finds the hole; then it's worthless. In contrast, when using an open source solution, the security is inherently flawed because there is no obscurity, but as time goes by it gets less and less flawed, as responsible people find and patch holes, to the point where it's a safer bet than the obscure method.

    The most effective real-world security may be to combine both, or only use open methods that have been analyzed long enough that they're virtually certain to be secure.

    The security of obscure methods is simply harder to quantify, and you don't know when they become worthless.

    Kevin Fox
  • Did you read the article? It reads out of /dev/random, but then it overwrites that with the return value from read (always 1). It gets a series of 1's and thinks it has random data. It only has a problem on systems with /dev/random, because then it's fooled into thinking it's getting random data when it's not.

    Please read the article before you respond.
  • Not that I want to bash the topic, but I think that "Open-Source != Security; PGP Provides Example" is going a little too far. The linked page says, "Chances are very high that you have no problem. So, don't panic." It says that this only affects the "PGP 5.0i code base". Is the topic supposed to scare everybody out of their shoes to rethink open-source software? I believe this headline goes a little too far, considering that this software probably isn't being looked over by too many people anyway. Oh please, Slashdot, don't turn into ZDNet!
  • by mrdlinux ( 132182 ) on Sunday May 28, 2000 @06:47PM (#1041182)
    This is true, but that was not the problem. The problem was that they were assigning read()'s return count to the buffer that was supposed to hold the byte that was read! Since they only read one byte at a time, the buffer always contained the value 1.
    Here is the relevant code:

    char RandBuf;
    for(i = 0; i <= count; ++i) {
    RandBuf = read(fd, &RandBuf, count);
    ...

    From the read man page:
    ssize_t read(int fd, void *buf, size_t count);
    On success, the number of bytes read is returned


    As you can see, RandBuf was being set to the number of bytes read, instead of the byte read.

    In fact, I have my own issue with that code. The for loop should read:
    for(i = 0; i < count; ++i)

    But I am not very familiar with the context of this code. The original code would loop count + 1 times while my version will loop count times. This may or may not be the desired behaviour. I guess I'll go send in another bug report ;)
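    Putting the two fixes together, the loop might look something like this (just a sketch, reusing the names from the quoted snippet; I haven't tried building it against the PGP tree):

        char RandBuf;
        ssize_t n;
        unsigned i;

        for (i = 0; i < count; ++i) {
            n = read(fd, &RandBuf, 1);      /* read exactly one random byte */
            if (n != 1)
                break;                      /* error or EOF: stop accumulating */
            pgpRandomAddBytes(&pgpRandomPool, (byte *)&RandBuf, sizeof(RandBuf));
            pgpRandPoolAddEntropy(256);
        }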

    Anyone notice that Extrans doesn't seem to be working? Or is it just me.
  • I think common sense can explain this one. You simply can't generalize something like "Open Source == Security" without finding exceptions like these. Open Source can only be considered "Secure" if the benefits really are benefits, in this case if the openness of the source is actually used--if nobody looks for the bugs, they may never be found.

    Likewise, you can't argue that Closed Source is or isn't Secure, because closed source could be more secure if it's professionally audited, or it could have blatantly dangerous bugs pass through if it's not.

    I think what I'm really trying to say, is that I'm not surprised, and I don't understand why everyone else is surprised about this. Good thing they found the bug though!

  • From the article:
    The count parameter is always set to the value 1 by the calling code. The byte read from the file descriptor fd into the RandBuf buffer is subsequently overwritten with the read() function's return value, which will be 1. The actual random data are not used.

    Err... did I miss something?

    phobos% cat .sig
  • It's obvious that security through obscurity is not the ideal, however I'm not so sure that open source programs are more secure.

    Consider a program (such as PGP) which is written with open and closed source versions. Both are just as likely to have bugs at first, but both are inspected by other programmers. The closed source program is inspected by a couple of programmers inside the company who are quite knowledgeable about security. The open source program is inspected by dozens of programmers, most of whom know very little about security. The score so far? I'd say about even.

    Now the programs are released and people start using them. Hackers start trying to find exploits in them. If an exploit is found by a "white hat", it is reported and fixed. If it is found by a "black hat", it is used to attack systems for a while before being noticed. For the closed source program, it is less likely an exploit will be found outside the company than for the open source program. So it's likely more exploits for the open source program will fall into the hands of the "black hats".

    Of course, this assumes similar numbers of white and black hats, if there are more white hats then bugs in an open source program will be found quickly. Apparently this has not happened for the programs discussed.

    Offset against this is the fact that bug fixes are likely to be much quicker in the open source program.

    I'll still use open source programs for another reason. Security flaws can also be intentionally introduced for several reasons. This is one type of bug which is extremely unlikely to occur in an open source program.

  • No, you're wrong. It doesn't matter how the /dev/random numbers are generated.

    If you'd read the entire thing, you'd see that the bug caused the program not to read ANY (real) data from /dev/random. So whether /dev/random is working or not doesn't matter.

    The page specifically lists Linux as one of the systems that this bug occurs on.
  • 3) Ripping code that isn't tested with that setup. *cough* This part really bit me once with some network stuff. Ooh, they did it this way; I want it that way too! Not the best approach, in my experience. It's great to re-use code, but check it out first. I've seen snippets of other people's code that are both broken and misused, and of course cause small bugs to show up in the app.

    I think this is possibly one of the worst habits in programming, and it's why I never rip code unless I know why and how it does what it does. At least, I think I don't... =)

    There's a guy at work who has no clue how to program, and yet somehow has ended up with the job of writing various stuff, usually shell scripts or Windows batch files. He is literally unable to do anything unless he rips an example from somewhere and tweaks it. Problem is, he can't find relevant examples.

    For example, he needed to check whether a file existed. I suggested various methods, but instead he found some code to automatically compress the largest file in a directory. I have no idea why he did this, except maybe because both problems had the word "file" in them. He ended up adding 20 more lines which somehow made it sort of do what was needed. Now there are 30 lines of DOS batch file code, only 2 of which are needed or even relevant.

  • Most people looking at the code won't see anything ... You can't have thousands of people contributing and achieve a high standard.

    That's a non sequitur if I ever saw one.

    A 'high standard' is set by having each part of the program do exactly what it's supposed to do - nothing less, nothing more. This is not some ethereal concept - it can (and should) be verified mathematically or with equivalent methods such as 'design by contract'.
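    (To make that concrete: in plain C, the poor man's version of a contract is assertions at the function boundaries. A sketch only; the function name is hypothetical:)

        #include <assert.h>

        /* contract-style checks: preconditions on entry, postcondition on exit */
        unsigned accumulate(int fd, unsigned count)
        {
            unsigned done = 0;

            assert(fd >= 0);       /* precondition: valid descriptor */
            assert(count > 0);     /* precondition: something to read */

            /* ... do the work, counting successes in 'done' ... */

            assert(done <= count); /* postcondition: never report more than asked */
            return done;
        }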

    If more people did that - if programmers working on complex and 'mission-critical' (I hate the term, but I can't come up with anything better at the moment) systems would just admit that they aren't perfect and use all the tools at hand to get everything just right (as opposed to working with bare-bones tools, which unfortunately seems to be a favourite of *nixers) - then more software projects would achieve truly high standards, whether the software were Free or proprietary, whether it were designed by 2 or 2000 people.

    Because otherwise, it's just the same story from both sides - just as 'proprietary' doesn't necessarily mean 'high-quality', neither does 'open source'.

    Then again, I guess I can't expect much from Mr Joy, after he made an ass out of himself by playing Prophet of the Apocalypse... Ah well.

  • One of the problems I see is that a lot of the users who download and run open source software have no desire to learn to program and couldn't contribute to the security even if they wanted to. Just because a piece of software is downloaded 1,000 times doesn't mean that it's been downloaded by 1,000 programmers who could understand the source.
  • On the second link of this article, if you scroll up, you will see convincing evidence that open source software has nothing to fear from security competition with closed source software.

    Here is a direct link [cryptome.org], read the first article, although I doubt you will be surprised.

  • The article that I read said nothing of the kind, but rather said, 'Open-source advocates tend to assume that open-source code has been thoroughly reviewed for security under the many-eyes theory, but this isn't necessarily true.'

    The alternative of closed-source was mentioned, and dismissed as not being any better.

    This article was, in short, saying 'this is a shortcoming of open-source', but it was -not- rehashing the security-by-obscurity argument from the closed-source camp; it was discussing the fact that those many eyes may not be looking as closely as we assume.

    Your response makes -no- sense at all, and has -nothing- to do with this article. It's an answer to the -usual- security debate around open-source but has nothing whatsoever to do with -this- article.

    --Parity
  • I'll still use open source programs for another reason. Security flaws can also be intentionally introduced for several reasons. This is one type of bug which is extremely unlikely to occur in an open source program.

    Or we just don't notice it... Remember that UNIX backdoor perpetuated through the C compiler? The one that would perpetuate itself when the compiler was recompiled? It would be completely invisible when looking at the source.

    I looked at the report, and it does not appear that your assessment is correct. The problem is in pgp5i: it does NOT read from /dev/random where it should. It doesn't matter if the entropy is there or not. Basically the problem was something like

    randomBuf = read(random_fd, &randomBuf, 1)

    intending to read one random byte into randomBuf. Which it does in evaluating read, but then it promptly overwrites that value with the return of the call to read, which is the number of bytes read, which is always 1 (unless you get an error, but how often does that happen?)! So the buffer is always '1', even on Linux. Damn, if that doesn't suck.

    A comment on the issue of open source having more eyes on the code...

    It may be nice to have more eyes on the code, but what worries me is testing. It's what even the most experienced coder wouldn't think of that can come up in those really weird deviant test cases, after which you smack your forehead, say "shit!", and fix it. True, we have tons of people using and reviewing the code, but does it really get as rigorously tested as when, in commercial development, people are paid to do nothing other than put it through the wringer? Just a thought.
  • Can someone comment on the likelihood that this is a genuine bug, vs. the possibility that the flaw was introduced deliberately by some party to weaken PGPi?

    I guess I'd be interested in knowing how long the flaw has been in the code, and also who wrote this particular block of code.

  • The scary part is not the hole in PGP. That's been found, and if it hasn't been fixed already, it will be very shortly.

    The scary part is things similar to this that HAVEN'T been found.

    I get the feeling that the really successful crackers are probably the types of people who spot things like this and never mention them, just exploiting them for their own use.

  • If you want people to carefully look over your code, make sure that you put an error in it, one that generates a really obvious error. I've been using this technique for a long time now, and it's worked wonders.

    Those PGP people are too competent for their own good. If outsiders trust PGP too much to check it, everybody loses.

    On a related note, my own incompetence has saved me from this bug--because I've never memorized the command-line options to PGP, I have to use it interactively.

  • Meaning that...
    pgp5i will eat out of /dev/random when it is used non-interactively. In Linux, with entropy-based randomness (take the time between IRQ requests into a 'pool', then feed it into the randomness generator as seed), it is just fine. It's the other Unices that are broken, not pgp5i.


    I have no idea where you got that from. It sounds like you don't either. Check out this alert [computerworld.com] in Computerworld:


    The flaw was discovered in the PGP 5.0 code base and is specific to Linux and OpenBSD command-line versions ... Versions 2.x and 6.5 of PGP aren't affected and nor are PGP versions ported to other platforms.

    _____________
  • The Windows password dialog is not meant as a secure log-in, it is meant to provide different user options to different users who share a computer. Windows doesn't even have file permissions....

    Lame attempt at tongue-in-cheek humour aside, you highlight the point that I was trying to make beautifully.

    This bug is not in the code. Login security is totally absent from the code.

    Win9x security was designed to be backwards compatible with a security flaw in DOS 1.0.

    Also, I don't have a VAX handy; is there a port of VMS for the i386?

    When is an open source OS going to get a security rating?

    Good question: Does anyone know of work in progress?

  • Slashdot's choice of "sensationalized" headlines is getting to be as bad as the mainstream media's! Saying that PGP is proof that open source is insecure... that is the most misleading headline I have ever read. I would almost call it blatant lying.

    If PGP was NOT open source, this "flaw" would never have been released to the public. This is why open source works. Yes, it took a YEAR for this flaw to surface. BUT IT SURFACED! Besides, PGP is pretty sophisticated software; it makes the Linux kernel look like a "hello world" program.

    Over the past few weeks, I have seen some very poor choices for headlines. You guys either need to hire an editor to process stories before publishing, or start thinking before posting... (Gee, something all Slashdotters should do!)

    I would hate to compare Slashdot reporters with the Holland Sentinel's or Muskegon Chronicle's reporters (sensation first! facts last!)

    Change that headline or post an apology. PGP is proof that open source is more secure! (If Microsoft owned PGP, they'd call that a feature!)
  • Adding some hardware can help solve the entropy problem, though. See http://lavarand.sgi.com/ for one example. Or feed a radio tuned between stations into your sound card... Some problems are really hard to solve in software.
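    (A rough sketch of the sound-card trick; this assumes an OSS-style /dev/dsp device, and the raw samples still need hashing and a very conservative entropy estimate before use:)

        #include <fcntl.h>
        #include <unistd.h>

        /* harvest raw sound-card noise as seed material */
        int fd = open("/dev/dsp", O_RDONLY);
        unsigned char noise[512];

        if (fd >= 0 && read(fd, noise, sizeof(noise)) == (ssize_t)sizeof(noise)) {
            /* run 'noise' through a cryptographic hash before trusting it */
        }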
  • Gee, Slashdot flinging FUD at open source development? Who'd have thought...

  • Does anyone know of work in progress?

    SGI is creating B1/Orange Book Linux.

  • *insert tongue in cheek*

    I propose a new method of security programming that takes the best from both models.
    Just imagine: the security and stability of Win2000, excellently obfuscated, with the ease of use and proprietary extensiveness of all *nices.

    I call it "Steve"

  • by Anonymous Coward on Sunday May 28, 2000 @07:25PM (#1041204)
    No one thinks open-source makes software invincible to bugs. Anyone who does... well I have some magic beans I'd like to sell you.

    The peer-review aspect of open-source is just a nice feature, and actually works most of the time. It isn't an ultimate and guaranteed aspect of it.

    People try to be smart, saying, "Oh, most people looking at the code aren't qualified." Wow, such a revelation. Yes, we thought there was a mystical army of highly trained CS experts poring over all open source code for bugs.

    Things slip through the cracks, even in the scientific community's peer review. Humans aren't perfect. Get it through your head.

    And yet, people fail to turn this accusing finger all the way around and wonder the same about commercial software. They just excuse it saying "Oh their jobs depend on it, they must check it."

    The major driving force in open source is that the programmers actually *use* the software they create. If a bug is found, they *want* to fix it because they are using this software too. They are directly affected. In the case of commercial software, even expensive software, they are not directly affected. Does Microsoft really want to fix bugs? No, it costs them money. In most cases, compatibility issues require companies to buy their software anyway.

    So you might say, "Hey, paying a lot for software ensures getting good software, because the company can pay for experts to pore over every line of code for bugs." Well, yeah, but who says they will? They'll only do it as long as it's profitable. Then you'll be stuck with the bugs as fast as you can say COBOL. Oh wait, it will be worse than that, because you CAN'T fix it.

    No one said open-source was perfect, and just because it isn't doesn't mean the alternative is automatically better.

    Maybe there should be a Frequently Used Arguments list. I bet a whole bunch of posts say about the same thing I have. That was a pretty stupid flamebait comment in that article. Oh, was it supposed to make us stop and think about something? There are better ways to do it than pasting FUD-style (yes, it was) flamebait.
  • The point I'm trying to make is that no one is accountable for open source screw-ups. Most of the positives of open source are merely conjecture or urban legend at this point. As more of these stories make the rounds, the more luster will be lost from open source. Open source cannot work. It won't work.
    If you read the EULA on the pirated Microsoft software that you install, IT CLEARLY STATES THAT MICROSOFT HAS ABSOLUTELY NO ACCOUNTABILITY OR FAULT IN THE FAILURE OF SAID PRODUCT.
  • a) It's Rob's site; he can do as he pleases.

    b) Having your postings at -1 is hardly censorship. I read at -1 all the time.

    c) Arguing that having a post at -1 is censorship is like saying "hey, my letter to the editor in your newspaper isn't on the front page, I'm being censored!"

    d) Did I mention it's Rob's site, and he can do whatever he wants? If you don't like it, you are free to post / go elsewhere.
  • by sreeram ( 67706 ) on Sunday May 28, 2000 @07:26PM (#1041207)
    I think you have to agree that "security through open source" is not a given. Let me try to summarize the arguments we've heard while adding some of my own.

    Against: If you open the source code, you are making it much easier for crackers to find flaws in your system.
    For: Yeah, but there will also be good guys finding flaws too, which will let us fix the bugs faster.

    For: If you close the source code, it doesn't mean that crackers won't find flaws. A determined cracker will get in, eventually.
    Against: Yeah, but just look around. There are a lot of good guys finding holes in closed source software as well, e.g., Bennett Haselton of Peacefire.

    For: Yeah, but the many eye-balls effect is a unique advantage of open source. Closed source software doesn't have that.
    Against: Well, the many eye-balls principle is just that, a principle. As this article shows, a lot of people just assume that others are doing the security audit; most are not competent to find flaws even if they are looking; nobody wants to look at a tangled mess of C code, etc. In reality, if your program is not an obviously security-related product (say it's your run-of-the-mill application), you have to admit that many eye-balls won't find any problems there. But a lot of systems are still put at risk because of these "applications".

    I think what the critics of open source security are missing is the deterrent power of open source. If they are really right in their claim that more crackers than good guys will be finding flaws in my program, then that's a strong deterrent against just coding away as I wish. I have a sort of moral responsibility for the code I write (the warranty disclaimers notwithstanding), and I would be peeved if a cracker penetrated a system because of gaping security holes in my work.

    The incentive for writing better code is that much lesser if I know that "hell, who's going to be spending time disassembling this code, I've got a deadline to meet".

    Sreeram.
    ----------------------------------
    Observation is the essence of art.

  • Could you at least try a bit harder to be an entertaining troll :)
  • The scary part is things similar to this that HAVEN'T been found.

    I have to say this topic has been quite unnerving. I'm a web developer, like many of you out there, I imagine. I am responsible for designing and programming small to medium size web applications for my company. I can use M$ products if I want, or I can use open source products. It's my call, but also my ass on the line.

    I am a good programmer, but I am *not* a security expert, nor do I have the time to learn to be one on top of my other responsibilities. I don't want to use M$ products like IIS and ASP, but I know that if I do - and if a bug or security hole is found - it will pretty much be written off as M$'s fault, and not mine, although I will probably have to go back and fix the damage.

    However, if I choose open source software and we get hacked, my company will *definitely* view it as my fault. Now, I'm not one to play it safe, and I've got Linux/Apache/MySQL/PHP/Perl running all over the place, but still... this topic makes me worry.

    Does anyone else have any thoughts on this? Feel the same way as me?
  • Windows is stupid? What do you expect? 50% of the population has an IQ under 80. - Trivial Pursuit

    huh? Isn't 100 the median IQ? Doesn't that mean that 50% of the population has an IQ under 100, not 80?
  • > for(i = 0; i <= count; ++i) {

    In fact, I have my own issue with that code. The for loop should read:
    for(i = 0; i < count; ++i)

    Yep. The original coder probably thought that using ++i instead of the "standard" i++ would somehow magically make the increase happen before the loop and the test.
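    (For the record, the two forms are identical here; the third for-clause runs after each iteration either way:)

        for (i = 0; i < count; ++i) { /* pre-increment */ }
        for (i = 0; i < count; i++) { /* post-increment: same iteration count */ }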


    --
  • by Anonymous Coward
    That wasn't the problem, as count was always set to 1 by the caller anyway, although it is very poor coding: if count has to be 1, why bother having it as an argument? It's asking for bugs.

    The actual bug is assigning the return from read() back to RandBuf. DOH! Turning on compiler warnings probably would have found the bug at the first compile, because the conversion from ssize_t (the return type of read()) to char loses precision (and, I would argue, is one of the many implicit conversions in C that shouldn't exist anyway).
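    (For instance, a minimal sketch only, and the exact flag is a guess on my part: something like gcc's -Wconversion should flag the narrowing assignment:)

        char RandBuf;
        RandBuf = read(fd, &RandBuf, 1); /* ssize_t silently narrowed to char */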
    I interpreted him here as meaning consistently high standards, the idea that everyone's contribution is equally valid: I don't suppose he means that code submissions necessarily pollute the code. His general point is well taken: most of the running on a successful and ambitious open source project is done by a small number of people.

    Exacting software engineering techniques such as design by contract are less likely to make their way into the democratic free-for-all of free software than the totalitarian discipline of in-house development.

    Bill Joy is no opponent of open source. He is simply critical of the idea that many eyes are of much use in spotting really subtle bugs, especially ones to do with security. I have to agree with him on this. And yes, Bill Joy isn't infallible - csh was a pretty bad idea - but he should be regarded as one of the pioneers of open source.

  • PGP used to be developed in a closed-source way.

    Consequence: an unseen bug.

    Once it became viewable by the open source community, the bug was discovered.

    Far from being a demonstration of the insecurity of open-sourced code, it's a perfect example of the contrary.

  • by istartedi ( 132515 ) on Sunday May 28, 2000 @07:58PM (#1041215) Journal

    I think the principle that people are missing is that, all things being equal, a bug/security hole is going to be found a LOT quicker by examining the source than by simply using the program.

    No. Finding any type of bug by using a program is a heck of a lot easier than finding bugs by examining source. Just imagine auditing 50k lines of source. Now imagine using a program and discovering some subtle flaw in the output, like the wrong number of significant digits in some tabulated data displayed on a web page.

    The value of Open Source is not the ability to find bugs, but to fix them. In fact, one of the strong motives for free releases of betas is so that the program will have lots of users, thus increasing the chances that bugs will be found before the official release.

    It would be interesting to do a study. I bet that if you graph bugs per line, it falls proportionately to the number of users for both closed and open source programs.

    In other words... test, Test, TEST. And then test again. And when you're finished testing, you might want to consider some tests.

  • by Todd Knarr ( 15451 ) on Sunday May 28, 2000 @07:59PM (#1041216) Homepage

    It doesn't look like open-source provided an advantage in finding this bug. But because PGP is open source, there are still two advantages:

    • The nature of the problem was found. Had this been closed-source software, we likely would have known the keys were non-random but would have no clue why they were non-random under certain circumstances, at least until the creator decided to release this information.
    • I can fix the problem. Literally minutes after viewing the Slashdot story, I was in the process of rebuilding my copy of PGP5 after having modified it to fix the bug. I would still have been waiting on a fix for a closed-source program.
    As far as I can see, open source still provides advantages over closed source when it comes to finding and fixing bugs.
  • by PhiRatE ( 39645 ) on Sunday May 28, 2000 @08:10PM (#1041220)
    The number of errors in that code is truly disturbing. Here's my contribution for a first try at a decent fix. I hate the code layout though :)

    God knows whether this thing will format OK when it turns up on /. though :) My apologies if gt's or lt's go missing.

    Not too comfortable with the sizeof(unsigned char) stuff; probably better as something like sizeof(*RandBuf). Anyway, I'm sure there's plenty of errors, get stuck in.

    static unsigned
    pgpDevRandomAccum(int fd, unsigned count)
    {
        unsigned char *RandBuf;
        unsigned i;

        pgpAssert(count > 0); /* Make sure we have a count */
        pgpAssert(fd >= 0);   /* Make sure we have a valid filedesc */

        /* Allocate a buffer for the count, and check we got a valid alloc */
        RandBuf = malloc(sizeof(unsigned char) * count);
        pgpAssert(RandBuf);

        for (i = 0; i < count; i++) {
            /* If the read fails, bail */
            if (!read(fd, RandBuf, count))
                break;
            pgpRandomAddBytes(&pgpRandomPool, RandBuf, count * sizeof(unsigned char));
            pgpRandPoolAddEntropy(256);
        }

        /* Free buffer */
        free(RandBuf);

        return (i);
    }
  • Yeah, we lost a bit: in the for line, i < count; /. ripped the < out. Hey CmdrTaco, how about a Code Mode for submissions that fixes stuff like that? :)

  • I understand that "the calling code always passes a count of 1", but so what? What that really means is "the calling code always passes a count of 1 as it stands right now." The code is still crap.

    That's like coding while saying "We don't have to handle that case; it'll never happen". Bad code is bad code, whether or not the effect is immediately seen.

  • by ryanr ( 30917 ) <ryan@thievco.com> on Sunday May 28, 2000 @08:20PM (#1041223) Homepage Journal
    I am a good programmer, but I am *not* a security expert, nor do I have the time to learn to be one on top of my other responsibilities. I don't want to use M$ products like IIS and ASP, but I know that if I do - and if a bug or security hole is found - it will pretty much be written off as M$'s fault, and not mine, although I will probably have to go back and fix the damage.

    However, if I choose open source software and we get hacked, my company will *definitely* view it as my fault. Now, I'm not one to play it safe, and I've got Linux/Apache/MySQL/PHP/Perl running all over the place, but still... this topic makes me worry.


    It shouldn't matter which technology you use. If you get hacked, it's your fault or it isn't, regardless of which set of stuff you pick. Obviously, if your employer or whatever is going to assign blame because you picked something "weird", you have to cover your ass.

    But the point I want to make is that it doesn't matter if you're a security expert or not. Someone, you, the OS vendor, the web server vendor, has already screwed up. There's a decent chance that someone might find said screw-up. If they come after you, you'll be defaced, and there's not a lot you can do to prevent it. In such a situation, the thing to do is to prepare a plan on how to react and recover.

    This includes things like buy-in for downtime to apply patches, whether or not you'll want to do forensics and prosecution, or whether you'll just try to get back on line as quickly as possible.

    The advantage of open-source is that you'll probably get a patch quicker, or you might even be able to make your own when you see a vulnerability report.
  • Exactly. Look at any mature OSS project and you won't find too many security holes. Sendmail and BIND, to name two. On the rare occasion that bugs are found, they are patched within hours, if not minutes. Compare that with literally /any/ hole that has been found in CSS (closed source software) products (like... Windows, for example =]), where we are lucky to have a patch within a week, and where bugs are found almost every week. Or like Sun, where people have literally found exploits in Solaris that Sun just plain ignored or didn't deem worthy of a patch. You're fscking screwed in that case. It's not like you can fix them yourself.

    No; I'd say OSS is the far more secure approach in the long run. That being said, however, security through obscurity is a pretty wise approach for short-lived apps. For example, I have a feeling the reason we didn't see a Slash release for years is because they were still cutting their teeth on Perl and Apache security for the first few years. Releasing the source would really have screwed Slashdot: every "haxor" would have found some sort of hole and messed with the site at a pretty crucial time in its history. I know this is pure speculation, but Taco has admitted numerous times to massive code overhauls, either in posts or interviews. One can only guess why. Even now, when Slash is quite mature, people have found ways to exploit it, BTW.

    --
  • Does anyone sensible actually believe open source = error-free? I don't think so. What a load of sensationalist propaganda. Far from being a bad thing, this is a good thing, because an error has been found. I get so tired of people jumping on the bandwagon to have a go at open source every time an error is found. Of course there are going to be errors; no one is perfect.

  • by gnubie ( 2277 ) on Sunday May 28, 2000 @08:54PM (#1041232) Homepage
    What are the chances of getting some editorial accountability around this place?

    Jamie, before you go stating that "OSS != Security," please consider:

    • Bugs in crypto systems are extraordinarily difficult to hunt down and squish. Read Applied Cryptography [fatbrain.com] if you feel like getting your brain around why.
    • A bug of this magnitude in a product with source code not available would probably never have been discovered.

    PGP's license has never met the Open Source Definition (it's free to use only under certain circumstances). Despite this technicality, your headline is stupidly sensational and self-defeating. Wouldn't it have been much better to title it "Key Generation Bug Found in PGP 5"?

  • Hmmm... reminds me of a program I was working with at work a year or two ago. It was a Java program accessing a database through JDBC, and it worked just fine with the database we were using. Then we tried running it against an MS SQL Server database instead. We ran into a couple of bugs in various MS components (most notably their JDBC driver). After reducing it to a simple test case, we wrote a bug report to MS.

    A month or so later we finally got a reply from MS which said that they received our report, but that the problem was in compliance with the spec, and not a bug in their driver. So I replied with a direct quote from the spec that showed that they were indeed doing it wrong.

    Another month or so went by, and I got another reply from them. This time they conceded that they didn't match the spec, and assured me that this "feature" would be added in the next version.

    Don't know if it ever got fixed...by this time we had given up on it and moved on to other things.
  • shouldn't your read be !read(fd,RandBuf,1) as per the Buf-Fix that was posted :)

    Nope. Because I allocate the correct buffer length etc, and because I don't assign the read return value into the buffer.

  • It's likely a mistake not to check the return on malloc(). Calling free() on a null pointer isn't good ;) Unless, of course, some error checking is done in pgpAssert; haven't looked.


    That's exactly what pgpAssert() does :) It checks for a 0/NULL value and bails if that occurs.


  • by Effugas ( 2378 ) on Sunday May 28, 2000 @09:26PM (#1041243) Homepage
    Bizarreness. I spent about two hours the other night studying use of the mic port.

    The best solution I found involved hooking an AM radio, mistuned, up to the mic port; then people mentioned FM had more entropic properties. Your big problems are: 1) you've seriously got to deal with the fact that a 60Hz bias is coming off of the nearby AC transformer/power supply, and 2) an attacker can pretty easily broadcast patterns at you on the exact frequency you're trying to be mistuned to. Since anything that's receiving a signal is also transmitting it (thus causing major privacy issues when a parking lot scans to see what stations people are listening to by picking up their "sympathetic" (correct word?) retransmissions), you should remotely be able to determine the AM/FM band being used. Not Good.

    I was thinking for a bit that deriving entropy from the differential sync between many different NTP servers might be decent, but A) this doesn't scale, and B) the differential sync, even at the minute scale, likely isn't more than a couple of bits per resync. So you'd need to scan a few hundred servers a dozen times before you could create a 2048-bit key.

    I need to create about 200 of 'em. A day. Soon to be 500. *sigh*

    Interesting thought of the hour: Randomness isn't contained in the numbers themselves. Is a Royal Flush random? Depends how it was dealt.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Actually, the more I think about the Lava Lamp Randomizer, the more I wonder about its actual entropy. Yes, lava lamps themselves are quite entropic, but how much of the overall image is of the lava lamp? It seems most of the signal they derive comes from the quantization noise from the CCD in their O2Cam--and that's pretty predictable. Now, granted, they munge and one-way hash their original content to oblivion, but that doesn't mean their original content is as highly entropic as they might think.

    It'd probably be more secure if the camera were mounted such that the entire image was a near-microscopic view of the melting wax--but even then I'd be curious literally how many different possibilities of wax melting, unmelting, and wax separation there might be. It's not minuscule, but I do have to wonder how high it might be.

    The real thing that comes to mind isn't that you need 100% accuracy... it's that an attacker can probably do a good amount of work by eliminating the impossible occurrences (like the wax flying out from the lava lamp!)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com

    P.S. That's not to say that the Lavarand system isn't the coolest damn RNG ever invented.
  • Most OSS projects lack the funding to get certified. Each new minor version of the Linux kernel would be enough to force a recertification.

    Care to fork over the funds? I am kinda broke.

    My question for you. What in the hell does the security of Slashdot have to do with anything?

  • So somebody finds a bug in an open source product, and suddenly that's proof of how insecure open source is?

    Gimme a break! This is proof of the contrary.

    Seriously, folks, how many bugs like this do you think exist in closed-source commercial products?

    You know, the type of software where you will never know about bugs like this.

    --
    Why pay for drugs when you can get Linux for free ?

  • In proprietary code, we'd all have to break laws to even demonstrate that it was broken.

    Here we can see that it is fixed, and we can learn from it. _We_ are assured that it was our error of omission, no one else's.

    I am willing to pony up and say I screwed up for using open source without reading the source. I have no problem accepting my part of the blame.

  • Decay of a radioactive element? Funny, that. Just read about some guy with a parallel-port Geiger counter and a microcurie of americium.

    There are better sources that are more environmentally sound--dirty diodes and whatever they've built into the Pentium III look pretty decent.

    --Dan
  • by geirt ( 55254 )
    >
    > Versions 2.* and 6.5 of PGP do NOT share this problem.
    >

    This is how this was fixed in pgp 6.5i:

    if ((fdesc = open(devrandom, O_RDONLY|O_NONBLOCK)) > 0) {
        while ((numread = read(fdesc, buffer, BUFFSIZE)) > 0) {
            for (p = buffer; numread > 0; numread--, p++) {
                PGPGlobalRandomPoolAddKeystroke(*p);
                *p = 0; /* burn it. */
            }

            RandBits = PGPGlobalRandomPoolGetEntropy();
            StillNeeded = TotalNeeded - RandBits;
        }
    }

    <conspiracy mode>
    This bug was introduced in PGP 5.0 and fixed in PGP 6.5. Why wasn't this reported on Bugtraq a long time ago? Although the code is substantially rewritten, I would be very surprised if the author of this code in 6.5 didn't see this bug (after all, he fixed it ...)
    </conspiracy mode>
  • Actually, they use 3 lava lamps standing together, and 3 cameras, in one implementation I've seen used.

    Yup, but some bits are more random than others. With a static camera, there will be bits that are entirely determined by variations in light and sensitivity.

    There are likely enough bits to seed an RNG, but the extensive work I've heard of being done by eliminating impossible combinations (31-round Skipjack was defeated faster than brute force, while official Skipjack is 32 rounds!) leaves me wondering.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • PGP cannot provide an example of Open Source not meaning security because PGP IS NOT OPEN SOURCE!!!

    bye
    schani
  • Bill Joy does have a partial point. Most of the programs recognized as "kick ass" in our business got that reputation after being mostly written by a single person. Cf. sendmail and BIND.

    Most of the exceptions are rigidly controlled by a single person, and have wildly varying parts. Cf. Linux, FreeBSD.

    Are there exceptions? Of course. But look closely before you call something an exception; a lot of it started with a single-person core, and has added cruft from there.

    However, all of this misses a vital point: it doesn't matter if 99.9% of the eyes looking at a given program are incompetent eyes, if the remaining 0.1% is the best of the best.

    There are people at RSA who use PGP. Bruce Schneier uses PGP. Lots of folks who are good at writing crypto use PGP.

    They see the bugs. They don't see them all; if they did, they'd have been fixed long ago.

    Meanwhile, PGP is still better than most of the alternatives. That's *BECAUSE* it's open, not in spite of it being open.

    Open Source benefits me even if I never look at the code, because if PGP had been written by, say, RSA, people like Bruce Schneier would never have been able to look at the code either.

    The advantage of Open Source is that those few really good people can look at it, even if they work for different companies.

    Unless, of course, the lawyers screw it all up by demanding employees not look at outside code.


    --
  • In this particular situation, the bug was eventually found, so yes, it did work, it just happened to take a year. There are bugs in certain other operating systems that took 4 to 5 years to find. Prime example: The bugs *STILL* being found in Solaris 2.5.

    Now, let's see how long it takes for a source patch to correct the problem. I bet it doesn't linger for months or years the way it does with other OSes.

    Heck, look how long it took MS to fix the darned smurf attacks.. :-)
    This is not hypocrisy (or even hipocrisy), though. PGP isn't open source; its licensing is a pile of pants.
    However, the source is available, so the bug has *been* found *and* located, and most importantly a world-verifiable patch has been produced. Beat that, you closed-source fanatic, you...

    If M$loth makes a mistake, they try to cover it up, which is utterly stupid. If an open-source project has bugs, they get fixed.
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • by jamiemccarthy ( 4847 ) on Monday May 29, 2000 @05:09AM (#1041308) Homepage Journal
    <i>What are the chances of getting some editorial accountability around this place?</i>

    Comments like yours are our editorial accountability :-)

    <i>Jamie, before you go stating that "OSS != Security," please consider:</i>

    <i>Bugs in crypto systems are extraordinarily difficult to hunt down and squish. Read Applied Cryptography if you feel like getting your brain around why. A bug of this magnitude in a product with source code not available would probably never have been discovered.</i>

    Many crypto bugs are hard to find. This bug should not have been. Passing in a pointer to a buffer and then assigning the function result to that same buffer? I bet there exists an automated tool which understands the parameters to read() and would find that error.

    It's not like read() is an obscure system call. Using it improperly like this is practically criminal.
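
    To make that concrete, the bug class looks something like this -- a paraphrase, not the verbatim PGP 5 source, with a hypothetical addToPool() standing in for the real entropy-pool call:

    #include <unistd.h>

    extern void addToPool(const unsigned char *buf, size_t len); /* hypothetical */

    void buggy_seed(int fd)
    {
        unsigned char RandBuf;
        /* WRONG: read()'s return value -- the byte count, 1 on success --
           overwrites the freshly read random byte, so the pool is fed a
           stream of constant 1s instead of entropy. */
        RandBuf = read(fd, &RandBuf, 1);
        addToPool(&RandBuf, 1);
    }

    void fixed_seed(int fd)
    {
        unsigned char RandBuf;
        ssize_t n = read(fd, &RandBuf, 1); /* keep count and data separate */
        if (n == 1)
            addToPool(&RandBuf, 1);        /* pool gets the actual random byte */
    }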

    And I never said "OSS != Security"; in fact, I explicitly said the two were not necessarily equal, "emphasis on necessarily."

    <i>PGP's license has never met the Open Source Definition (it's free to use only under certain circumstances).</i>

    OK, you got me there - Dan Kaminsky also wrote in to mention that its license prohibits commercial use, adding "many of the eyes that would have otherwise been directed at the PGP codebase wouldn't touch the product."

    I'm not entirely sure that's true. PGP should naturally attract a lot of eyes by virtue of being high-profile. Many of the people who would be or should be looking for bugs like this one are up-and-coming cryptographers, for whom finding a bug in PGP would garner street cred. They wouldn't care whether they could use the code commercially.

    Still, point taken. Let me talk to a friend who knows PGP better than I do, and I'll look into revising the headline and/or updating the story in the next few hours.

    <i>Despite this technicality, your headline is stupidly sensational and self-defeating. Wouldn't it have been much better to title it "Key Generation Bug Found in PGP 5"?</i>

    When we get two submissions that are both important, and related, it makes for a more interesting discussion to link them together. Unfortunately I think many readers are only reading the PGP story, and skipping John Viega's excellent article [earthweb.com] - or at least there hasn't been much discussion of it, which is a shame.

    Jamie McCarthy

  • >I don't use ANY stolen Microsoft software any more.

    Good for you!

    Given that if you are using stolen Microsoft code your rights are nil, what rights and accountability did you GAIN by buying that Microsoft code?

    The EULA applies, and the quote of "MICROSOFT HAS ABSOLUTELY NO ACCOUNTABILITY OR FAULT IN THE FAILURE OF SAID PRODUCT." is valid.

    So, let's connect the dots, since you won't answer the accountability question and instead launch into a comment on stolen software.

    Let's look at OpenSource:

    1) You claim no accountability
    (Reality: most OpenSource has a license that says the author is not responsible and is to be held harmless)
    2) Code is available to fix bugs if you find one
    3) Bug-fix time can be under a week
    4) Abandoned programs are supportable by users who find the program still useful

    Now let's look at closed source:
    1) The contract claims no accountability
    2) No code is available to fix bugs if you find one
    3) Bug fixes may never come
    4) Abandoned programs stay abandoned, never to run on newer platforms

    Looking at this list, OpenSource puts a burden of responsibility on the user to support themselves, whereas because you can't support yourself with closed source, the burden is (supposedly) elsewhere. Yet no closed-source company has a LEGAL RESPONSIBILITY to take on that burden. If you are not into personal empowerment or the rights of individuals to control the tools they use, I can see where the 'promise' of some closed-source behemoth holding your hand and guiding you along is a seductive siren song.

    As you are an opinionated AC, the question is this: what accountability does Microsoft have for their code?
    If you send an unsigned message, then ANYONE can corrupt it, alter it, or truncate it.

    The idea of message signing is to PREVENT such things. If you don't sign your message, why are you expecting PGP to protect you against attacks which only message-signing would prevent?

    There are two rules with PGP:

    1. If you don't want anyone without the correct key reading the message, encrypt it.

    2. If you don't want anyone to undetectably alter the message, SIGN IT.

    The two are orthogonal considerations.
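
    (In GnuPG terms, for example: 'gpg -e' alone gives you only rule 1; 'gpg -se' signs and encrypts in one pass, giving you both.)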
  • If this bug happened in proprietary software.....

    Would it have ever been found?

    Would the company have advised its users that they might have weak keys that should be revoked? Or would they fix the flaw silently and keep their customers in the dark?

    Open source has other advantages for the consumer.

    But Sendmail has a long history, period. The number of bugs found in Sendmail pales in comparison to the number found in Windows. Go to Bugtraq/NTBugTraq and search for 'Sendmail' and 'Windows'. Which one do you think you'll find more posts on?

    --
    If you really would like to generate random data, I suggest you get a local EE to whomp up a diode noise generator: you take a zener diode and lightly reverse-bias it (if I remember correctly; I'm not in my office right now). As a result, you get a lot of random thermal noise, which you feed into a digitizer. Instant real random noise, with two caveats:
    1. Your soundcard has a lowpass filter on it: therefore any data sampled by the soundcard will have a high degree of short-term correlation. Better wait at least 10ms between reads to get good uncorrelated data.
    2. Unless you are very careful in how you adjust the levels of the signal, it will not cover the whole range of values from the card. You are best off taking the data and gzipping/bzipping it to concentrate the entropy (raising the entropy per bit toward maximum).

    You can also use an FM radio that does not have muting: tune it off frequency and the FM discriminator will spit out band limited white noise. However, the same caveats apply, and the nice thing about the diode approach is that nobody can screw up your random number source with a simple carrier of reasonable amplitude.
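
    If you'd rather not fuss with the levels at all, a standard trick is von Neumann debiasing of the sampled bits -- a minimal sketch, where get_raw_bit() is a hypothetical stand-in for the digitizer (say, the LSB of one sound-card sample):

    /* Von Neumann debiasing: read raw bits in pairs; emit 0 for the pair
       (0,1), 1 for (1,0), and discard (0,0) and (1,1). The output is
       unbiased as long as the raw bits are independent, however biased
       each one is. */
    extern int get_raw_bit(void); /* hypothetical digitizer hook */

    int get_unbiased_bit(void)
    {
        for (;;) {
            int a = get_raw_bit();
            int b = get_raw_bit();
            if (a != b)
                return a;   /* (0,1) -> 0, (1,0) -> 1 */
            /* equal pairs carry the bias; throw them away */
        }
    }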

    Of course, you always could use a Lava lamp [sgi.com]
  • More and more, good randomness is critical for important computing functions such as encryption. I think it's entirely reasonable for CPU's or other hardware components to include an analog source of randomness, like a tiny white-noise generator. Do any big chipmakers plan to do something like this? If not, why not?
    Actually, Intel ships an ostensibly good RNG with every Pentium III--I don't think it has /dev/random support yet, but it supposedly can be polled for 75K/s of good random data. *drool*

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Convergence, you're wrong.

    Consider the CDMA model of encryption: know the key, get the data. Don't know the key, you get noise.

    Period.

    If I can not know the key but modify the data stream--and it still decrypts *without complaint*--then something's wrong. Truncation attacks are different--they're essentially selective DoS where the cipherstream suddenly stops being valid--but if PGP doesn't *complain* that suddenly something broke and this was all that could be read--i.e. "there was more that was part of this message, but I can't read it"--then this is a cryptographic failure.

    PGP doesn't protect you against an email server silently deleting your mail--there is no conceptual way it could or should. But silently passing truncated messages means that somebody can reconstruct a message without being able to read it. The fact that avoiding this weakness is as simple as encrypting the one-way hash of the whole message with each independent truncatable block (such that the hash of the decrypted document would fail to match the original hash computed before the message was sent) means that this is a weakness that should have been addressed.
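
    The essence of that check, concretely -- a sketch with hypothetical helpers, not PGP's actual API; hash() stands in for a real digest such as SHA-1, and a full implementation would seal this hash inside each truncatable block:

    #include <string.h>

    #define HASH_LEN 20 /* e.g. SHA-1 output size */

    extern void hash(const unsigned char *msg, size_t len,
                     unsigned char out[HASH_LEN]); /* hypothetical */

    /* After decryption, compare the recovered plaintext's hash with the
       hash the sender sealed inside the ciphertext. A truncated message
       decrypts "cleanly" but fails this check. */
    int message_is_complete(const unsigned char *plaintext, size_t len,
                            const unsigned char sealed_hash[HASH_LEN])
    {
        unsigned char h[HASH_LEN];
        hash(plaintext, len, h);
        return memcmp(h, sealed_hash, HASH_LEN) == 0;
    }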

    Of course, mind you, this attack hasn't actually been verified, and GaryH is the first person I've ever heard speak positively of S/MIME. But you're completely wrong to state that message authentication is *entirely* orthogonal to encryption. Knowing *who* sent a message is orthogonal. Knowing that *this* specific message--which may contain identifying information in the untruncated blocks--was sent isn't.

    It's still tied to the destination aspect to know whether *all* of a given encrypted message reached the destination. I don't particularly accept that any file-oriented cryptographic system should, or needs to, accept selective DoS. It's just too simple to prevent.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • <i>The easiest way to generate random numbers using existing system components is to time one of the mechanical storage drives. Seek time when measured to the nanosecond is random. </i>

    Unfortunately, the widespread layering of memory caches throughout the computer infrastructure (in the OS, in the drive controller, in the drive itself, etc.) prevents this from being as slick a solution as I'd like.
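
    To see why, consider the obvious harvester -- a rough sketch with a hypothetical device path, timing seeks and keeping only the low-order timing bit. Every cache layer in that path makes the timings more deterministic:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[512];
        int i, fd = open("/dev/sda", O_RDONLY); /* hypothetical raw disk */
        if (fd < 0) { perror("open"); return 1; }

        for (i = 0; i < 64; i++) {
            struct timeval t0, t1;
            long usec;

            gettimeofday(&t0, NULL);
            lseek(fd, (off_t)(rand() % 100000) * 512, SEEK_SET);
            read(fd, buf, sizeof buf);
            gettimeofday(&t1, NULL);

            usec = (t1.tv_sec - t0.tv_sec) * 1000000L
                 + (t1.tv_usec - t0.tv_usec);
            putchar('0' + (int)(usec & 1)); /* only the LSB carries jitter */
        }
        putchar('\n');
        close(fd);
        return 0;
    }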

    --Dan
    <i>this bug has absolutely nothing to do with how hard it is to find random data. hell, the solution is to call /dev/random. it's just that the buffer happens to get overwritten due to a far more mundane bug. ferchrissakes, did you even read the article? did any of the moderators read the article??</i>

    Yes, I did read the article. Your annoyance is understood, however.

    /dev/random, rather than the internal entropy engine, was being called in the first place *because* non-interactive entropy gathering is such a difficult problem. PGP had no similar issues when used interactively because they essentially wrote their own interactive entropy gatherer. They couldn't do the same for non-interactive content, so they wrote a (buggy) bridge to /dev/random.

    Obviously, they should have verified that the content coming *out* of that bridge was something other than all ones. But the most interesting thing to me is the similarity of this accident to an airline crash or a school shooting--an intensely rare situation, made notable and newsworthy *by* its rareness. We pay little attention to the moderately common problems (invisible security issues lurking beneath most closed-source crypto), but both the extremely common issues (buffer overflows) and the rare ones (this PGP hack) get lots of press.

    Interesting.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
    • It is already bug-free and perfectly engineered.

    I'm a huge fan of OpenVMS. But this is an absurd exaggeration. Making ridiculous claims makes OpenVMS advocates look ridiculous and, by association, doesn't do the perception of OpenVMS any favors.

    Compaq is regularly issuing ECOs for OpenVMS, many of which have instructions that insist that ALL CUSTOMERS install at once. Each of these ECOs addresses one or more bugs or at least lack of "perfect engineering".

    • For example, VMS does not contain C style buffers...

    A lot of OpenVMS these days uses C and internally has C buffer overrun problems. I could quote you the ECOs, if you're interested. I saw lots of these for UCX 4.x.

    There was even a system service (system API) introduced a while back that used C-style buffers. OpenVMS Engineering later realized the error of their ways and now offers an equivalent system service using the safer string descriptors.

    • It also doesn't have setuid.

    You're right, it doesn't have setuid. If you are referring to the setuid() call, OpenVMS has Persona Services, which are equivalent; instead of setuid scripts/programs, OpenVMS has installed privileged images, which offer a rough equivalent.

    I do happen to believe that installed priv'd images offer a number of advantages over setuid scripts/programs, but they offer about the same functionality.

    I really do think that OpenVMS has a number of inherent advantages over the alternatives in the areas of security and reliability (not to mention scalability!), but we need to be objective. Making claims about it being "perfectly engineered" and "bug-free" is not objective.


    -Jordan Henderson

  • If you want a truly random source, how about measuring the Brownian motion in, say, a nice hot strong cup of tea...

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Yes, a standard CFB or stream encryption algorithm has that weakness. Which is why you usually combine it with an (encrypted) CRC or cryptographically sign it. These are protocol-level issues.

    And the fix is obvious, you said it yourself. If you don't sign the message in some fashion, you deserve to lose. Signing the message prevents most (all?) of these attacks.

    If you do a 'gpg -e', you had better know what the security issues are. Of course, this applies to using ANY encryption software, no matter what settings you use on it. If you don't know your encryption software and what the options mean, you shouldn't be using it.

    BTW, just including (and encrypting) the hash of the decrypted message isn't enough to protect against these attacks.

    If the message is for multiple recipients, recipient A could decrypt the message, alter it, compute the correct hash for the altered message, and then repackage and send the altered message along to the other recipients, who will accept it as legitimate. To prevent this, the hash of the correct message should be 'signed' in some fashion that only the original sender can produce. This describes GPG's 'sign' option, which we know works.

    Signing and encryption are orthogonal features, needed in different situations. If I am sending details of an auction to N people, I don't care who reads them, but I do NOT want them to alter the messages sent to the other recipients. If I just encrypt the message but leave it unsigned, no one can read the message, but they may alter it. (Admittedly, not very useful for communication, but useful for storage--for example, storing private stuff on a hard drive/floppy.) Finally, I may encrypt and sign the message.

    You could consider it a user-interface issue. Maybe GPG should (by default) sign the message whenever the user requests encryption?

    [[BTW, if you want to take this conversation to email, I'll be happy to. Unfortunately, my email address no longer works; I can email you.]]

    <i>If the message is for multiple recipients, recipient A could decrypt the message, alter it, compute the correct hash for the altered message, and then repackage and send the altered message along to the other recipients, who will accept it as legitimate. To prevent this, the hash of the correct message should be 'signed' in some fashion that only the original sender can produce. This describes GPG's 'sign' option, which we know works.</i>

    Hadn't considered the multiple recipient problem when it came to unsigned hashes.

    But, just as strongly, you haven't considered the reality of PGP allowing me to receive mail from untrusted individuals with a modicum of cryptographic security. Segment out your security scheme--when you're the receiver, you can't control who the sender transmits to. When you're the sender, you can't control who the receiver retransmits to. But if you, as the receiver, can trust that the sender hasn't given away their secret (the message, in this instance) to anyone else but you, truncation detection through hashes or anything else lets you recognize when the message/secret you receive is incomplete.

    That's valuable--you're able to receive non-multicast messages without concern for the integrity of that message! Essentially, both the verification key and the message itself get condensed down into the content of the message. Presumably whatever it says authenticates the author, provided the message is complete.

    Once the security architecture is formalized, this property just can't be suppressed. It's not ideal by any means--you can't extend the trust you've established from one message to any other, as you can with stored private signing keys--but the arbitrariness of the trust is identical, down to the fact that a truncated key offered for verification purposes had best not work either (here's my 2048-bit verification key, oops, truncated to 256 bits...)

    Go ahead and email me if ya like.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
    Damn, you're right :) It's a new bug *frown*. I guess I should probably get rid of the loop, but without knowing exactly what the procs below the read do, I can't be sure what's appropriate. Rats.

    <i>DES was insecure for YEARS and the government knew it and deliberately did nothing</i>

    No, it wasn't. DES was, and remains, secure for a suitable key length. There are no known bugs in DES.
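
    (For scale: DES's 56-bit key gives 2^56, about 7.2 x 10^16, possible keys. That keyspace is small enough that the EFF's Deep Crack machine brute-forced a DES key in a matter of days back in 1998--which is why "suitable key length" in practice means Triple-DES. Exhausting a small keyspace is not a flaw in the cipher itself.)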
