GCC 3.0 Released

Phil Edwards, GCC developer, wrote in to say: "The first major release of the GNU Compiler Collection in nine years, GCC 3.0, is finished. There is a long list of new features, including a new x86 back-end, a new C++ library, and a new C++ ABI, to pick my three favorites. Note that the GCC project does not distribute binary packages, only source. And right now the server is heavily loaded, so if you intend to get the source tarball, please /. one of the many ftp mirror sites instead. Plans for 2.95.4 (bugfix release), 3.0.1 (bugfix release), and 3.1 (more user-visible features) are all in progress." MartinG points to this mailing list message announcing the upload.
  • by Anonymous Coward
    Yes. See the changelogs.
  • by Anonymous Coward
    My guess is they are focusing on features, compatibility, and being free as in beer and speech rather than heavily optimizing code and processes at this time. On the other hand, the good programming practices they seem to be embracing are steadily bringing performance increases. Sure, it may not be up to MSVC in speed, but you can bet that given enough time and versions it will be.

    Since you seem to have a technical inclination you might want to look at the source and see for yourself. Even a grep/diff between the present and last two versions may prove enlightening.

    pingmeep
  • by Anonymous Coward
    Try turning off optimization on files like that. Every C++ compiler I've used on UNIX systems is painfully slow at optimizing template code.

    On the other hand, support for templates in MSVC++ is notoriously poor. For one thing, the MS compiler & linker don't support automatic template instantiation. You may not have noticed this because the IDE does the bookkeeping for you and generates the template declarations needed by the compiler. For some reason, most compilers are faster when you turn off automatic instantiation and explicitly list the template instances you need.

    Also, perhaps the MS compiler isn't performing some of the same optimizations on template code.
  • by Anonymous Coward

    GCC, as most other compilers, bootstraps itself, that is, a small assembler program compiles a subset of the language in which a compiler for the whole language is then implemented (xgcc).

    No, it doesn't. That would be grossly nonportable. It uses your existing C compiler to compile itself, without optimization, and then compiles itself with that (to get the benefit of its optimizations &c); then it recompiles itself with *that* and compares the last two. If you don't have a C compiler, you can't build GCC (equally, if you don't have an Ada compiler, you can't build GNAT).

  • by Anonymous Coward on Monday June 18, 2001 @05:39AM (#143889)
    2.96:

    hello world
    0.01user 0.00system 0:00.00elapsed 142%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (52major+9minor)pagefaults 0swaps

    3.0:

    hello world
    0.01user 0.00system 0:00.00elapsed 125%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (31major+11minor)pagefaults 0swaps

  • GJC [bell-labs.com] is indeed cool, and it had the name before GCJ [gnu.org]. Watch those acronyms. :)

    BTW, gcj and gjc work together. :)

    Well, it seems like all the upcoming Linux distributions are going to .0 now (Redhat 8.0, Mandrake 9!, SuSE 8.0), as none of this stuff is backward compatible with 2.x (gcc 2.96 is not compatible with either 3.x or 2.x).

    Has anyone done any performance tests? Some benchmarks, maybe? It would be really interesting to see this GCC against the upcoming commercial Intel C/C++ compiler.
  • You're telling me. About 3 years ago I wrote C++ code and wanted to use the standard stringstream class. But all they had was strstream, a rather buggy knock-off of it.

    Now, stringstream still isn't in the standard library in Red Hat 7.1! One can only say that it's taken too long.

    ---
  • Which features do you need? The export keyword is not supported, but automatic instantiation works very well.

    I'm greatly anticipating the new C++ library. Finally we will have stringstream! For me the C++ library was the largest hole in the C++ implementation. The compiler itself has been quite good for some time, feature-wise.

    --

  • by David Greene ( 463 ) on Monday June 18, 2001 @09:16AM (#143898)
    For one thing, the MS compiler & linker don't support automatic template instantiation. You may not have noticed this because the IDE does the bookeeping for you and generates the template declarations needed by the compiler.

    This is not a bad thing. In fact, it is a very valid (I'd say good!) design decision. Explanation below.

    For some reason, most compilers are faster when you turn off automatic instantiation and explicitly list the template instances you need.

    The reason explicit instantiation is so much faster is that the compiler doesn't have to compile your code twice. Automatic template instantiation requires some sort of support outside the compiler proper. For all the gory details, I recommend Stan Lippman's Inside the C++ Object Model. It's a little out of date and inaccurate (or more properly, misleading) at times, but for anyone interested in why C++ works the way it does and what sort of decisions the compiler makes when generating code, it's a great book. It dispels many of the common myths about C++'s performance and makes an honest evaluation of the cases where performance is negatively affected.

    But I digress. One of the most popular strategies for automatic template instantiation involves some sort of "collector" program. The basic idea is to collect all the object files that go into the final link and look for undefined symbols that refer to template code. The GNU collect2 program does this for g++. Once the symbols have been identified, the compiler needs some way of knowing how to recompile the source files that contain the template elements. Strategies include using control files generated by the compiler and collector, entries in the object files themselves (strings or symbols are common) or a combination of the two. Other strategies are possible as well. The driver script (the IDE in VC++) gathers this information and reinvokes the compiler to recompile the source files containing the needed template code, passing flags to tell the compiler to instantiate particular templates.

    After having implemented some of this, I have to tell you that it is all a tremendous pain in the neck. It's also quite, quite convenient and necessary for the user. :)

    As for the MS IDE, that's just another strategy for handling the problem. No compiler that I know of fully handles automatic template instantiation by itself. The closest that a compiler could come to this would be to aggregate the collector actions into the compiler as a separate phase. This is really no different than running a separate program, and the "compiler" becomes the driver script (think g++), with the compiler proper (i.e. translation and transformation) being but one (usually, more than one) phase of compilation.
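
    To make "explicitly list the template instances you need" concrete, here is a minimal, self-contained sketch (the function name is purely illustrative):

    // square() is defined once; the "template ..." lines below name the
    // instances we want generated in this translation unit, so no automatic
    // instantiation machinery (collect2 and friends) has to hunt for them
    // at link time.
    template <typename T>
    T square(T x) { return x * x; }

    template int square<int>(int);          // explicit instantiation
    template double square<double>(double); // explicit instantiation

    int main() { return square(2) == 4 ? 0 : 1; }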

    Every C++ compiler I've used on UNIX systems is painfully slow at optimizing template code.

    Is this not true under Windows? I'm curious, as they should have many of the same problems. Optimizing template code is expensive because there is a lot of it and most of it is inlined. Inlining is not as trivial as one might initially expect, and it has large implications for transformation (optimization). Inlining usually greatly expands the size and scope of functions that are transformed. There are more nodes, more symbols and more analysis bookkeeping to handle. Many compiler algorithms have complexity of N^2 or worse (lots are NP-complete!) so things get dicey as code size expands. Strangely enough, this is also why transformation can speed up compilation -- it often removes nodes and symbols from the program!

    --

  • by Bill Currie ( 487 ) on Monday June 18, 2001 @09:03AM (#143899) Homepage
    And gcc comes with its own compiler (also not built), just to compile the compiler.
    That's not particularly accurate. There is source for one, and only one, C compiler in gcc (and one each of the other languages). The way the gcc build process works (esp when using "make bootstrap", not sure if that's default now or not, been a while) is to:
    1. build just the C portion of GCC using the system compiler, with no optimisations
    2. move the freshly built gcc aside (bins and object files) into ./stage1
    3. using the compiler in stage1, build all of the selected language portions of GCC, this time with optimisations.
    4. move this second compiler (bin and .o) into ./stage2
    5. using the gcc in stage2, build a third copy of gcc, with the exact same optimisations.
    6. compare the third copy of gcc with that in stage2; if they differ, bail out
    7. build additional libs (stdc++, iberty, etc)
    8. (if specified) install libs and the stage3 compiler
    A slow, painful process (esp when doing porting work), but it ensures that GCC is at least good enough to build itself, as the installed compiler is an exact copy of the compiler used to compile it.

    Bill - aka taniwha
    --

  • There's a nifty preprocessor hack you can use to get around this problem in compilers that don't properly support the standard:

    #define for if(0);else for

    This causes the scope of the for control variable to be correct, without affecting other control flow semantics. The only practical disadvantage is your compiler may warn about the constant value in a conditional. And your sensibilities may be offended by using the preprocessor to redefine a keyword. :)
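
    For anyone who hasn't run into the problem, here is a minimal sketch of the workaround in action (a toy example; and, as noted elsewhere in the thread, redefining a keyword like this is technically off limits in portable code):

    #include <stdio.h>

    // The hack: "for" becomes "if (0) ; else for", so a variable declared in
    // the for-init lives only inside the else branch and goes out of scope
    // when the loop statement ends, matching the standard rule.
    #define for if (0) ; else for

    int main() {
        for (int i = 0; i < 3; ++i)
            printf("%d\n", i);
        // On a compiler with the old scoping rule, this second "int i" would
        // be a redefinition error without the macro above.
        for (int i = 0; i < 5; ++i)
            printf("%d\n", i);
        return 0;
    }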

  • by emil ( 695 ) on Monday June 18, 2001 @06:47AM (#143901)

    AFAIK, gcc requires a *K&R* C compiler, as documented in the first edition of The C Programming Language. It need not support function prototypes or the void type (I think).

    On UNIX systems that do not natively support and include gcc, one uses the system's C compiler to generate xgcc, which is GNU C (but not compiled by GNU C). One then uses xgcc to generate a GNU-compiled gcc. I don't know why xgcc is not normally installed and used, but I assume that it would be an ease-of-debugging issue (and you can also debug gcc-optimized code, which most vendor compilers will not do).

    HP-UX natively includes a K&R (non-ANSI) C compiler. It is almost useless, but it will successfully compile gcc. On most other commercial UNIX systems, if you lack a compiler, you must rely upon someone who has a compiler to generate a version of gcc for you (which accounts for the popularity of packaged gcc versions on many platforms). This can also be complicated by licensing of the system include files and libraries.

  • What about the following, surely it's good ?

    #pragma once


    It's not good, and don't call me Shirley.

    Seriously, though, the gcc approach to this is, I think, better, if rather more verbose and awkward-looking. As I understand it, gcc looks for include guards, code like

    #ifndef _MYFILE_H
    #define _MYFILE_H
    ...
    #endif

    and if it finds it, it treats that like #pragma once for the given file. So there's no incompatibility with compilers that don't support #pragma once. Meanwhile, if for some necessary evil you need to include MyFile.h again, simply #undef _MYFILE_H and go.
  • In short, the 2.9x line (which Redhat admittedly bastardized a bit by grabbing a snapshot and calling it 2.96) will still be fixed for bugs.
    Actually, Red Hat GCC 2.96 is much closer to GCC 3.0 than to GCC 2.95.3, except for the C++ library, which is basically the same as in 2.95.3. This puts Red Hat in an odd situation: they will have to track the 3.0 compiler and the 2.95 library if they want to remain compatible with their own 2.96 release. However, they employ many of the best GCC engineers, so if anyone can do it, it will be them.

    I'd prefer them to switch to 3.0 at the earliest possible opportunity, though.

    It also compiles with GCC, with a warning. The code used to be correct, but the standards committee changed the rules.
  • Which means you must compile your C++ applications with the same GCC version as your C++ libraries.
  • I believe GCC 2.8 was only fully merged back into EGCS in GCC 3.0.
    Yes. That is the point of the "gcc 2.96 Red Hat fiasco". The C++ library and ABI have been changed in 3.0 in order to conform to the C++ standard, as well as to the new cross-compiler C++ ABI standard, and to be much more efficient. The problem with the Red Hat release was that it was released half-way through that process, so it would be both forward and backward incompatible with the official FSF releases.

    In most other ways, the unofficial gcc 2.96 is an improvement over 2.95, and for C code compatibility is as good as between any two releases of gcc. Mostly GCC 2.96 catches a few bugs that 2.95 failed to notice.
  • by Per Abrahamsen ( 1397 ) on Monday June 18, 2001 @06:22AM (#143912) Homepage
    Programs that use iostreams will tend to be slower, because of the new (ISO mandated) template based iostream implementation. In particular, a "hello world" program will tend to compile much slower, since it spends most of its time in the header.

    Other programs can compile much faster or much slower, depending on what the bottleneck used to be, and what features they exercise.

    There are several implementations of precompiled headers for GCC, which are likely to give a large boost in compilation speed when one of them is selected for inclusion.

  • by Per Abrahamsen ( 1397 ) on Monday June 18, 2001 @07:10AM (#143913) Homepage
    GCC is basically a new name for EGCS.

    I don't know about PGCC, but since GCC 3.0 has a brand new ia32 backend with focus on Pentium II performance, chances are that PGCC is no longer relevant.
  • I can't quite tell if this post is serious or not, assuming it is:

    It may be great for GCC that RedHat provided them with a wide-scale test of their software, but what about users of RedHat who were stuck with a buggy version of GCC for months?

    --

  • by Sabalon ( 1684 )
    Good point. Since at some point the first GCC had to be cross-compiled or bootstrapped from a non-gcc, non-gpl, and therefore probably commercial/proprietary compiler, does that mean that all of gcc is non-gpl derived?

    So, since the original gcc was non-gpl derived, then everything built with gcc, while it may be gpl derived, is truly non-gpl derived.

    So, if you wanna build an application based on gpl code and not distribute it does that mean it's okay?

    (Yes...for the GPL or death people, this is a joke)
  • Wow: they're really going to fix that bug? They've known about it for several versions of MSVC (4, 5, 6), but seemed unwilling to change it. Fixing it will surely break a lot of code and presumably require a new compiler directive to build old code unmodified. This bug has been irritating me for a long time as I've been writing code that must also compile on the Mac, and every now and again I slip up and forget to declare a loop variable before the loop, and then try to use it after the loop.
  • by Malc ( 1751 ) on Monday June 18, 2001 @06:27AM (#143918)
    From http://gcc.gnu.org/news/inlining.html (linked to in the list of new features):

    "As a result, the compiler may use dramatically less time and memory to compile programs that make heavy use of templates, such as C++ expression-template programs. One program that previously required 247MB of memory to compile, and about six minutes of compile time, now takes only 157MB and about two minutes to compile. "


    I remember trying to build TAO (the ACE ORB) a few years ago. Under Linux using GCC, a couple of files consumed all available virtual memory (requiring 250MB). When I had sufficient memory, it would take my 128MB machine 45 to 70 minutes to compile those particular files (lots of swapping). The same files under Windows with MSVC would take less than 20MB and thus compile in under a minute. What is GCC doing that requires so much memory? Is it me, or is this new inliner (that is such an improvement) still a memory-hungry hog? Why? (Technical answers preferred.)
  • C++ really doesn't have much to do with it. gcc's C and C++ front-ends are kept separate and have been for some time.
  • C99 does allow declarations to be mixed with statements in any order. See section 6.8.2 of the standard.
  • by Lally Singh ( 3427 ) on Monday June 18, 2001 @06:18AM (#143924) Journal
    I'm really glad to hear about the new C++ libraries. The compiler's been pretty good about compliance so far, but the libraries have sucked for quite a while now. Glad to see the improvement...

    --

  • I'm sure they can do the same thing gcc does, which is produce a warning message and then compile the code as it was before.

    The Irix C++ compiler has this problem as well, but worse than VC++ because it produced an error if you declared a variable more than once (apparently VC++ silently allows this, which is why I was not even aware it did the scoping wrong until I tested it explicitly). The Irix problem meant that two for loops with the same "local" index variable would not compile, and since we also had compilers that obeyed C++ rules, we had to insert the "int" declaration before the first for loop. This also broke a macro that relied on a local variable.

    Fortunately the switch "-LANG:ansi-for-init-scope=ON" turns it on (they seem to have figured out how to be even more verbose than gcc, sigh...) I just recently learned you turn this off (ie switch to C++ mode) with

  • The first thing you do when you've released something is work your ass off to get it right.
    This isn't supposed to take 3 months like M$ sometimes seems to think it should.

    The way programmers are pushed today the first x.0 is mostly more of a beta than a release.

    Not a problem really if you get more time for bugfixing, but since the company behind them doesn't earn money unless it's out there, we won't see a change before that changes.

    // yendor

    --
    It could be coffee.... or it could just be some warm brown liquid containing lots of caffeine.
  • VC++ 5.0 and 6.0 shipped with an old iostream library (iostream.h etc, global namespace) and a mostly standard C++ library including iostreams. The standard C++ library was provided (indirectly) by Dinkumware. Due to a legal dispute between Dinkumware and the company in the middle, this wasn't updated in 6.0 and so was not updated to match the standard (in any case, the VC++ compiler could not support all library features). This should be fixed in VC++ 7.0 (currently in beta, AFAIK).
  • Using macros to redefine keywords generally results in undefined behaviour (standard paragraph 17.4.3.1.1.2), so one's sensibilities should be offended. As a compiler-specific workaround this macro definition is obviously a necessary evil, but it should never be included in code that is meant to be portable.
  • The VC++ debugger will certainly show the contents of a std::string if you use the supplied standard library. I'm not sure how you would expect it to support replacement libraries.
  • at least they say ahead of time that they will be releasing a "bug fix" version. Not like other programs that don't release bug fix versions for years ;-)
  • by Ian Schmidt ( 6899 ) on Monday June 18, 2001 @06:33AM (#143935)
    The very reason GCC 3.0 is out now rather than in 2005 is precisely because RH "jumped the gun" and submitted hundreds of bugfix patches to GCC 3.0 in the process. Meanwhile Redhat's GCC 2.96-81 is less buggy in my experience than 2.95.2 and the new features are great.
  • Yeah, but now when we fix code we'll have a reasonable expectation that the compiler feature that's complaining works (less guesswork as to whose bug it is).
  • No, his comments on that are quite straightforward. GCC has never been binary compatible with itself across versions before. Redhat 7.x isn't binary compatible with *anything* because it uses glibc 2.2. 2.95.2 doesn't support IA64, one of the supported architectures of Redhat. Only needing to support one compiler across all architectures is much simpler.

    I believe that those reasons pretty much account for both his reasons why it doesn't matter and why Redhat went with the decisions.
  • kgcc was included because the kernel was broken, not GCC. You can even look it up in the kernel mailing lists. Linus considered gcc 2.96 a good way to start getting Linux ready for gcc 3.0. Neither the gcc developers nor the Linux developers could have made it a "policy" that redhat had to include kgcc. Redhat did it because they knew that the kernel wouldn't compile on gcc 2.96 and they had to include a compiler that would compile the kernel.

    Furthermore, even the gcc guys have admitted that using gcc 2.96 (which was really the name of the development branch) sped up development because of the bugfixes that were generated.

    Why don't we all complain about them including glibc 2.2 as well? I mean, that breaks binary compatibility with all the other distributions too.
  • Slashdot already talked about this [slashdot.org]. (Actually the link is stating that 3.0 is rumored to be released soon)
  • Red Hat did ship the older compiler also. They just gave it a different name. If you wanted, there wasn't any problem in using it. If you used the new one, you were getting ready for now.

    (Well, actually for a few weeks from now. This is a *.0 release, so I'll give it a bit of time to settle out, and use 2.95x for now.)

    Caution: Now approaching the (technological) singularity.
    There have been indications that that's what they intend on doing. What I wonder is whether it will be 7.2 or 8.0... my guess would be 8.0 (keeping up with the Mandrakes?).


    Caution: Now approaching the (technological) singularity.
  • by Tim C ( 15259 ) on Monday June 18, 2001 @05:16AM (#143947)
    I'd say they're just being realistic. No matter how good your QA process, the chances of catching and squashing every single bug before release are minimal. The best you can realistically hope to do is catch all the real show stoppers. (Assuming that you actually do want to release the product at some point, that is)

    Having said that, this is the first time (that I can remember) that I've seen an officially-planned x.0.1 bugfix release announced at the same time :)

    Cheers,

    Tim
  • by Raphael ( 18701 ) on Monday June 18, 2001 @05:36AM (#143948) Homepage Journal
    [...] the big question is "how much stuff is this going to break?"

    Not much, hopefully.

    The only major thing that can affect the binary packages is the new C++ ABI. But for plain C programs, there should be no big difference. Most of the Linux programs and libraries are written in C and should not be affected significantly. This could be a problem for Qt and KDE packages, though.

    See also the list of caveats [gnu.org] on the GCC web site.

  • I did not hear anything about a fix/implementation of the precompiled header support.
    It has been discussed many times, and there are some experimental implementations.

    Or is it really so difficult to correctly implement it?
    It is not only difficult to correctly implement it, it is also difficult to agree on what it should do. People say "precompiled headers" but what the hell does that mean? What do you compile header files into? Do you compile a single header file into some kind of "precompiled header object file" (doesn't save you that much but is easy to use), or do you use some kind of per-project database (saves you more but may be more difficult to use), or what?

    Note also that GCC developers use Makefiles, while most commercial Windows developers use an integrated IDE. It is easier to do all kinds of performance tricks when everything is integrated in one big program that you control.

    I know there are many people interested in the problem, and I believe some may actually be working on it. It just isn't that easy if you want something that isn't a kludge. My impression (un-verified) is that many of the existing compilers that support pre-compiled header files use various kludges that may be useful but not necessarily well-designed. A pre-compiled header file solution for GCC has to be something we can live with for many years.

  • by Per Bothner ( 19354 ) <per@bothner.com> on Monday June 18, 2001 @08:44AM (#143950) Homepage
    Er, it's GCJ [gnu.org] (GNU Compiler for the Java [TM] programming language), not GJC. (You're not supposed to call it plain "GNU Compiler for Java" because of Sun's trademark on "Java".)

    We did consider the name GJC (back in '96 when the project was started), but for some reason I don't remember (I think it was trademark-related) we decided on GCJ.

    I'm very glad to see GCJ in a mainstream GCC release, and hope it will finally get the attention I think it deserves.

  • I have a feeling quite a few people are gonna be red in the face over this one.

    I doubt it. There *is* no C99 compliant compiler... and since the C and C++ standards almost always seem to contradict each other in one or two spots, it's unlikely that there ever will be one.

    And it's not such a bad thing - the face="" parameter to the font tag is invalid HTML. I guess the web in general should be red in the face?

    --
    Evan "And I'll invent *another* dozen specs *after* lunch!" E.

  • I didn't even get the compile of 2.4.5 to finish. Died pretty early on in kernel/timer.c

    Good thing I didn't put it on our production server (no I didn't really consider it).

    --
    You've got it backwards. MS VC has it wrong too. The declaration of i in your example is scoped to the body of the for loop. 'i' does not exist after the closing brace of the for-loop.

    This is per the ANSI/ISO C++ standard.
    My problem with gcc's C++ compiler has been that there is no workable debugger for it. Gdb just falls to pieces when you try to debug C++ code (it can't find symbols, it can't handle user-defined operator overloads, sometimes it gets mixed up and gives you the wrong values for variables, etc.). Is there an update to gdb to go with the new gcc that provides reasonable support for C++?
  • Finally, all my keen little utility programs ... will run as fast as OS level stuff

    It could be. If I'm understanding it correctly, many common operations have to be performed at runtime anyway (i.e. checking type safety for downcasts, which is usually the case if you use the collection classes, or checking array boundaries, etc.). Please also consider that critical sections are already implemented natively by the Java environments you can find out there, and I doubt these are going to gain anything...

    There probably will be a speedup because of optimizations (i.e. inlining, or loop unrolling optimized for the specific platform), but I'd be far more cautious before making enthusiastic statements like that.

  • Yuk! Does C99 allow nested scopes at all...

    {
        int i;
        {
            int i;
            ...
        }
    }

    Is this legal, or a redefinition?
  • In the gcc-v3 distribution they've been working for many months.

    Exactly. Now, step back and think about WHEN Red Hat 7.0 was released.... Red Hat made a hard decision. They could either release a 1.5-year-old C++ compiler that did not track the standard, or release the latest version of C++ off of the 3.0 development snapshots. They did the latter because they felt it would serve more of their customers' interests, and they had the technical staff to back that decision up.

    Then, they got burned from both sides. The folks who were writing non-standard C++ complained because their programs no longer compiled. The folks who were tracking the GCC development and the ANSI standard complained that it did not go far enough.

    The kicker is that, had they been Sun, releasing ACC, people would have groaned because their code broke, put a bunch of #ifdefs in their code (or modified their autoconfigs) and gone on with life. But, because we have a porthole into the development process, we feel we're qualified to second-guess the distribution-creation process. Personally, I'm impressed that Red Hat (or Debian or SuSE, etc) can package a distribution at all, given the huge number of projects and no real coordination between them.


    --
    Aaron Sherman (ajs@ajs.com)
    This is just not true. GCC 2.95 was horribly out of date with respect to the ANSI C++ standard. The problem was always that GCC was one of the first C++ compilers, and so it's been evolving along with the C++ language. So, one of the major thrusts in 3.0 (or rather, something so radical that it was not entering the 2.x series) was to incompatibly change much of the C++ implementation to match the ANSI C++ standard (hopefully this will freeze most major work on GCC's C++ so that people can start to rely on source-compatibility with future versions).

    Red Hat's 2.96 was a pre-release of 3.0, which they released because they felt that their customers needed many of those features, and that back-fitting them to 2.95 would be too large a task for too little return to the GCC effort (which, recall Cygnus is a MAJOR participant in).

    --
    Aaron Sherman (ajs@ajs.com)
  • by ajs ( 35943 ) <ajs.ajs@com> on Monday June 18, 2001 @06:31AM (#143967) Homepage Journal
    Of course, they will do what they've always done (and every other commercial vendor with large customers does). They will distribute "obsolete" software until their next version, when they will go with the almost-latest-and-greatest that they can get to work.

    You have to understand, Red Hat bowed to their customers. Many of their C++ customers told them they needed ANSI C++ compliance badly. GCC 2.96 offered that.

    Red Hat has a history of working hard on the compiler, and distributing a custom version. They were the first distribution to ship egcs (remember, 6.2 ran egcs for the C++ compiler). They also did a lot of work on making egcs work for the Linux kernel.

    --
    Aaron Sherman (ajs@ajs.com)
  • by bkuhn ( 41121 ) on Monday June 18, 2001 @05:35AM (#143971) Homepage
    For those of you who prefer a non-hacker announcement of the release, a press release is available [gnu.org].
  • Which is why I always code these sort of loops as:

    int i;
    for (i = 0; i < x; i++) {
        ...
    }
    for (i = 0; i < x2; i++) {
        ...
    }

    Just to avoid problems on either compiler.

  • by hattig ( 47930 ) on Monday June 18, 2001 @05:34AM (#143975) Journal
    Has anyone done any performance metrics using code generated by the "all new singing dancing x86/PPC/ARM backends"?

  • My purely speculative guess is that there should even be speed up in non-natively compiled code (code *you* write), because the entire core API can be natively compiled, as opposed to being in Java itself (as Sun's is). This would mean you wouldn't break WORA, but once you flip the native toggle on the GCJ runtime, you should see vast improvements in speed.
  • Is it safe to compile the Linux kernel with GCC 3.0 ?
  • by chrysalis ( 50680 ) on Monday June 18, 2001 @07:07AM (#143980) Homepage
    I've got some troubles with the newly compiled kernel. Sometimes, the kerboard blo

  • There are some patches and plans for this, but they were put on hold until 3.1.
    There are two C++ libraries: v2 and v3. Only strstreams were available in v2. v2 has been dead for a couple of years now; it gets bare-minimum maintenance while everybody works on v3.

    GCC 2.9x, including the RH versions, ship with v2. Many people have written their own implementation of stringstreams for use with 2.9x.

    v3 has had stringstreams for quite a while, but GCC 3.0 is the first official release to ship with v3.


  • My girlfriend just asked me this question. Here was my answer:

    An ABI for a platform/language/environment combination specifies things like the byte sex, the location of certain global variables (global offset pointers, etc), which registers are used for passing parameters during a function call, whether it's the calling function or the "callee" function that saves and restores register contents during function calls (and which registers can be ignored), the order of parameters passed on the stack... basically all the things that have to be agreed upon at the bitwise level in order for you to use Compiler A to make foo.o, and me to use Compiler B to make bar.o, and then to be able to link foo.o and bar.o together successfully. Usually for C++ everything all had to be done with the same compiler because there wasn't an ABI that everyone could agree on. Now various vendors can use various compilers and this stuff will "just work" on IA-64 families.

  • by British ( 51765 ) <british1500@gmail.com> on Monday June 18, 2001 @05:09AM (#143984) Homepage Journal
    So how did they compile the first version of the GCC compiler? Seeing as there was no prior GCC compiler?
  • Does Jikes do array bounds checking by default? If not, then there's not really much difference.
    ------
  • by alannon ( 54117 ) on Monday June 18, 2001 @06:57AM (#143987)
    This reminds me of the currently popular theory of how life arose, by simply taking constituent elements, heating them up and zapping them with lightning until they form amino acids and eventually proteins.
    My theory, however, involves a freak accident involving a cage full of monkeys, a box of hand-held hole-punchers, a large stack of stiff paper and a punchcard reader.
    Troll? How is this a troll? Sorry, I'm confused; I thought it was just common sense.

    And isn't it because of the policy of the gcc developers, and of the developers of the Linux kernel itself, that Red Hat also had to supply their "kgcc" secondary system-space compiler?

    Or is there someone out there who thinks the people are obligated to support the "gcc-2.96-REDHAT" stuff?

  • by Velox_SwiftFox ( 57902 ) on Monday June 18, 2001 @05:23AM (#143992)
    So... since Red Hat jumped the gun and grabbed development releases using libs that apparently gcc will no longer be compatible with - and according to what they've said is their policy that they won't change to incompatible libs in mid-major-version - I wonder if this means RH 8.0 will come out simultaneously with 7.2, or if they are going to just skip now to 8.0?

    Or if Red Hat users will now be forced to continue to use what has become obsolete software?

    How does the code generated by the new x86 backend perform in relation to previous compilers? I know icl (the Intel C compiler) is coming out for Linux, and from the demos of it I've used on Windows, it looks pretty damn spiff. (Stupid 30 limit...) Compile speed is fast too. If you look at the icl docs, you'll find that the optimizer on this thing is insane; it does all sorts of inter- and intra-function optimizations and pre-pipelines code and everything. I have only a vague understanding of what the hell that means, but it sure SOUNDS fast. Anyway, I'll try out the Windows version and see how it performs in relation to gcc. I'll post benchmarks when I get the chance. I'd appreciate it if someone would post Linux benchmarks as well, because as much as I like Windows, I can barely stand not having bash. (Ah... BeOS, where art thou?)
  • by bconway ( 63464 ) on Monday June 18, 2001 @05:09AM (#143995) Homepage
    Guess what? Everyone that complained about GCC 2.96 being broken (without reading http://www.bero.org/gcc296.html [bero.org]) despite the fact that their code wasn't C99 compliant will find that their code STILL WON'T COMPILE. Now you can't complain that your code won't work because it's a developmental compiler; you'll actually have to fix it. Numerous examples of this are listed at the above URL; I'd highly suggest you try it out. I have a feeling quite a few people are gonna be red in the face over this one. ;-)
  • > So the int i should be in the outside scope.

    No version of the C language before C99 allowed a declaration in the middle of a block (i.e. between two statements), although C++ does. Your explanation assumes that such a thing is permitted in C.

    C99 specifically addresses how constructs like "for (int i=0; i<10; i++) STMT" should be handled; it specifies that the compiler should treat it as if there were a new enclosing block around the "for ... STMT" that began with the declaration of the variable(s) in the init-part of the "for" loop.

    So C99 says that:
    for (int i=0; i<10; i++) STMT
    should be transformed to act like (in older C standards):
    { int i; for (i=0; i<10; i++) STMT }

    C99 makes it unambiguous indeed.

    (I can't speak towards the ISO C++ standard, but I would imagine that they also stipulate that the scope of the defined variables ends after STMT above.)
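
    Written out as a complete (hypothetical) translation unit, compiled as C99 (e.g. with something like gcc -std=c99), the rule looks like this:

    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; i < 10; i++)   /* i is scoped to the loop */
            printf("%d\n", i);

        /* printf("%d\n", i); */       /* error: i is no longer in scope here */
        return 0;
    }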
  • AFAIK, gcc requires a *K&R* C compiler, as documented in the first edition of The C Programming Language. It need not support function prototypes or the void type (I think).

    No, it does not require such a compiler; it is required to be bootstrappable with such a compiler.

    And K&R did have void. They didn't have pointer to void.

    On UNIX systems that do not natively support and include gcc, one uses the system's C compiler to generate xgcc, which is GNU C (but not compiled by GNU C).

    Not really. xgcc is the name of the result of the stage1 and stage2 bootstraps; the stage2 one is created by the stage1 one (i.e. by GCC).

    I don't know why xgcc is not normally installed and used

    It effectively is, if you type "make install". The bootstrap process just ensures that the result of the stage3 bootstrap has object files identical to those of the stage2 bootstrap, which formed xgcc. In other words, that optimized GCC compiles itself into the same optimized GCC - a consistency check. Those binaries are what get installed, so it is effectively the same GCC.

    but I assume that it would be an ease-of-debugging issue (and you can also debug gcc-optimized code, which most vendor compilers will not do).

    Nothing to do with it. Everything after stage1 is a consistency check.
  • by rsw ( 70577 )
    A specific and interesting example, from a talk Dennis Ritchie gave a while ago:

    Imagine that you want your compiler to support a "vertical tab" escape, '\v'. When you write the compiler, you'll have some statement somewhere that reads a character and decides what to do with it, and in that statement you'll have something like:


    case 'v':
        return 0x0B;


    Compile this code, and suddenly your compiler can recognize the vertical tab character. Now, since it can, you can simplify the above code. You modify it to:


    case 'v':
        return '\v';


    You compile this code with your new compiler, and, because you can recognize the '\v' character escape, everything works. Now you can just replace the original source with the above source and, using your compiler, it will compile. Strangely enough, however, nowhere in the source is it evident that '\v' == 0x0B!

    This can be applied in nefarious ways, as well. Let's imagine that I want to install a back door in the 'login' program. I can write the code to give 'login' a backdoor, but a code audit will show that it's there. Instead, I can modify the C compiler so that when it recognizes that it's compiling 'login' it will modify the code to have a backdoor. However, as above, an audit of the C compiler source will show that this is going on.

    The solution? I modify the compiler such that when it recognizes that it is compiling a compiler, it adds in the code that recognizes the 'login' binary being compiled and adds a backdoor, and the code that recognizes a compiler and makes it modify compilers in the proper way. Then I compile the new compiler and replace the source with the old, unmodified code.

    Now anyone can audit the source code for the compiler and find that it's perfectly clean. If they compile it on the system with the modified compiler, however, their compiler will have both the 'login' backdoor and will make compilers that have the backdoors included. All from clean source.

    The moral of the story? It doesn't matter how trusted your source is, you always place an implicit trust in lower level utilities unless you're writing opcodes for the processor directly (and even then, a processor microcode virus isn't so far-fetched that you can completely disregard the risk). That's why your C library and your compiler, while seemingly unrelated to system security, are actually a critical part of your system's integrity.
  • by selectspec ( 74651 ) on Monday June 18, 2001 @06:11AM (#144000)
    C99 standard [dkuug.dk]. Overview [kuro5hin.org].The Honorable Dennis Ritchie (father of C) on C99 [e-businessworld.com]
  • Actually, (if I am reading the C99 conformance page right), variable sized arrays are broken. I guess that is why the release notes said 3.0 supports _almost_ all of the features of 2.95.x (because 2.95.x did support variable sized arrays). Now that I know gcc has a typeof keyword...evil gcc! How dare it tempt me to write non-standards conforming code! (of course, the only use I see for typeof is for safe vararg functions...typeof(va_peek(arglist)) foo = va_next(arglist)...). Well, back on topic I guess (wait, this entire message is about gcc so hah!)

    -------------
    From my lazy and half-hearted poking around GCC's web page, I didn't find information on the ABI, let alone why it is important to put into the compiler/compiled binaries. So what is an ABI, and why should people using the 3.0 compiler care?
  • by kdgarris ( 91435 ) on Monday June 18, 2001 @06:28AM (#144004) Journal
    The fact that RedHat used an earlier snapshot of this compiler allowed for extensive testing early in the development process, resulting in many bug-fixes being made to the snapshot as well as to the pre-3.0 GCC development code.

    I'm sure this release came about sooner and with a lot fewer bugs due to Redhat's move to use the earlier snapshot in their distro.

    -Karl
  • Wait... is the monkey-puncher theory your theory of how life arose, or of how gcc arose?
  • I believe it does. Jikes (which I haven't used for about a year) was a quite amazing piece of code. The compiler itself was blazingly fast, and gave excellent error messages.
  • by DrCode ( 95839 ) on Monday June 18, 2001 @09:49AM (#144009)
    I tried GCJ about a year ago with a large parser I'd written in Java. It was a pure command-line program, with no GUI at all, and I compared a GCJ-compiled version with one compiled and running with IBM's Jikes compiler/JRE.

    To my surprise, the Jikes version ran much faster, about 2X, than the native code. Only when I recompiled with GCJ with the option to skip array-bounds-checking, did the native version run at about the same speed as Jikes.

    It was a snapshot, which was then QAed and patched selectively for some time before release - it was better than any other alternative at the time, as the compiler we shipped before was rather old, and 2.95.x was buggy. We also got support for IA64, so we didn't have to use lots of different compilers for different architectures.

    The only thing wrong about what happened was that we didn't properly communicate that this was a Red Hat, not an FSF, release. Other than that, it showed the power of free software by allowing us to do what we felt was needed and not being locked in - nothing wrong about that. It has served us well for two releases now - in the future, we will eventually move to gcc 3.0(.x?), but this was obviously not an available compiler back then.

    PS: Since our initial release, Mandrake has done the switch as well.

  • This post [google.com] lists the non-compliance issues that VC7 will ship with. The for-loop scoping problem isn't there.
  • Huh? IIRC, the ANSI standard, which was later adopted as the ISO C standard with no changes apart from section numbering, commonly known as C89, requires that comments are replaced by a single space, making the /**/ token splicing specifically not work if you were using your compiler in a strictly conforming mode.

    (Although a number of compilers still provided a non-strictly-conforming 'traditional' mode which would allow such constructs, along with various other bits and pieces that we (and our code) had all got used to)

    K.
  • by Karellen ( 104380 ) on Monday June 18, 2001 @02:59PM (#144017) Homepage
    ABI == Application Binary Interface.

    It's the format for putting things like function names in object files so that the linker, when fixing up function calls across object files, can match the call with the correct function.

    While this isn't a problem in C (just use the function's name for christ's sake), C++ allows overloaded functions; multiple functions with the same name but different parameter lists. For the linker to match the correct call to the correct function, the parameter list needs to be munged into the function name stored in an object file.

    The new way of doing it is generally less clunky and takes up less space in your object files than the old way. But it is incompatible with the old way. :(

    (Note that this only matters where GCC is being used as a native compiler to compile files that only need to be linked with each other. If compiling for a platform where gcc is not the native compiler (e.g. using GCC on, say, solaris, alongside the default compiler) you need to use the ABI defined for that platform to allow your object files to be linked with all the other object files you have that were compiled with the native compiler.)

    Yeah, it's more than 100 words. Sue me :)
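
    To make the munging concrete, a tiny sketch (the mangled forms shown are from memory and only illustrative; the exact strings depend on the compiler and ABI version):

    // Two overloads of one name; each needs its own linker-visible symbol,
    // so the parameter types get encoded ("mangled") into the symbol name.
    void print(int x)    { }   // roughly print__Fi (old g++) vs _Z5printi (new ABI)
    void print(double x) { }   // roughly print__Fd (old g++) vs _Z5printd (new ABI)

    int main() { print(1); print(1.0); return 0; }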
  • If ever there was a piece of software where I want to wait until the .01 release, this is it.

    Normally, I'll snag most anything off the server to live on the bleeding edge, but not this time...
    --

  • by marble ( 120353 ) on Monday June 18, 2001 @05:41AM (#144022)
    The C Programming language was originally standardised in 1989, and this was known as C89 (and also as C90 - long story...) In 1999, the C language was updated with some new features and library functions. This is now known as C99.
  • by marble ( 120353 ) on Monday June 18, 2001 @08:54AM (#144023)
    #pragma is evil - anything after the #pragma is implementation defined. This means that #pragma blah on one compiler might do one thing and something else entirely on another. You can't conditionally use it per compiler since it's at the pre-processor level. (You can't #ifdef out #endif, for example.) _Pragma is nicer, however, as you can do:

    #if <test for GCC>
    #define BLAH _Pragma whatever
    #elif <test for other compiler supporting such enlightenment>
    #define BLAH whatever
    #endif
    BLAH

    Unfortunately I don't know of any other compilers that realise this, so you're sort of stuffed. Most of them allow #pragmas to be ifdef'd out, which is nearly as good.
  • by StandardDeviant ( 122674 ) on Monday June 18, 2001 @11:31AM (#144024) Homepage Journal

    a) printf statements and I/O inside loops in a performance benchmark? hello, McFly... you aren't really testing the compiler there.

    b) gcc only as source??? (see your installation media for any free unix to get binaries, cygwin for win32, etc. etc. GNU may only distribute source but other folks can and do distribute binaries, and I'm sure gcc 3.0 binaries will be released for your platform of choice Real Soon Now)

    c) FP perf: what do those numbers mean? There is no explanation given of how they're arrived at or what scale they're on. "Naked numbers unlike naked ladies aren't terribly interesting."

    d) Ease of Use/Installation: Totally subjective and totally irrelevant to the merits of the compiler. Just because a preteen could install VC++ doesn't make its code any better.

    e) "overhyped","not ready" gcc: ok, so you're a troll. Just try not to be flaming stupid while you're at it. If it isn't ready then why is the operating system I'm using to type this reply on built with it? Is gcc the best compiler ever, well, no, there's no such thing. Frankly I wish gcc supported something more recent in the fortran family than F77 (not that I like Fortran per se but as a scientific coder it's sort of common and stuff).


    --
    News for geeks in Austin: www.geekaustin.org [geekaustin.org]
  • by RevAaron ( 125240 ) <revaaron AT hotmail DOT com> on Monday June 18, 2001 @06:35AM (#144025) Homepage
    What is Google?
  • by istartedi ( 132515 ) on Monday June 18, 2001 @07:14AM (#144026) Journal

    No matter how good your QA process, the chances of catching and squashing every single bug before release are minimal

    Unless you're Microsoft; then you're an incompetent, obnoxious, FUD-spouting dinosaur and every bug that escapes is an indictment of you, your business practices, design methodology, family heritage, preference for breakfast cereal, haircut and anything else associated with you or anyone you have ever met, slept with, laid eyes upon, or casually passed on the street.

  • "They" (the standardisation committee) didn't. However, some early C++ compilers did. VC++ and pre 5.x versions of Sun Workshop at least. Maybe also some old gcc versions (I even think the ARM recommended it, back in the days when it described what most people called C++)

    This made some sense, in that declarations in one scope should last until the end of that scope, so it made the language simpler (for compiler writers, not users), but it was definitely wrong in the sense that it wasn't very useful.

    In summary: The standard got it right, and the original poster got it backwards.

    By setting your environment variables properly, so that you only use one at a time (and running a shell alias each time you want to switch the compiler you use).

    But really, that is only necessary if you use a prebuilt compiler by someone else. If you download the source and read the build instructions you will see that it is no problem at all. I don't remember any details, but it was definitely not an issue last time I experimented a bit with the gcc source (as an experiment in using it as a backend for my own toy language, but gcc turned out to be much too difficult for a toy language :-)

  • by martinde ( 137088 ) on Monday June 18, 2001 @05:22AM (#144029) Homepage
    Did they manage to speed up g++ at all? In the prerelease versions, I was seeing compilation times that would put g++ 3.0 at about 2-3 times slower than 2.95.2. And 2.95.2 was significantly slower than any egcs version. Anyone know if there are plans to address this?
  • by langed ( 142123 ) on Monday June 18, 2001 @05:24AM (#144033)
    Of course, they did the same thing that is done in the first stages of building gcc as a cross-compiler: build a compiler that compiles the compiler you'll eventually use. This early-stage compiler need not support all of C, only the parts that the compiler uses.
    If you look in this Makefile, you'll see that in stages one and two it builds a program called xgcc, which it later deletes, but not before it compiles your cross-compiler.

    The nice thing about doing this is that the compiler that is finally built at the end of the process doesn't necessarily have to be written in "real C". It could be a nice intermingling of any number of languages. It isn't, but it does give them that freedom.

    So to review:
    gcc (installed) -> xgcc -> new gcc compiler

  • by TeknoHog ( 164938 ) on Monday June 18, 2001 @05:19AM (#144040) Homepage Journal
    The first GCC compiled itself. There is nothing contradictory here since, according to Novikov (see his book The River of Time) the present can be affected by future events.

    This process is somewhat similar to the beginning of the universe, which according to Perl zealots started when tiny bits of eval() statements arose from quantum fluctuations. These immediately produced more and more eval()s, resulting in a big bang of code. Eventually, other functions appeared, forming the Universe as we know today.

    There is also a controversial theory which asserts that the first GCC was written with assembler, and that the first assembler was written in binary code by Real Men (TM), but the evidence for this is questionable.

    --

  • by Erasmus Darwin ( 183180 ) on Monday June 18, 2001 @06:04AM (#144051)
    Or if Red Hat users will now be forced to continue to use what has become obsolete software?

    The existence of a new version of software does not (unless you're dealing with the ugly, ugly world of MS Office documents and their lack of backwards compatibility) automatically break older versions. You will not go bald, get cancer, or get attacked by a pack of rabid dogs just because you're using an older version of gcc. You will not see:

    [erasmus@localhost ~]$ gcc -o test test.c -Wall
    gcc: Version 3.0 has been released. You must upgrade. Sorry.

    Furthermore, from the Slashdot blurb (right at the top of the page -- you don't even have to click a link), we've got the following:

    Plans for 2.95.4 (bugfix release), 3.0.1 (bugfix release), and 3.1 (more user-visible features) are all in progress.

    In short, the 2.9x line (which Redhat admittedly bastardized a bit by grabbing a snapshot and calling it 2.96) will still be fixed for bugs.

    Also, for a production system, new, untested code is considered unacceptable. There are bugs in the 3.0 version of gcc, period. Over time, they will get fixed. But just like running an experimental kernel (or even the very first new stable release of a previously experimental kernel) is a great way to shoot yourself in the foot, you don't want to jump to gcc 3.0 unless you've got a reason to use it. And people who have a reason will generally be able to locate and install a copy of gcc 3.0, anyway. Hell, give it a few weeks and they should be able to locate and install an RPMed copy of gcc 3.0. And I'm sure there are people out there who will need gcc 3.0 -- it just won't be the core Redhat demographic, yet.

    In short, did the people at work bitch at me for running RedHat 6.2 on our production machines? No. Would they have bitched if I had upgraded to 7.0 when it came out, broken things in the process, and used the excuse that they just need to wait a year for RedHat 7.2? Hell, yes. Even Slashdot does something similar -- there's plenty of lag between the latest version of Slashcode and what's running on Slashdot.

  • by oconnorcjo ( 242077 ) on Monday June 18, 2001 @07:07AM (#144067) Journal
    The way programmers are pushed today the first x.0 is mostly more of a beta than a release.

    Actually what is happening is that as the tools we create and use become more and more sophisticated (and thus more lines of code in use), the harder it becomes to catch all the possible things that could go wrong in an application. With big projects like the Linux Kernel, GCC, Mozilla (now in beta), KDE, Gnome, XFree86- it is just realistic to assume that even though the developers worked very hard for a stable release, people will find bugs.

  • by iloveAB ( 247945 ) on Monday June 18, 2001 @05:24AM (#144070)
    I use a lot of JAVA, and it looks like we will finally have a full JAVA front-end for gcc with dependency generation (for automatic makefiles).
  • GNU JAVA COMPILER!

    I can finally write in Java and not get made fun of by my elite C++ hax0r friends!

    In case you weren't aware, GCJ is the first Gnu toolset for Java, and it's not just a nasty rehash of Sun's stuff...it's JRE, JIT and NATIVE CODE COMPILER rolled into one. They have an odious refutation of the Write Once Run Anywhere credo which I don't necessarily agree with (the guy must be writing some pretty fierce code if he's had problems like he mentions; I've done distributed Java with the Swing libraries for about a year and never had a problem that wasn't related to Netscape sucking). What I care about, though, is the speed ups. Finally, all my keen little utility programs I've written in clean, attractive Java code (to do stuff like rename files, play music and so on) will run as fast as OS level stuff. I intend on compiling the sweet ass netbeans [netbeans.org] ide as soon as they get AWT working. Maybe I'll finally be able to get it to run as fast on my shitty Celeron windows machine as it does on my MACOS lappy.

    GNU TOOLS FOR LINUX: BECAUSE LINUX USERS HAVE A RIGHT TO CLEAN, ATTRACTIVE, EFFICIENT OBJECT ORIENTED CODE, TOO.
