GCC Gets PCH Support And New Parser
Screaming Lunatic writes "GCC will finally get precompiled header support, which should help with faster compile times. GCC will also be fitted with a new recursive-descent parser that fixes more than 100 bugs in GCC. I'm not sure how they decomposed C++ into a context-free grammar so that it could be parsed using recursive descent."
Uh Links? (Score:1)
the changelog for 3.4 does a better job (Score:3, Informative)
http://gcc.gnu.org/gcc-3.4/changes.html
Because implementations just got contributed (Score:5, Informative)
January 10, 2003
Geoffrey Keating of Apple Computer, Inc., with support from Red Hat, Inc., has contributed a precompiled header implementation that can dramatically speed up compilation of some projects.
December 27, 2002
Mark Mitchell of CodeSourcery has contributed a new, hand-crafted recursive-descent C++ parser sponsored by the Los Alamos National Laboratory. The new parser is more standard conforming and fixes many bugs (about 100 in our bug database alone) from the old YACC-derived parser.
Re:Because implementations just got contributed (Score:2)
January 10, 2003
Geoffrey Keating of Apple Computer, Inc., with support from Red Hat, Inc., has contributed a precompiled header implementation that can dramatically speed up compilation of some projects.
Where are the "Apple only steals, never gives back" trolls???
ANTLR? (Score:5, Informative)
Basically: you are NOT context-free but context-sensitive, of course.
Terence showed that, in practice, the huge drawbacks of a look-ahead of k do not appear often.
I would think the typical look-ahead for C++ is 1 in over 85% of cases, 2 for the rest, and 4 in rare cases.
angel'o'sphere
Re:ANTLR? (Score:2, Informative)
Terence Parr (author of the ANTLR compiler construction tool) says that most languages can be parsed with LL(k) grammars, provided you have a deep enough 'look-ahead' (i.e., 'k' is big enough).
Basically: you are NOT context-free, but you are context-sensitive.
Terence showed that, in practice, the drawbacks of look-ahead do not appear often.
I would think the typical look-ahead for C++ is:
1 in over 85% of the cases
2 for the rest
4 in rare cases
Re:ANTLR? (Score:1)
Having k lookahead is not the same as being context-sensitive. Context-sensitive grammars have productions of the form αAβ → αγβ, where A is a nonterminal and α, β, γ are strings of nonterminals or terminals, zero or more of them (γ nonempty).
Using k lookahead to resolve ambiguities is not the same as being context-sensitive.
Re:ANTLR? (Score:3, Insightful)
a < b, c > d;
Now, is this:
a) a strange but legal expression, parsed like:
(a < b), (c > d);
or b) a declaration of a variable d of template type a<b, c>, parsed like:
(a < b, c >) d;
Both are legal interpretations. The only way you can tell is by looking at the type of a before parsing the rest of the expression. And even this may not help. IIRC, the issue may be undecidable, as template names are in a different "namespace" than variable names (i.e. I can have both a template type and a variable with the same name at the same time).
Note that d being a more complex expression than just a name also allows you to decide. But we're up to a k=7 on the simple case. Make b and c arbitrarily complex expressions themselves, and you can make k need to be arbitrarily large. Or heck, consider:
a < b, c > d, e, f;
(three new variables d, e, and f, or four different expressions combined with the comma operator?).
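The flip described here can be seen live: a minimal sketch (all names invented for illustration) where the same token sequence means a declaration when a is a template, and would mean two comparisons if a, b, c, d were plain values:

```cpp
// A minimal sketch of the a<b,c> d; ambiguity; all names are illustrative.
template <int B, int C> struct a { int value = B + C; };

constexpr int b = 1, c = 2;

int declaration_reading() {
    // Because 'a' names a template here, this token sequence declares
    // a variable d of type a<1, 2>:
    a<b, c> d;
    return d.value;                    // B + C == 3
}

int expression_reading() {
    // If a, b, c, d were ordinary ints, the tokens  a < b , c > d ;
    // would parse as (a < b), (c > d): two comparisons joined by the
    // comma operator. Shown here with renamed variables:
    int w = 0, x = 1, y = 2, z = 1;
    return (w < x, y > z);             // value of (y > z), i.e. 1
}
```

The parser cannot pick a reading without first knowing what kind of entity a names.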
And it doesn't matter how infrequent this code construct is. The compiler has to deal with this situation, and situations like it, and deal with them correctly. C++ simply is not LALR(1)-parsable, and likely not cleanly parsable at all. It's a fundamental flaw in the language.
And this does cost. YACC was developed because it's easier to write and maintain parsers in YACC than to hand-roll parsers for any nontrivial but LALR(1)-parsable language. Time and effort will go into the development and maintenance of the compilers. For commercial products, this means higher prices. For free products like GCC, this means fewer new features and fewer bugfixes in other parts of the code.
Brian
Re:ANTLR? (Score:2)
The above code should look like:
a < b, c > d;
(a < b), (c > d);
(a < b, c >) d;
a < b, c > d, e, f;
etc. Must remember to preview before posting.
Re:ANTLR? (Score:2)
Building an LL-k parser for a language which is not really suited to it was long considered a bad idea.
Reason: k may be infinite.
Reality: you only ever parse finite source files.
Fact: constructs in which you need a big k are rare!
I did not say that your lists of comparisons do NOT exist or only exist in rare cases; I meant: if they appear, they are usually short.
So the specific k needed in the samples you brought up is: 6?
angel'o'sphere
Re:ANTLR? (Score:1)
I came across a simpler example of this (I think) when I tried to write a C parser. Take the following code:
This can mean one of two (or possibly more) things:
The meaning depends on what 'f' represents. If 'f' is a type, then it is a variable declaration. Otherwise, it is a function call. I think this is what makes C impossible to parse with a plain context-free grammar. You need to do some weird stuff in your YACC production rules to get the proper result.
As far as I can tell, it's really a silly thing that could have easily been avoided by changing the pointer syntax a little. As a matter of fact, I think that the thing causing problems is the way pointer-type variables are declared (Java doesn't have this issue). I guess the reason the problem exists is that the original parser was hand-written and so the Bell Labs guys didn't bother with the computational-theory side of things.
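The snippet itself is missing above; a representative sketch of the ambiguity being described (the identifiers are hypothetical, not the poster's original code) is the classic typedef case:

```c
/* Illustrative sketch of the C "declaration vs. expression" ambiguity;
 * the identifiers are hypothetical. */
typedef int f;   /* with this typedef in scope ... */

int demo(void) {
    /* ... the next line DECLARES a pointer 'a' of type f (i.e. int):
       the tokens  f (*a);  form a declarator, not a call. */
    f (*a);
    int x = 7;
    a = &x;
    /* Without the typedef, the same tokens would parse as a call to a
       function f with argument *a. The parser must consult the symbol
       table to know which reading applies. */
    return *a;
}
```

This is why C lexers for yacc-based parsers typically feed typedef names back from the symbol table as a distinct token kind.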
Re:ANTLR? (Score:4, Informative)
The quote from Terence Parr is misleading.
While most languages are LL(k) for some k, most grammars are not, and require some massaging to get into a form which LL parsers will accept. The massaged version is invariably not the "most preferred" way to specify the language. In many cases, a compiler compiler (such as ANTLR) can do the massaging automatically, but this is often not the case.
Both ISO and ARM C++'s grammars, in particular, are inherently ambiguous and require semantic disambiguation no matter what you do.
Re:ANTLR? (Score:1)
Also, the standard C++ grammar is not free of left recursion, so LL (or recursive-descent) parsing is not well suited to this grammar at all.
ANTLR uses infinite lookahead (Parr calls these predicates), and therefore, in theory, the parsers it constructs are non-linear.
Either Mark Mitchell of CodeSourcery has transformed the grammar into some unrecognizable gob of rules, or he uses backtracking and infinite look-ahead. In the first case some serious restructuring of the syntax tree is necessary.
I have just finished a thesis on the problems of parsing C++, and I would not have chosen recursive descent. I would have chosen automatically generated recursive ascent (LALR(1)) instead, with backtracking and dependency on context information (aka lexical feedback). But I hope the best for their approach.
I just hope that the parser can also be used for purposes other than the GCC compiler, i.e. that it is a separate module, because a decent, free C++ frontend is needed by many projects (editors, code analyzers).
Re:ANTLR? (Score:2)
I figure that, after a reasonable attempt to figure it out from context, the compiler should be free to throw up its hands in disgust and tell the programmer something like: "I can't figure out what the hell you mean here, and if I can't, probably a human couldn't either. Please make it less ambiguous."
As the various obfuscated C (etc) programming contests prove, just because code is syntactically (and semantically) legal doesn't mean it's comprehensible.
infinite lookahead is still required. Actually only very few problems are solved with finite (aka k) lookahead.
Not to worry. If it can't be solved with finite lookahead, the program source itself must be infinite (or incorrect), and the compiler will never halt anyway.
Determining ahead of time whether this is the case is left as an exercise for the student.
Could I use that parser independently? (Score:2, Interesting)
Anyway, the links provide little to no information. Perhaps someone knows more.
Re:Could I use that parser independently? (Score:1)
Sorry, I couldn't help myself
Re:Could I use that parser independently? (Score:1)
Sorry, neither could I
Yes, sort of, with some help (Score:3, Informative)
You don't really want that kind of thing done in the parser, because your refactorer (or whatever) would then have to handle the possibility of incorrect code. The parser is handling syntax (mostly), remember; semantic correctness is checked later.
You want GCC to parse the code, check the code, and then do something other than generate assembly. To some extent that's being done already (there are command-line options to dump various representations of the source, e.g., -fdump-translation-unit).
Also, the back-end (code generation) is separate from the front end (language handling), so if you were to implement a back-end whose "assembly language" output was actually, say, XML, then you would have a C-to-XML, C++-to-XML, Java-to-XML, FORTRAN-to-XML, ObjectiveC-to-XML, and whatever-else-I'm-missing-to-XML converter. Dunno why you'd want such a thing, but you could probably build some kind of program database out of it (which is what IDEs use to do things like function name completion when typing code).
All of which is independent of the front-end parser.
Re:Could I use that parser independently? (Score:2)
Why the extra step? (Score:3, Interesting)
Why, why, why, why? Why can't the header file simply be compiled at the first inclusion and cached somewhere? I know I am bitching about a single step here, but can anyone explain to me the rationale behind this?
Re:Why the extra step? (Score:5, Informative)
Re:Why the extra step? (Score:2)
Re:Why the extra step? (Score:5, Insightful)
A makefile lets me control the building of each and every object file.
Precompiled headers should work the same way, or they won't be as flexible as the rest of the build.
Re:Why the extra step? (Score:5, Interesting)
But that's just what make will do. Why rebuild the same functionality within a different tool? Basically, the reason is (probably, I'm not a GCC developer) the UNIX philosophy of having small tools doing their job. GCC is a compiler and nothing else, make is a tool that decides what needs to be compiled.
If you want automation, you can always use an IDE (or some other tool) that includes a make equivalent or that creates appropriate makefiles for you.
Re:Why the extra step? (Score:1)
Re:Why the extra step? (Score:2)
Yes, that's what I meant. If you write an appropriate makefile rule, make will cause the headers to be recompiled only when they have been modified. Effectively, this means that make will be "caching" your precompiled headers.
Re:Why the extra step? (Score:1)
gcc niftystuff.h -o niftystuff.o
I suppose it might work, but it is meaningless. You can't access the declarations in niftystuff.h by linking with niftystuff.o, since the declarations are a compile-time feature.
The best you can do is 1) minimize the headers included in each source file, and 2) recompile a source file only when necessary.
As far as I can tell, you are talking about part 2. If properly set up, make can compile a source file only when it, or something it depends on, has changed.
The point of precompiled headers (.pch) is to save the time spent parsing the declarations of headers. That way, even when a source file does have to be recompiled, its headers do not have to be re-parsed from scratch.
This is primarily useful for headers that come from outside of the current project, e.g. OS headers, CRT headers, X11 toolkit headers, libraries, etc. You probably never change these, so you precompile these headers once when they are installed (or upgraded). Now you never have to wait for stdlib.h to parse.
gcc already has precompiled headers? (Score:1, Interesting)
Re:gcc already has precompiled headers? (Score:3, Informative)
Close (Score:2)
The version of GCC that ships with OS X has been heavily hacked on to make it work better for that operating system. One of the things Apple added was a simple implementation of PCH.
Other companies have done the same thing, many times. One of the customized versions of GCC that Cygnus did for a customer had a (simple) PCH in it, too.
The version being checked in to GCC is the result of all these different implementations coming together.
what a coincidence (Score:5, Funny)
context free grammar
This is good for slashdot, which is a grammar-free context!
old story? (Score:1)
This work will happen in early 2001.
am i missing something?
Re:old story? (Score:1)
PCH is so 1993... (Score:1, Informative)
Is this simply something that nobody cared about until someone finally got around to doing it, or was it a tricky feature to implement given that GCC already worked quite well enough without it?
Re:PCH is so 1993... (Score:1, Insightful)
Standard C++ Easier (Score:5, Interesting)
(OT) Just wait until you see C++0x. It will (probably) support variable definitions like
auto iter = some_map.begin();
and figure out a type for iter by looking at the result type from map<>::begin().
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:2)
where's good info on the possible features of C++0x? I've seen the a/v presentations on technet from a year or so ago.
Re:Standard C++ Easier (Score:2)
There's nothing public I know of. Join! Help!
Re:Standard C++ Easier (Score:2)
What's the problem with writing out the iterator type explicitly, or simply defining meaningful-sounding typedefs for templates?
Re:Standard C++ Easier (Score:2)
The problem with
blah::iterator iter = blah.begin();
is that sometimes, you don't know the type of blah, or even blah::iterator. Search the Google Groups archives of comp.lang.c++.moderated, comp.std.c++ and similar groups if you're interested in why this matters.
Re:Standard C++ Easier (Score:2)
typeof(some_map)::iterator iter = some_map.begin();
But I guess that
typeof(some_map.begin()) iter = some_map.begin();
is only a slight step away. And the auto there looks awfully nice.
Re:Standard C++ Easier (Score:2)
That's there too. It's more useful for library declarations than for straight coding, but both of the declarations suggested will work.
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:2)
auto int x = 0;
auto x = 0.0f;
Unfortunately, it could make for confusing constructs like
static auto x = 0.0f;
Maybe we should just stick with typeof -- it's more verbose but no less functional.
C++ Type Inference, great.. (Score:2)
(Of course, SML and O'Caml (and related languages) have had much more sophisticated type inference for 30 years!)
Re:Standard C++ Easier (Score:2, Interesting)
Re:Standard C++ Easier (Score:3, Informative)
You can resolve the ambiguity using the same rules that decide which version you'll be calling (which you have to work out anyway, of course) and take the return type of that version. As long as you've got context -- which you have in the example case -- it's not a problem. In general, you'd need to specify which overload you wanted. It's kind of like explicit template instantiation (and equally horrible if you have to do it).
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:2)
The context is (and must be) unambiguous at the time of the function call. In this case, you'll get the const version if you call begin() on a const object, and it's basically as simple as that. Whether or not you'll ever modify data doesn't have any bearing on its constancy in C++, which is unfortunate at other times, but helpful here.
Re:Standard C++ Easier (Score:2)
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:2)
I appreciate that it's not always what you want, but it is always what you'll get. When you use a const_iterator on a non-const container, you're still getting a normal iterator back first, it's just then being implicitly converted to the constant access version. If you want to do that, you're explicitly overriding the rules of the language, in which case presumably the auto stuff mentioned isn't what you're looking for anyway.
Re:Standard C++ Easier (Score:2)
class Foo {
    typedef std::map<std::string, std::string> Data;
    Data m_data;
public:
    std::string getBar(const std::string & key) const {
        Data::const_iterator i = m_data.find(key);
        if (i == m_data.end())
            return "";
        return i->second;
    }
};
This is a common and useful idiom (btw, for people less experienced with C++: using [] to check for existence of a value in a map is a BAD thing, as the [] operator always creates a new element if it can't find an existing matching one).
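A minimal illustration of that pitfall (the container contents are invented):

```cpp
// Demonstrates the std::map::operator[] pitfall: indexing with a missing
// key silently inserts a default-constructed value.
#include <cstddef>
#include <map>
#include <string>

// Size of the map after an apparent "read-only" lookup with [].
std::size_t sizeAfterSubscript() {
    std::map<std::string, std::string> m;
    bool empty = m["missing"].empty();   // inserts ("missing", "")!
    (void)empty;
    return m.size();                     // 1, not 0
}

// find() is the non-mutating way to test for a key.
std::size_t sizeAfterFind() {
    std::map<std::string, std::string> m;
    bool present = (m.find("missing") != m.end());
    (void)present;
    return m.size();                     // still 0
}
```

This is also why operator[] cannot be a const member function on std::map.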
Personally I find that I use const_iterator all over the place, most of the time with non-const containers, as I tend to keep the number of operations that mutate data members to a minimum, and const_iterator serves the dual purpose of protecting me from accidentally introducing bugs and making my const member functions correct.
Personally I don't see much value in auto in the first place, and in this case I'd be unable to use it most of the time, while a proper typeof operator gives the flexibility of auto, works with my case, and provides lots of other benefits without the problem of overloading the meaning of yet another keyword (it's bad enough with static).
Re:Standard C++ Easier (Score:2)
You're not overriding it in that sense, sure. My point was that the language rules are clear on the overriding: if you call the method on a non-const object, you get the non-const method, whether or not you will use it as such. This can be a pain, for example in the context of operator[] on an associative array type, where you might prefer to throw an exception rather than implicitly create an entry that does not yet exist if you're looking up rather than assigning. You are, of course, at liberty to convert the returned value yourself into a constant form, but that is a second step that the language would not ordinarily take for you.
The purpose of something like auto would be to replace the first step. If you're wanting to explicitly set the type, you don't need auto. Its value is elsewhere; look at some of the recent template discussions in places like comp.lang.c++.moderated and you'll find the need for that feature, or something like typeof, does come up every now and then.
Re:Standard C++ Easier (Score:1)
But you will not use
auto x = map::begin();
You will use
auto x = some_map.begin();
The overloading of begin() depends on whether some_map is itself const or not, so I see no problem here.
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:1)
Your first comment talks about overloading which is simply calling a different function according to the parameter. So map::begin() is overloaded as:
const_iterator begin() const;
or
iterator begin();
To use overloading for your case:
auto x = static_cast<const map&>(some_map).begin();
but this clearly defeats the purpose, as you still have to spell out the type here, and you might as well use:
map::const_iterator x = some_map.begin();
Re:Standard C++ Easier (Score:1)
Re:Standard C++ Easier (Score:1)
Sure it is useless. Go on and write something like:
for (std::map<std::string, std::vector<my_ns::my_class>, my_comp>::iterator i = v.begin(); i != v.end(); ++i)
instead of the seemingly cleaner
for (auto i = v.begin(); i != v.end(); ++i)
:)
Re:Standard C++ Easier (Score:1)
using std::map;
using std::vector;
using std::string;
typedef map<string, vector<my_ns::my_class>, my_comp> mymap;
for (mymap::iterator i = v.begin(); i != v.end(); ++i)
Check out Stroustrup's book, as this is what he frequently does.
I really hope they don't add it to the standard, it will only lead to more shitty code.
Um, that's already in the library (Score:2)
You don't need GCC to implement it, nor do you need to wait on C++0x. Every container declares appropriate nested typedefs. Using your example:
map::iterator iter = some_map.begin();
The "figure out a type for you" was done when some_map was declared.
Re:Um, that's already in the library (Score:2)
devphil's code above is an example of a workaround. It was my work to insist on having typedefs all over the STL classes, for this purpose. They were necessary clutter because of a weakness in the language. They will remain as not-so-necessary clutter, for backward compatibility.
Well done GCC, but.... (Score:4, Insightful)
Re:Well done GCC, but.... (Score:4, Insightful)
Re:Well done GCC, but.... (Score:3, Interesting)
You could argue that including only the very specific headers you need for each source file is the best way to go, but I think it is a reasonable trade-off to include all these static system/library headers in a precompiled header, then to re-reference the specific headers in the user source code to indicate the dependency explicitly. I totally agree with the stuff in the book about people binding modules together too tightly through inter-dependencies in user headers though (although I'm not convinced by everything he talks about :)
I usually chuck almost all of the STL and a lot of Boost into my precompiled headers when setting up a build, which cuts the full rebuild time at least in half, usually more. I'm always reluctant to do a full rebuild of the same module under g++ as I know it'll take a long time (comparatively). Admittedly, the larger each source file is, the less the benefit, but I tend towards lots of small to medium sized source files anyway.
I'm not sure how the gcc version will do it, but the msvc (boo, hiss etc.) version actually takes a memory dump of the parsed code tree when pre-compiling, then just copies this back into memory for successive compilations, so the speed increase is dramatic. Hopefully the gcc version does something similar (or better). Be nice if it had something similar to ccache [samba.org] built in as well.
Re:Well done GCC, but.... (Score:2)
Re:Well done GCC, but.... (Score:1)
Re:Well done GCC, but.... (Score:5, Interesting)
Putting less into each header file helps.
I haven't had a lot of luck with precompiled headers, either. (Context: a project with a hundred source files spread across a dozen directories, totalling about fifty thousand lines of source.)
Best solution I know of for C++: Use as many forward declarations as you can, periodically trim your include directives, and have relatively large source files.
I know of C++ systems that take a CPU week to build because of these issues!
Note that Java doesn't have this problem, or the problem of teaching your makefile about header file dependencies. (Not important enough to get all projects to switch from C or C++, but among the reasons that some projects should.)
Re:Well done GCC, but.... (Score:2)
This occurs because programmers created these superheaders where everything goes. Instead, related declarations should be clustered in a single include file, so that a single #include statement can satisfy several uses. Conversely, unrelated declarations should be split over different headers. Headers should have include guards to avoid repeated inclusion. Finally, a header should #include everything it needs to compile, but no more.
Messy headers are a symptom of a deeper problem with the organization. If the developer didn't even bother thinking about where to put the API, it's a good bet that the API is not well documented either.
Look at the Standard C Library for an example. It's a relatively small library by today's standards, yet its declarations are broken out over two dozen headers.
(Not important enough to get all projects to switch from C or C++ [to Java], but among the reasons that some projects should.)
Which projects? If it's small enough, the build time wasn't worth worrying about to start with. If it's big enough for the build time to be a problem, then the rewrite can cost you your entire competitive advantage.
How does that work with templates? (Score:2)
I have no idea whether those libraries could be rewritten to have small headers, it would probably require a working version of the "export" keyword at least, but I don't know if that is enough.
Recursive Descent / Context Freeness (Score:5, Informative)
Consider a simple language of expressions and declarations:
exp ::= x | n | print exp | let dec in exp end   [exp]
dec ::= val x = exp   [dec]
(where x is the set of variables and n is the set of integer constants)
This language is context-free, but the following restriction isn't: We say that strings are only in this language if variables aren't used before being declared. Legal:
let
val x = 3
in
print x
end
Illegal:
let
val x = 3
in
print y
end
This language isn't context-free (in the usual sense) but can be parsed easily by a recursive function. That function simply takes with it a list of all the declared variables. (In fact, you can pull this same sort of hack with lex/yacc by having the lexer make a call into your code, which keeps a symbol table of variables it has seen as it runs.)
(If I understand the problem with C and C++ correctly, the difficulty parsing has to do with recognizing a token as a type name or an identifier, so I think this is relevant.)
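The hack described above can be sketched as a tiny recursive-descent recognizer for the let-language (illustrative only, not GCC's code), threading the set of declared variables through the recursion:

```cpp
// Recursive-descent recognizer for a tiny let-language. A symbol table
// is passed down the recursion so "use before declaration" is rejected
// during parsing, even though the restriction is not context-free.
#include <cctype>
#include <set>
#include <sstream>
#include <string>
#include <vector>

struct Parser {
    std::vector<std::string> toks;
    std::size_t pos = 0;

    explicit Parser(const std::string& src) {
        std::istringstream in(src);
        for (std::string t; in >> t; ) toks.push_back(t);
    }

    std::string peek() const { return pos < toks.size() ? toks[pos] : "<eof>"; }
    bool eat(const std::string& t) { if (peek() == t) { ++pos; return true; } return false; }

    // exp ::= "let" "val" x "=" exp "in" exp "end" | "print" exp | x | n
    bool exp(std::set<std::string> vars) {            // by value: scoping is free
        if (eat("let")) {
            if (!eat("val")) return false;
            std::string name = peek(); ++pos;
            if (!eat("=") || !exp(vars)) return false;
            vars.insert(name);                        // in scope for the body only
            return eat("in") && exp(vars) && eat("end");
        }
        if (eat("print")) return exp(vars);
        std::string t = peek(); ++pos;
        if (!t.empty() && std::isdigit(static_cast<unsigned char>(t[0])))
            return true;                              // integer constant
        return vars.count(t) > 0;                     // variable must be declared
    }
};

bool accepts(const std::string& src) {
    Parser p(src);
    return p.exp({}) && p.pos == p.toks.size();
}
```

With this, the legal example above is accepted and the illegal one (print y with no declaration of y) is rejected while parsing.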
LL(k)? I thought LALR(1) was "better." (Score:2)
Re:LL(k)? I thought LALR(1) was "better." (Score:3, Informative)
In terms of grammars alone, I believe that is somewhat correct but we're talking about a compiler here. LL parsing is often helpful because it can create and use inherited attributes. A top-down parser can perform some of the semantic work that an LR bottom-up parser cannot.
I also don't think that the recursive call stack should be of much concern because GCC will probably do something like that anyway (though maybe not as fine grained) in the next compiler pass. As said before, it might actually reduce the amount of work.
What do you mean by 'huge memory requirements (non-deterministic)'? Yes, an LL parser will maintain a deeper stack but the memory usage is by no means unbounded.
I don't know how you can classify changing GCC's internal structure as 'feature-creep'. LL grammars are usually considered easier to read/understand and despite what some wannabe-macho programmers may think, readability/clarity is good. It is also easier to write meaningful error messages because an LL parser kind of models the way we naturally think.
Now I'm not saying that LL parsers are better. The workings of LR grammars are extremely interesting (Knuth is a god). However, don't knock it for no reason at all (stick to 2.95? gimme a break). I'm sure the GCC guys know more than both of us about compiler design and have good reasons for their design decisions.
Re:LL(k)? I thought LALR(1) was "better." (Score:2, Informative)
Actually a table based solution like bison is slower than implementing the parser directly in code (like recursive descent).
But giving up on LALR parsing is not necessary. You can use recursive ascent. See this article [arizona.edu] that claims a speedup of 2.5 to 6.5 compared to table based LALR(1).
Re:LL(k)? I thought LALR(1) was "better." (Score:1)
I have never heard of the term recursive ascent.
Basically the work mentioned in that technical report says that if you convert a table driven LALR parser into a directly executable function then you can get a 2.5 to 6.5 speed up for the parser.
A little background: The tables in bison- and yacc-generated parsers encode information about what to do at some particular point in the parse. For example, the tables could say state 20 should shift token 1 or 2, but if it's token 4 then a reduce should take place. Basically, the tables in an LALR parser are state machines.
What we showed was that if you replace the tables with executable code the resulting parser will be faster. And the amount of code that is generated is not too big. So the above example could be encoded as a switch statement with case labels for 1,2 and 4.
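Concretely, the "tables become a switch" idea might look like this (a fabricated fragment for the example state above, not the paper's code):

```cpp
// A fabricated "directly executable" fragment standing in for one row of
// an LALR table: state 20 shifts on tokens 1 and 2, reduces on token 4.
enum Token { TOK1 = 1, TOK2 = 2, TOK4 = 4 };
enum Action { SHIFT, REDUCE, ERROR };

// A table-driven parser looks this decision up in a table indexed by
// (state, token); compiling it into a switch removes the interpretation
// overhead -- each state becomes ordinary branchy code.
Action state20(Token t) {
    switch (t) {
        case TOK1:
        case TOK2: return SHIFT;
        case TOK4: return REDUCE;
        default:   return ERROR;
    }
}
```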
Of course there are a lot more details that need to be taken care of, but those are mentioned in the paper.
I feel very safe concluding that a directly executable parser would always be faster (for any real world language) than a table based parser. I would similarly expect that the recursive descent parser added to gcc would be faster than the old table driven one.
One final note: In case anyone wants the source code, it's not ready. I've been wanting to clean it up and release it to the world for the past 7 years now, but I've not yet managed to find the time!!
Re:LL(k)? I thought LALR(1) was "better." (Score:2)
Wait a minute, function calls tend to incur penalties on almost every OS & platform combination. Unless, of course, there's a lot of inline code and globals; then you can probably cheat and not set up any stack frames (except pushing the return address).
But any program you can implement with recursive function calls, you can implement with one function and a stack. Your new task: implement an LALR(1) C parser with one function and a heap. A "real" solution, i.e. GCC, would probably involve a stack and FSM code generated by a tool that could do state minimization, the whole reason for using tables to begin with.
Objective-C++? (Score:1)
While considered ugly by some, Objective-C++ would allow for the easier creation of Objective-C bindings for C++ libraries under platforms supporting the GCC Objective-C compiler. I would sure look forward to Objective-C bindings to libraries such as wxWindows and FOX.