Link-time optimization versus project inlining: limitations of each approach? [closed]


When designing proper software architectures in C++ that also need to have great performance, people often enter the dangerous game of premature optimization; but rather than optimizing at the architecture level (which is a perfectly good and encouraged form of premature optimization), they make compromises at the code level, like avoiding virtual methods and interfaces altogether, low-level hacks, etc. Some people avoid this with a practice usually called application inlining or unity builds, which basically consists of generating one or two really big .cpp files that include all the headers and .cpp files of the whole project, and then compiling them as a single translation unit.
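A minimal sketch of what such a unity file looks like (the file names below are hypothetical; in practice the list would be generated from the project's source tree):

    // unity.cpp -- the only file handed to the compiler for the optimized build.
    // Every definition ends up in one translation unit, so the optimizer can see
    // through calls that would otherwise cross object-file boundaries.
    #include "widget.h"
    #include "widget.cpp"     // hypothetical project sources
    #include "renderer.cpp"
    #include "main.cpp"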

This approach is very reliable when it comes to inlining virtual methods (devirtualization), since the compiler has everything it needs to make the required optimizations. Question: what drawbacks does this approach have compared to more "elegant & modern" methods like link-time optimization?

c++ optimization compiler-optimization

asked Jul 14 '11 at 21:44 by lurscher (edited Jul 17 '11 at 14:43)

Some people" are just wrong. Don't listen to them! – Bo Persson Jul 14 '11 at 21:52 3 please explain why they are wrong, that is what the question is about.

Thanks! – lurscher Jul 14 '11 at 21:54 possible duplicate of (c++) The benefits / disadvantages of unity builds? – larsmans Jul 14 '11 at 22:06 just found this for automation of the generated .

Cpp with CMake: cheind.wordpress.com/2009/12/10/… – lurscher Jul 14 '11 at 22:17.

The technical name, approaching minor buzzword status, for that approach is unity build. See for example: (c++) The benefits / disadvantages of unity builds? The downside is best described here: leewinder.co.uk/blog/?p=394 The short version is that it is more or less a choice of languages: you either write in regular C++ or unity-build C++.

The 'correct' way of writing virtually any code will differ between the two.

Ok, so it seems its main drawback is maintainability - and that's not hard to imagine, but I still think there is room for automating the generation of the all-in-one .cpp – lurscher Jul 14 '11 at 22:13

For a large project, the technique of having "all files in one" potentially increases the build time, though that only matters to the developers. With multiple smaller files, a code change usually causes less code to be compiled, so incremental builds should be faster. If changes are more often made to header files or other components that many files depend on, then the single-file approach saves build time, since a changed header need not be processed separately by each of its many dependents.

There are a surprising number of compilers which cannot process very large source files, even some that are widely used and whose popularity is far out of proportion to their quality. Such compilers tend to choke at random, report errors for syntactically correct code, or generate incorrect instructions. A project with multiple developers and old version-control tools might also have trouble coordinating changes to the limited number of modules.

This practice is not for development! It is only for doing optimal release builds. The project is still split into several .cpp files; it's only the final build which is done this way (I should edit my post to make that point a bit clearer). You have a point on compilers choking on big source files though, but that can always be solved by splitting into more, smaller .cpp files and still reaping the rewards of out-of-the-box devirtualization – lurscher Jul 14 '11 at 22:06

Doing unity builds for release rules out things like anonymous namespaces, local helper functions, and file-static variables. Now everything is global! Can cause interesting new overload resolutions. – Bo Persson Jul 14 '11 at 22:42

If you have two helper functions with different roles and the same name, is that not confusing anyway? – Daniel Aug 4 '11 at 7:50
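A hypothetical illustration of the overload-resolution point (all names invented, not taken from the thread). Compiled as its own translation unit, the call below would pick helper(long); once the files are merged into one unity TU, helper(int) is visible at the call site and silently wins:

    // What the merged unity translation unit effectively looks like:
    #include <cstdio>

    // --- originally in logging.cpp (hypothetical) ---
    void helper(int x)  { std::printf("helper(int): %d\n", x); }

    // --- originally in math_utils.cpp (hypothetical) ---
    void helper(long x) { std::printf("helper(long): %ld\n", x); }
    void compute()      { helper(5); }  // alone in its own TU this called helper(long);
                                        // merged, helper(int) is an exact match and wins

    int main() { compute(); }           // prints "helper(int): 5"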

Just think about it for a minute. There's some part of your program, call it A. Say the whole program takes time T without any optimization, and optimization cuts the time spent in A in half. Now there are other parts of your program also, so let's think about what effect A's optimization has on the whole program.

If the time originally spent in A was none (0% of T), then what is the effect of A's optimization? Zero. If the time originally spent in A was all of it (100% of T), then what is the effect of A's optimization?

A saving of T/2. If the time originally spent in A was half (50% of T), then the effect of A's optimization is a saving of T/4. So the benefit of optimizing a piece of code is proportional to the amount of time originally spent in that code. So if one wants inlining, avoidance of virtual functions, etc. to have a significant benefit, what kind of code does it have to be?

It has to be code that, before optimization, contained the program counter (exclusive time) for a significant fraction of the time. In significant applications, containing many layers of function/method calls, where the call tree nearly always dips down into new, or I/O, or string libraries, or data-structure libraries, or database libraries that are outside the application's source code, what percent of total time is exclusive time in the code compiled as part of the application? Often (not always) little to very little.

And the potential benefit from inlining or other compiler/linker optimization is proportional to that.
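Stated as a formula (just a restatement of the reasoning above, essentially Amdahl's law): if a fraction $f$ of the total running time $T$ is exclusive time in the code being optimized, and the optimization speeds that code up by a factor $s$, then the overall saving is

$$\Delta T = f \, T \left(1 - \frac{1}{s}\right),$$

so with $s = 2$ the saving is $T/2$ when $f = 1$ and only $T/4$ when $f = 0.5$, matching the examples above.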

I posed the example of virtual vs. inline calls because it's sort of representative of the gains from interprocedural optimizations that are not possible with the traditional compilation model, but certainly it's not the only benefit you'll get from it – lurscher Jul 15 '11 at 3:42

One obvious drawback is potential clash between static or local symbols:

    // File a.cpp
    void make_peace();        // declared elsewhere
    namespace {
        void f() { make_peace(); }
    }
    void action() { f(); }

    // File b.cpp
    void launch_missiles();   // declared elsewhere
    void f() { launch_missiles(); }
    void action2() { f(); }

If b.cpp is included before a.cpp, Bad Things happen. Another drawback is that compilers (MSVC?) may not cope very well with large files. Also, every time you change a tiny bit of your code, you're in for a full compilation. In conclusion, I'd never do that.

It is not worth the extra bucks for an SSD* or a more powerful build machine. And link-time code generation is usually good enough. The rare benefits I see are enhanced virtual-function analysis and better alias analysis, but I really don't think they are worth it.

You can use restrict or its variants for the latter, and the former rarely has a real impact on your program's performance. * If compilation times really bother you, try replacing the hard disks with SSDs on your build machines. Magic.
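For reference, a small sketch of the restrict idea mentioned above. Standard C++ has no restrict keyword, but most compilers accept __restrict (or __restrict__) as an extension; the function here is just an invented example:

    #include <cstddef>

    // With __restrict the compiler may assume dst and src never alias,
    // so it is free to vectorize or reorder the loads and stores.
    void scale(float* __restrict dst, const float* __restrict src,
               float k, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = k * src[i];
    }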

Good points; add to that that using clauses should not be at global scope for the same reasons. About MSVC, yes, but for compilers with coping problems you can always do a partial unitization into 2 or more big (but not so big) .cpp files – lurscher Jul 15 '11 at 1:53

@lurscher: you never use using, do you? – Alexandre C. Jul 15 '11 at 7:06

yes, if you do this, you have to be like Steely Dan: no static at all. – soru Jul 15 '11 at 10:48

@Alexandre, afaik using will still be OK to use as long as it is in either class scope or function scope. @soru, what is the problem with static? – lurscher Jul 15 '11 at 13:11

@lurscher: using at class scope may be ok (and sometimes necessary), although I take it as a sign of poor design. At function scope, it can have nasty effects, especially when ADL is thrown into the mix: the only using which is really unavoidable (this issue has been cleared up in C++0x) is using std::swap at function scope, so that you can look up your own swap function via ADL if it is not possible to specialize std::swap. – Alexandre C. Jul 15 '11 at 21:07
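For what it's worth, the using std::swap idiom Alexandre refers to looks roughly like this (the Widget type is made up for illustration):

    #include <utility>   // std::swap
    #include <vector>

    namespace lib {
        struct Widget {
            std::vector<int> data;
            // Custom swap, found via ADL because it lives in Widget's namespace.
            friend void swap(Widget& a, Widget& b) { a.data.swap(b.data); }
        };
    }

    template <class T>
    void reorder(T& a, T& b) {
        using std::swap;   // fallback for types without their own swap
        swap(a, b);        // ADL picks lib::swap for lib::Widget, std::swap otherwise
    }

    int main() {
        lib::Widget a, b;
        reorder(a, b);     // uses lib::swap
        int x = 1, y = 2;
        reorder(x, y);     // falls back to std::swap
    }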
