Optimizations in C++ Compilers

guodong | 102 points

> I went home that evening and created Compiler Explorer.

Nice try. You can’t escape being known as a verb now.

Everyone knows the tool as godbolt.

orangepanda | 4 years ago

Compiler Explorer (aka godbolt) is awesome; I use it at least weekly.

It's amazing how many more code generation questions occur to me now that there's so much less friction in getting the answers.

usefulcat | 4 years ago

The floating point comment leaves out that one can use

    #pragma omp simd reduction(+:res)
as a more precise way to vectorize the reduction (compile with -fopenmp-simd to enable only the SIMD pragmas, without linking an OpenMP runtime): https://godbolt.org/z/17oTz1

Unfortunately, the pragma is not supported with the new-style class iterators in any released compiler, though it works in clang-trunk: https://godbolt.org/z/hbP11W

Note that Clang disables floating-point contraction by default (so no vfmadd instructions), even though fused operations are more accurate. One usually wants contraction enabled globally (-ffp-contract=fast), except when trying to bitwise reproduce results from software built for pre-Haswell hardware, which lacks FMA.
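
For concreteness, a minimal sketch of the kind of reduction the pragma targets (function and variable names are illustrative); build it with -fopenmp-simd as described above so the pragma is honored without pulling in the OpenMP runtime:

    // Hypothetical example: hint that the floating-point additions may be
    // reordered so the reduction can be vectorized. Build with e.g.
    // "g++ -O2 -fopenmp-simd" or "clang++ -O2 -fopenmp-simd".
    #include <cstddef>

    float sum(const float *values, std::size_t n) {
        float res = 0.0f;
        #pragma omp simd reduction(+:res)
        for (std::size_t i = 0; i < n; ++i)
            res += values[i];
        return res;
    }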

jedbrown | 4 years ago

> I hope that some of these optimizations are a pleasant surprise and will factor in your decisions to write clear, intention-revealing code and leave it to the compiler to do the right thing.

This was my key takeaway from this article. Clear, maintainable code will have good enough performance most of the time. I was particularly impressed by the devirtualization optimizations and will be less likely to shy away from using polymorphism in the future over performance concerns.
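
For anyone curious what that devirtualization looks like, here is a hedged sketch (the class names are made up): when the compiler can see the object's dynamic type, and especially when a class is marked final, it can turn the virtual call into a direct call or fold it away entirely at -O2.

    #include <cstdio>

    struct Shape {
        virtual ~Shape() = default;
        virtual int sides() const = 0;
    };

    // 'final' tells the compiler no further overrides are possible.
    struct Triangle final : Shape {
        int sides() const override { return 3; }
    };

    int count_sides() {
        Triangle t;                // dynamic type is known here
        const Shape &s = t;
        return s.sides();          // typically devirtualized, often folded to 3
    }

    int main() { std::printf("%d\n", count_sides()); }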

manch23 | 4 years ago

> Tail call removal. A recursive function that ends in a call to itself can often be rewritten as a loop, reducing call overhead and reducing the chance of stack overflow.

Most important: this optimization lets the resulting loop keep the CPU's pipeline full.

When people talk about a CPU executing an integer add in ~1 cycle, what they actually mean is that the add only achieves that effective cost when the CPU's pipeline is kept full.

If you have an 11-stage pipeline, an add can effectively cost ~11 cycles when the pipeline keeps stalling; you only see the ~1-cycle figure if you write the _right_ code for it.
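
To make the tail-call point concrete, a hedged sketch (names are illustrative): because the recursive call is the last thing the function does, GCC and Clang at -O2 generally rewrite it as a loop, so the additions stream through the pipeline without per-call overhead or stack growth.

    #include <cstddef>

    // Tail-recursive sum with an accumulator; the recursive call is in
    // tail position, so the compiler can turn it into a plain loop.
    long sum_tail(const int *v, std::size_t n, long acc = 0) {
        if (n == 0)
            return acc;
        return sum_tail(v + 1, n - 1, acc + v[0]);
    }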

fluffything | 4 years ago

That's a cool bag of tricks, but what really impresses me is when compilers start optimizing programs in the big-O sense.
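
They already do this in a few narrow cases. Clang's induction-variable analysis, for example, can collapse a simple counting loop into a closed-form expression, turning O(n) work into O(1); a hedged sketch (function name is illustrative):

    #include <cstdint>

    // Clang at -O2 is typically able to replace this loop with the
    // closed form x * (x - 1) / 2, i.e. constant time instead of linear.
    std::uint64_t triangular(std::uint64_t x) {
        std::uint64_t sum = 0;
        for (std::uint64_t i = 0; i < x; ++i)
            sum += i;
        return sum;
    }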

amelius | 4 years ago

Godbolt sounds like a top quality brand of quidditch broom :-D

mrlonglong | 4 years ago