[llvmlinux] Heavy experiments

Renato Golin renato.golin at linaro.org
Thu Jul 25 09:43:24 UTC 2013


On 25 July 2013 00:42, Marcelo Sousa <marceloabsousa at gmail.com> wrote:

> I think we have different perspectives here.
>

Precisely, and that's the source of confusion.

You seem to have read all the papers on LLVM. While that's a good source of
information, it only gives you the rather narrow view of a
not-so-experienced computer scientist on what LLVM should be. If you base
your assumptions on academic papers alone, you'll end up thinking that LLVM
has to be that perfect tool that works for whatever you want it to do.
But life is not that simple.

LLVM is a production compiler, like GCC, ICC, ARMCC, MSVC, TurboC, etc. It
compiles real-world code, kernels and userland (*BSD and Darwin at least),
and it's the default compiler of Apple's Xcode. It has hacks,
approximations, heuristics, pattern-matching and many other uncertainties
that would be almost impossible to rid the code of without penalising other
parts of the compiler. It won't happen.

To begin with, you must understand that the IR is *not* a lower-level
representation of any source language, but a higher-level representation of
assembly. Some of the things you propose assume the former.
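
To make that concrete, here's a tiny made-up function (nothing from your
code, just an illustration) of what the IR actually looks like: no classes,
no exceptions, no loop constructs, only SSA registers, typed instructions,
labels and branches, one small step above assembly:

  define i32 @max(i32 %a, i32 %b) {
  entry:
    ; compare, branch, return: the IR thinks in basic blocks and
    ; machine-like operations, not in source-language constructs
    %cmp = icmp sgt i32 %a, %b
    br i1 %cmp, label %bigger, label %smaller
  bigger:
    ret i32 %a
  smaller:
    ret i32 %b
  }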

You propose a clean, unambiguous IR with more annotations, which is great
for analysis, but not so much for optimizations. Since the IR was designed
as an optimization language, that clean-up won't happen; that's a fact.
Interestingly, most of what you propose has also been proposed by me and
others over the last few years, and we were all convinced that it's not
going to happen.

Each proposal faces the following obstacles:
 - Implementation: real-world code is hacky and hard to understand. You'll
spend a few days understanding the code, a few more making your changes,
and a lot more figuring out why your changes are not working.
 - Testing: compiling is only the first step. You'll have to run tests to
show other people that your change is sound AND doesn't break any
assumptions, and running check-all alone is not enough. You'll have to run
the test-suite, then send your patch to the list and wait for feedback (a
rough sketch of that loop is below, after this list). Assuming everyone is
happy with your patch the first time around (I doubt it), they will test it
on their own use-cases and come back to you with correctness and
performance concerns. This could take months.
 - Benchmarking: the test-suite is not good for benchmarking, and changing
the IR means some optimization passes will not understand it as well as
they did, which invariably means performance will take a hit. You'll have
to run benchmarks to prove that it didn't matter, and if it did, you'll
have to teach every optimization affected by your change to understand the
new syntax. Again, send it to the list and let people run their own
benchmarks. This could take years.
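
To give you an idea of what that testing loop looks like before a patch
even reaches the list, the bare minimum is something like this (paths and
build setup are mine, adapt them to your own tree; the exact commands
depend on how you build LLVM):

  # in the build directory of your patched LLVM/Clang (CMake + Ninja build)
  ninja check-all        # regression tests: necessary, nowhere near sufficient

  # then the full test-suite via LNT, using your patched clang
  lnt runtest nt --sandbox /tmp/sandbox \
      --cc /path/to/build/bin/clang \
      --test-suite /path/to/test-suite

And that's before anyone else has looked at correctness or performance on
their own targets.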

I say "you", not the community, because that's a change *you* want. You
can't submit a change that breaks lots of optimization passes and other
back-ends that you don't test (like SystemZ or PTX) and expect them to fix
the problems for you. Most likely they will revert your change when
anything happens to their buildbots or benchmarks.

Now, the first big question is: is it worth it for everyone?

To answer that, we have to go back to the original design, in which the IR
is an optimization and code-generation language, not an analysis helper. So
the answer is: no.

The second big question is: Do you have time for that?

I can't answer it, but I'm assuming you want to do your analysis, and that
this is more important than embarking on a very long journey that has
vanishingly small chances of success...


> Something like that. Can you give me a reference for PNaCl?
>
Google's first two hits for "PNaCl"... ;)

http://en.wikipedia.org/wiki/Google_Native_Client
http://www.chromium.org/nativeclient/pnacl/building-and-testing-portable-native-client


> Not just JIT but this (taken from Lattner's MSc thesis):
> "Some transformations are too expensive to perform at run-time, even given
> an efficient representation to work with. For these transformations, the
> run-time optimizer gathers profile information, serializing it to disk.
> When idle time is detected on the user's computer, an offline reoptimizer
> is used to perform the most aggressive profile-driven optimizations to the
> application."
> Where the hell is this functionality?!
>

PGO: profile-guided optimization. LLVM supports both PGO and LTO (link-time
optimization), and both work quite well.

PGO is when you build your code instrumented to dump profile information to
a file, then compile it again with that file telling the compiler which
paths are hot, so it can spend more effort on them and none on the cold
paths.
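
An instrumentation-based PGO cycle with clang looks roughly like this (file
names are just examples, and the exact flag names vary between compiler
versions):

  clang -O2 -fprofile-instr-generate foo.c -o foo     # instrumented build
  ./foo < training-input.txt                          # "training" run, writes default.profraw
  llvm-profdata merge -output=foo.profdata default.profraw
  clang -O2 -fprofile-instr-use=foo.profdata foo.c -o foo   # rebuild using the profile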

This is really good when your CPU is highly speculative and its
branch-prediction and instruction-reordering machinery can do wild things,
like the Cortex-A9. You only truly know how your workload performs when it
actually runs on real hardware.

Of course, that depends on the workload, and different inputs will generate
different hot/cold path patterns. So when you're "training" your compiler,
you must use a very representative sub-set of your data, or it'll make
matters worse. I don't need to say that choosing a "representative sub-set"
is an ill-defined problem... ;)


>> It is. http://llvm.org/docs/LangRef.html
>>
>
> That is hardly a formalization of its semantics.
>

I agree that the C++ standard is a lot more specific, but the standards
committee is paid to do that, while no one will ever be paid to formalise
the IR "language".

That is so because the IR *must* change to accommodate new back-ends, new
languages and new standards. Unfortunately, the LangRef is the closest
you'll ever get to a formalization. To compare with another real-world
case, see if you can get the *complete* formalization of the assembly
syntax for any major architecture (Intel, ARM, MIPS)... oh, and while
you're at it, ask them to give you every syntax variation their own
(proprietary) compiler will accept.


> In the Vellvm paper (POPL'12), they show some cases of uses of undefined
> that are very strange. It seems that once you have *undef* you can do any
> optimization whatsoever. I may be wrong but it feels to me that *undef* is
> trying to represent the undefined in any programming language.
>

That's correct. The front-end will tell you what's undef (based on the
language), so the back-end can do whatever it wants. LLVM might
automatically detect undefs during constant folding, or when comparing
signed and unsigned values, but I don't know.

"undef" also taints operations, so that a chain of operations that ends up
in an undef can be completely removed. So, for instance, if the return
value of a function is undef, and the function has no side-effects (other
calls, access to global variables), the compiler is free to remove the
function altogether.
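
A tiny made-up example of that tainting effect:

  define i32 @f(i32 %x) {
    ; %x + undef can be any value at all, so the optimizer is free to
    ; fold %y (and with it the whole body) down to "ret i32 undef"
    %y = add i32 %x, undef
    ret i32 %y
  }

Run that through an optimization pass like instcombine and the add
disappears; a caller that sees no side effects behind the call can then
throw the call away as well.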


>> My hunch is that you'll need heuristics to control false positives without
>> introducing false negatives, and that's not a precise analysis.
>>
>
> That's incorrect. You can reduce false positives while maintaining
> soundness, which is exactly increasing precision.
>

Heisenberg would beg to differ... ;)

Real-world complex code, especially a compiler, behaves a lot like a
quantum state. Once you go down the path of heuristics and
pattern-matching, there is no room for determinism, and compilers *must*
use heuristics to compile complex languages, with all their quirks, to
real-world hardware, with all its bugs.

Having said that, I'd love it if you could prove me wrong! ;)

cheers,
--renato