[FoRK] The collapse of the .net ecosystem

Stephen D. Williams sdw at lig.net
Mon Jun 22 13:25:06 PDT 2015

On 6/22/15 10:32 AM, J. Andrew Rogers wrote:
>> On Jun 21, 2015, at 11:32 AM, Stephen D. Williams <sdw at lig.net> wrote:
>> One thing that C++ needs, which I think the LLVM guys are working on, is global and peephole run-time performance analysis and JIT replacement code.
> Why does C++ “need” this? This optimization is primarily useful for JIT compilation environments and C++ is not a JIT environment typically. It is the kind of thing you do in languages that are difficult to properly profile/optimize due to their design.

Static analysis is very limiting.  A compiler can only conclude so much without seeing actual numbers from real runs on real 
data, and those numbers can change over time.  JIT systems are required for languages that are scripted or otherwise dynamic, but they 
also have the opportunity to do more optimization than static optimizers because they have more information.  I remember having that 
thought a while ago when reading about current JavaScript and Java JIT methods.  I think I may also have heard something about it at 
the LLVM conference in SF from a year or two ago.

>> And we always need far more intelligence for SIMD and other parallelism.
> Never going to happen. A compiler can turn certain local code idioms into SIMD and parallel versions because the intent and safety is clear. It cannot fix programmers using non-parallelizable code idioms or non-parallelizable architecture. It is a training issue more than a compiler issue. With AVX-512 being released in a few months, knowing how to write inherently vectorizable code is going to be a much more valuable skill for programmers and very few have it.

A compiler is never going to be as good as humans at this, but it can get further than it has.  They've already made some interesting gains.

> Compilers will never solve parallelism until there is a “completely rewrite my crap software design to support parallelism” flag. It would require the compiler to infer the intent of the entire code base rather than just analyzing the code base as presented.
> I think we can safely categorize automagic compiler code parallelization as “AI complete”. A compiler can’t parallelize the code of a programmer that is completely ignorant of parallelism because the compiler still requires the programmer to convey their intent for code behavior in a parallel context.

Yes, to be as good as or better than a programmer who understands parallelism, true.  But there are ways to get closer than we have so far.

