SIMD City: Auto-Vectorisation (xania.org)
60 points by brewmarche 1 day ago | 19 comments




Auto-vectorization is consistently one of the least predictable optimization passes, which is painful: when it fails to trigger, your functions are suddenly >3x slower. This drives people to more explicit SIMD coding, from direct assembly as in FFmpeg to wrappers providing cross-platform support like Google's Highway.

It's just really hard to detect and exploit profitable and safe vectorization opportunities. The theory behind some of the optimizers is beautiful, though: https://en.wikipedia.org/wiki/Polytope_model


In most of the cases I've seen where people felt the need for intrinsics, GCC will vectorize the code -- at least if it's allowed to use the same potentially-incorrect semantics as the intrinsics version -- and potentially for multiple micro-architectures with GCC's target_clones attribute. GCC's -fopt-info-* flags can give you a lot of information on vectorization and other optimizations, if couched in somewhat compiler-internal jargon, and other compilers probably offer something similar. Vectorizing compilers have existed for 50-ish years, so it's well-established stuff.
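For anyone who hasn't tried those two features, a minimal sketch (the saxpy function is my own illustration; target_clones and the -fopt-info flags are real GCC features):

    // Illustrative sketch. GCC emits one clone of this function per
    // listed ISA and dispatches to the best one at load time via IFUNC.
    #include <cstddef>

    __attribute__((target_clones("default", "avx2", "avx512f")))
    void saxpy(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];  // plain loop; the vectoriser handles it
    }

    // g++ -O3 -fopt-info-vec-optimized -c saxpy.cpp  reports what got
    // vectorised; -fopt-info-vec-missed shows what the vectoriser gave up on.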

I’m always shocked at what the compiler is able to deduce wrt vectorization. When it works, it’s magical.

In the abstract, it's the inverse of the argument that "configuration formats should be programming languages"; the more general something can be, the less you can assume about it.

A way to express the operations you want, without unintentionally expressing operations you don't want, would be much easier to auto-vectorise. I'm not familiar enough with SIMD to give examples, but if a transformation would preserve the operations you want yet be observably different from what you coded, I assume it's not eligible (unless you enable flags that let the compiler produce code that's not quite what you wrote).


That's very much an issue with SIMD, especially where floating point numbers are concerned.

Matt Godbolt wrote about it recently.

https://xania.org/202512/21-vectorising-floats

TL;DR: mathematical notation and language semantics specify a particular order in which floating point operations happen, and the precision limits of the IEEE float representation mean that order has to be honoured by default.

Allowing compilers to reorder things in breach of that contract is an option, but it comes with risks.
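To make that concrete with a sketch (C++; the flag names are GCC/Clang's): a plain summation loop pins down a strict left-to-right order, and since float addition isn't associative, the vectoriser can't split it into per-lane partial sums without permission:

    #include <cstddef>

    float sum(const float* a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i];  // semantics: (((0 + a[0]) + a[1]) + a[2]) + ...
        return s;
    }
    // Vectorising means keeping e.g. 8 partial sums and combining them at
    // the end -- a different rounding order, hence a (slightly) different
    // result. Compilers only do it under -ffast-math, or the narrower
    // -fassociative-math (which in GCC also needs -fno-signed-zeros and
    // -fno-trapping-math to take effect).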


I like that Zig allows using relaxed floating point rules at per-block granularity, to reduce the risk of breaking something elsewhere that does need IEEE compliance. I think OpenMP simd pragmas can be used similarly for C/C++, but that's non-standard.
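For reference, the OpenMP variant looks roughly like this (a sketch; `#pragma omp simd` with a reduction clause licenses the reassociation for this one loop only):

    #include <cstddef>

    float dot(const float* a, const float* b, std::size_t n) {
        float s = 0.0f;
        // Only this loop may be reassociated and vectorised; the rest of
        // the translation unit keeps strict semantics. Compile with
        // -fopenmp, or -fopenmp-simd to honour just the simd pragmas
        // (GCC/Clang).
        #pragma omp simd reduction(+ : s)
        for (std::size_t i = 0; i < n; ++i)
            s += a[i] * b[i];
        return s;
    }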

You can do the same thing with types or the wide crate. But it isn't always obvious when it will become a problem. Using these types does make auto-vectorization fairly reliable.

Fortran requires compilers to “honor the integrity of parentheses” but otherwise doesn’t restrict compilers from rearranging expressions. Want a specific order of operations and rounding? Use parentheses to force them. This is why you’ll sometimes see parens around operations that already have arithmetic precedence, like `(x*x) - (y*y)`, to prevent the use of FMA for one of the multiplications but not the other.
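The C/C++ analogue is to make the contraction choice explicit (a sketch; std::fma is standard, and -ffp-contract=off is GCC/Clang's switch for implicit contraction):

    #include <cmath>

    // (x*x) - (y*y) evaluated two ways. A contracting compiler may fuse
    // one of the multiplies into an FMA, skipping a rounding step and
    // changing the result for some inputs.
    double diff_plain(double x, double y) {
        return (x * x) - (y * y);   // two rounded multiplies, one subtract
    }

    double diff_fused(double x, double y) {
        // y*y is rounded once; x*x feeds the fma unrounded, then one
        // final rounding -- what implicit FMA contraction would produce.
        return std::fma(x, x, -(y * y));
    }
    // Use -ffp-contract=off if diff_plain must stay exactly as written.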

It seems that proper vectorization requires a different kind of language, something similar to CUDA and the like, not a general purpose scalar kind of language.

I remember Intel had something like it but it went nowhere.


That would be ispc, the Intel SPMD Program Compiler.

You don't want "vectorization" though, you either want

a) a code generation tool that generates exactly the platform-specific code you want and can't silently fail.

b) at least a fundamentally vectorized language that does "scalarization" instead of the other way round.
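For a flavour of (a), a minimal sketch with x86 SSE intrinsics (assumes n is a multiple of 4 to keep it short; real code handles the tail):

    #include <immintrin.h>
    #include <cstddef>

    // Adds two float arrays four lanes at a time. Nothing here can
    // silently fall back to scalar code: you get exactly these instructions.
    void add_f32(const float* a, const float* b, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
    }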


Fortran calling...

I am quietly waiting for the "bitter lesson" to hit compilers: a large language model that speaks in LLVM IR tokens, takes unoptimized IR from the frontend, and spits out an optimized version that works better than any "classical" compiler.

The only thing that might stand in the way is a dependence on reproducibility, but it seems like a weak argument: we already have a long history of people trying to push build reproducibility, and for better or worse they never got traction.

Same story with LTO and PGO: I can't think of anyone other than browser and compiler people who are using either (and even they took a long time before they started using them). Judged to be more effort than it's worth, I guess.


The major constraint is that the compiler needs to guarantee that transformations produce semantically identical results to the unoptimized code, with the exception of undefined behavior or specific opt-outs (e.g. the `-ffast-math` rules).

An ML model can fit into existing compiler pipelines anywhere that heuristics are used though, as an alternative to PGO.


Us video game folks are big fans of LTO, PGO, FDO, etc.

Indeed we are. I wish we interacted with the other industries more. There is a lot to learn from video game development where we are driven by soft real-time constraints.

Alas the standards committee is always asking for people like us to join but few of our billion dollar companies will pony up any money. This is despite many of them having custom forks of clang that they maintain.


There is a low-latency study group at the C++ standards committee, but most of the proposals coming from there were new libraries of limited value to the standard at large.

There is a large presence from the trading industry, less from gaming but you still see a lot of those guys.


Fedora, for instance, is built with LTO, except for some packages which it breaks. I've forgotten the details of where I had to turn it off.

How's it going in the other direction - LLMs as disassemblers?

I tried it a year or so back and was sorta disappointed at the results beyond simple cases, but it feels like an area that could improve rapidly.


You don't necessarily need to lay out your data in arrays to use SIMD, though it certainly makes things more straightforward.
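For instance (a sketch, x86 SSE assumed, the struct is hypothetical): you can pack one field from an array-of-structs into a register by hand; it works, it's just clumsier and slower than a contiguous load from a flat array:

    #include <immintrin.h>

    struct Particle { float x, y, z, mass; };

    // Gathers the mass field of four particles. _mm_set_ps takes lanes
    // high-to-low, and compiles to scalar loads plus shuffles rather
    // than one contiguous vector load.
    __m128 masses(const Particle* p) {
        return _mm_set_ps(p[3].mass, p[2].mass, p[1].mass, p[0].mass);
    }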


