I was looking further into this and found the following discussion: https://discourse.llvm.org/t/x86-finalizing-svml-support-in-llvm/70977
It covers the effort to upstream Intel's internal patches enabling SVML support in Clang, which seems to have stalled.
Does anyone know what kinds of scenarios or performance wins this provides? Also, is this expected to work through auto-vectorization, or does the user still need to invoke SVML intrinsics manually?
I’m wondering how to try this out internally in Numba and potentially other services, and see whether the gains are enough to help push the upstream patches forward.
LLVM does have support for generating calls to SVML via -fveclib=svml: https://clang.llvm.org/docs/ClangCommandLineReference.html#cmdoption-clang-fveclib
Are there additional features enabled by the downstream patches? Reading through the LLVM-14 patch (https://github.com/Hardcode84/llvm-project/commit/9de32f5474f1f78990b399214bdbb6c21f8f098e#diff-0d6db65ec90b08444da63b5f4ece0baa276a8bfbacd2841a2df5d20688d40334), it seems to be mostly on the legality side. If that’s the case, it sounds more like a bug than a missing feature.
Also, side note: it looks like I can’t post links directly, so that’s why they’re in code blocks.
What we remember is that it is mainly about legality: Intel’s SVML library uses a different calling convention. (CC @Hardcode84)
The Discourse trust-levels configuration confuses me; new users should be able to post up to 2 links. I added *.llvm.org as a trusted domain, which will hopefully help.