Suggest an example of using an LLVM IR function as an @intrinsic

Hi! For example, I have a C/C++ function

extern "C" double Add(double a, double b) { return a + b; }

which can be compiled to LLVM IR textual assembly using the command

clang a.cpp -c -O3 -m64 -emit-llvm -S

which produces a.ll containing this function:

define dso_local double @Add(double %a, double %b) local_unnamed_addr #0 {
entry:
  %add = fadd double %a, %b
  ret double %add
}

Can I use this function's assembly inside Numba's @intrinsic? My main goal is to have this function's body inlined and optimized into the njit function's code, so that if I use my LLVM IR function from above in an njit-ed function, the whole code of the njit-ed function and my IR function is mixed and optimized together as a whole.

I see there is an example of compiling LLVM IR into a cfunc here: Example — compiling a simple function — llvmlite 0.37.0-dirty documentation

But this cfunc solution will have the overhead of a CALL instruction if I use it from njit. In other words, it is not optimized and not inlined together with the njit-ed function it is called from.
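To be concrete, the cfunc-style approach I mean is roughly the following sketch, based on the llvmlite MCJIT example (variable names like `ir_text` and `cfunc` are mine). The ctypes wrapper goes through a regular CALL into JIT-compiled code, which is exactly the overhead I would like to avoid:

```python
import ctypes
import llvmlite.binding as llvm

# One-time LLVM initialization for native code generation
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

# The IR body from a.ll, stripped of clang-specific attributes
ir_text = r"""
define double @Add(double %a, double %b) {
entry:
  %add = fadd double %a, %b
  ret double %add
}
"""

# Parse, verify, and JIT-compile the module with MCJIT
target = llvm.Target.from_default_triple()
tm = target.create_target_machine()
mod = llvm.parse_assembly(ir_text)
mod.verify()
engine = llvm.create_mcjit_compiler(mod, tm)
engine.finalize_object()

# Wrap the compiled function pointer with ctypes: this is an
# out-of-line CALL, not inlined into any njit-ed caller
addr = engine.get_function_address("Add")
cfunc = ctypes.CFUNCTYPE(ctypes.c_double,
                         ctypes.c_double, ctypes.c_double)(addr)
print(cfunc(1.5, 2.5))  # 4.0
```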

I need a minimal but fully working example of how to embed (wrap) LLVM IR textual assembly in an intrinsic function and then use this intrinsic inside an njit-ed function.


I have already received two replies to this topic's question on GitHub, here https://github.com/numba/numba/issues/7371#issuecomment-929027456 and here https://github.com/numba/numba/issues/2795#issuecomment-472013539