How to export Numba IR instead of LLVM IR

Hi Numba experts,

My team at Google is looking into ways to serialize a Python function used during ML training and port it to serving time (for example, pushing a preprocessing function in serialized form to the serving engine).

Does anyone know if there is an easy way to export the Numba IR of a Numba-jittable Python function?

We are asking because our team is considering porting this IR to another runtime and potentially running it in a pure C++ runtime at ML serving time.

Thanks!

Looks like setting numba.config.ANNOTATE is one way to peek at the generated Numba IR:

from numba import config, njit

# Dump the type-annotated Numba IR for every function compiled from here on
# (equivalent to setting the NUMBA_DUMP_ANNOTATION environment variable)
config.ANNOTATE = 1


# The explicit signature triggers eager compilation at decoration time,
# so the annotated IR is printed immediately
@njit('void(float32[::1], int32)')
def function_to_lower(A, n):
    i = 0
    while i < n:
        A[i] = i
        i += 1

Ref: IR is not SSA · Issue #9802 · numba/numba · GitHub

Not sure if there is a way to do it programmatically, though.

Looks like there’s been no reply for a while.

For now, I will assume there is no standard, supported API to obtain Numba IR directly without reaching into internals.