My team at Google is looking into ways to serialize a Python function used during ML training and port it to serving time (e.g. pushing a preprocessing function in serialized form to the serving engine).
Does anyone know if there is an easy way to export the Numba IR of a Numba-jittable Python function?
We're asking because our team is considering porting this IR to another runtime, potentially running it in a pure C++ runtime at ML serving time.