Using the Cache

I am trying to exploit some of the new numba features to speed up solver instantiation in numbakit-ode.

Briefly, I have a function (_step) that takes another function (rhs). I have njitted both functions and used FunctionType to avoid recompiling _step when a different rhs function is given.
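
The pattern, roughly, is the one sketched below (the names and the explicit Euler step are only illustrative, not the actual numbakit-ode code):

import numba as nb
from numba import float64, types

rhs_sig = types.FunctionType(float64(float64, float64))

@nb.njit(float64(rhs_sig, float64, float64, float64))
def _step(rhs, t, y, h):
    # a single explicit Euler step, just to keep the sketch short
    return y + h * rhs(t, y)

@nb.njit(float64(float64, float64))
def rhs1(t, y):
    return -y

@nb.njit(float64(float64, float64))
def rhs2(t, y):
    return t * y

_step(rhs1, 0.0, 1.0, 0.01)  # compiles _step once
_step(rhs2, 0.0, 1.0, 0.01)  # reuses the same compiled _step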

When trying to cache the compilation, I get:

NumbaWarning: Cannot cache compiled function “_step” as it uses dynamic globals (such as ctypes pointers and large global arrays)

I am not sure if this is because caching functions that use FunctionType is not supported, or due to some other aspect of my _step function, which contains constant arrays, but these are not that large (~10 elements).

A small update: I built a very simple example and the cache seems to work. So I guess the problem is in one of the layers of my code. Is there any way to get a more informative warning (i.e. which dynamic global is causing the trouble)?
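
For reference, the very simple example is roughly along these lines (a hand-written sketch, not the actual code from my library), and it seems to cache fine:

import numba as nb
from numba import float64, types

func_sig = types.FunctionType(float64(float64))

@nb.njit(float64(float64), cache=True)
def double(x):
    return 2.0 * x

@nb.njit(float64(func_sig, float64), cache=True)
def apply_func(func, x):
    return func(x)

apply_func(double, 3.0)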

Hi @hgrecco !

I ran into this issue a month ago. I posted some simple reproducers and @stuartarchibald was kind enough to pinpoint why this still doesn’t work in all cases. The TL;DR is that only functions small enough to be inlined will work.

Sincerely,
Caleb

Thanks @CalebBell for the link. I certainly hoped that Cannot cache functions with callable arguments · Issue #6251 · numba/numba · GitHub was going to be a game changer. I have played with the implementation and it does work in certain cases.

I do not think that exposing the LLVM flags is the right way to go. It seems to me that it would lead to a fragile and inconsistent user experience. There is a path that I think could work when an explicit signature is provided: do not inline the target of any argument typed as FunctionType. Example:

import numba as nb
from numba import float64, types

func_sig = types.FunctionType(float64(float64))

@nb.njit(float64(func_sig, float64))
def step(func, x):
    # I keep it simple here for demonstration purposes
    # but this function could be as complex as required
    # by the algorithm.
    return func(x)

In this case, step will be compiled but func will never be inlined. Therefore, storing step in the cache should be possible.
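
For instance, with the step function above (hypothetical usage; nothing here is implemented in Numba yet, this is only how I imagine it would be used):

@nb.njit(float64(float64))
def square(x):
    return x * x

@nb.njit(float64(float64))
def cube(x):
    return x * x * x

step(square, 2.0)  # -> 4.0
step(cube, 2.0)    # -> 8.0, reusing the same compiled step; under this proposal it could also be cached safely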

best,

Hernán

Ps.- In numbakit-ode I implemented numba-compatible newton and bisect methods. I was planning at some point to move them out to a numbakit-optimize package, but if you have something along these lines already, I would be happy to use it or contribute.
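
To give an idea of what I mean by numba compatible, the newton one is, in spirit, something like this (a simplified sketch, not the actual numbakit-ode implementation):

import numba as nb

@nb.njit
def newton(func, dfunc, x0, tol=1e-10, maxiter=50):
    # plain Newton-Raphson; func and dfunc are expected to be njitted callables
    x = x0
    for _ in range(maxiter):
        fx = func(x)
        if abs(fx) < tol:
            return x
        x = x - fx / dfunc(x)
    return x

@nb.njit
def f(x):
    return x * x - 2.0

@nb.njit
def df(x):
    return 2.0 * x

newton(f, df, 1.0)  # ~ sqrt(2)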

Hi Hernán,

Your strategy sounds really good to me. I only proposed inlining everything out of a lack of creativity.
For all my applications I’m able to provide full signatures for every call, although I presently don’t apply them because that forces numba to compile those functions right away. It takes tens of minutes to compile everything in my libraries, mostly because of a few functions that seem to compile slowly due to their use of strings.
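
For anyone following along, the trade-off is the usual one: giving njit a signature makes it compile eagerly at decoration (import) time, while a bare njit defers compilation to the first call. A toy illustration (not code from my libraries):

import numba as nb
from numba import float64

@nb.njit(float64(float64))
def eager(x):
    # compiled as soon as the module is imported
    return x + 1.0

@nb.njit
def lazy(x):
    # compiled only on the first call, for the argument types seen then
    return x + 1.0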

I’m so glad you are working on numbakit-ode. It’s tough to get good performance out of scipy.integrate. I only have a few places that integrate right now, but there are plenty more applications in the future!

I am not particularly proud of the numerical methods I have implemented. I coded them originally to be fast with PyPy, and then was able to coerce them into working with numba. I am not scared to touch them, but I am definitely not looking to share them widely with the world.

I do know Yoel Cortes is pretty proud of his collection of solvers, GitHub - yoelcortes/flexsolve: Flexible function solvers, and he might be more interested in maintaining a numba-compatible solver library!

Thank you again for all your incredible work. Pint is still so cool to use!

Sincerely,
Caleb

Hi Caleb,

Indeed, compilation takes too long. It would be fine to do it once, but right now it makes certain libraries difficult to use. I think my suggestion (or something similar, such as having a NoInlineFunctionType) could work. Let’s see what the numba devs think of this approach. I think it makes sense, but I am not sure if it can be easily implemented in Numba.

By the way, I have developed numbakit-anjit as a way to simplify using signatures with numba. Briefly, it can build and apply a signature from Python annotations, use another function's signature as the signature (or part of it, e.g. the return type, the type of an argument, etc.), and register certain signatures for later reuse.
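
The core idea behind the annotation part is roughly the following (only a hand-rolled sketch of the concept; this is not the numbakit-anjit API):

import numba as nb
from numba import float64, int64

_NUMBA_TYPES = {float: float64, int: int64}

def sig_from_annotations(pyfunc):
    # build a numba signature from the function's Python annotations
    hints = dict(pyfunc.__annotations__)
    ret = _NUMBA_TYPES[hints.pop("return")]
    args = tuple(_NUMBA_TYPES[t] for t in hints.values())
    return ret(*args)

def f(x: float, n: int) -> float:
    return x * n

jitted = nb.njit(sig_from_annotations(f))(f)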

Best,

Hernán

Ps.- Thanks for your comments about Pint! My work on Pint, PyVISA and other open source projects has been a great opportunity to contribute back and meet fantastic people.

@hgrecco RE:

I do not think that exposing the LLVM flags is the right way to go. It seems to me that it would lead to a fragile and inconsistent user experience. There is a path that I think could work when an explicit signature is provided: do not inline the target of any argument typed as FunctionType.

Any chance you could open a ticket on the issue tracker about this please? It may be that this is a way through and needs discussion! It’s certainly possible to prevent inlining of functions; Numba already does this internally to create a consistent optimisation experience regardless of whether there’s a “wrapper” around your compiled function (e.g. a special wrapper so Python can call your function).

Thanks!

Done: Wrapper or type to avoid inlining · Issue #6972 · numba/numba · GitHub

@hgrecco Many thanks for opening the issue!