Hi all, I’m super new to numba and compiled code and am trying to make this library AOT compiled, since it currently takes a minute to import: compiling ahead of time (remove jit) · Issue #17 · aquacropos/aquacrop · GitHub
Some of the functions in solutions.py depend on many user-supplied parameters that are currently stored in jitclasses. jitclasses aren’t supported with AOT though, so I’m not sure what to convert these to. Is the solution to create as many typed function arguments as there are parameters for the model? Have others found solutions here? The ideal case would be something like jitclasses that can be AOT compiled.
Any tips are super appreciated!
The @njit decorator is lazy; if import time is high due to compilation, something must be triggering that compilation at import. Perhaps consider working out what is causing this behaviour first?
RE: AOT compilation, I’d first try reorganising the code so that functions consume plain data (not jitclass instances), and then switch on caching with @njit(cache=True). When caching is on, compilation occurs the first time the function is run, but running it again from a new process etc. results in a replay from the cache, which should avoid the compilation cost.
It’s also worth noting that AOT-compiled libraries are built with a relatively minimal instruction set and set of optimisations; this is to increase their portability, whereas JIT-compiled and cached functions are optimised for the user’s machine.
RE jitclass and AOT: as noted, this isn’t supported yet, so you’d have to pass the class members as separate arguments to the function.
Hope this helps.
A further thought: I think a namedtuple would permit packing the data from e.g. a jitclass into a single container while retaining some of the attribute-access semantics, and it also works with caching and AOT.
```python
from numba.pycc import CC
from numba import types
from collections import namedtuple

cc = CC('my_module')
nt = namedtuple('nt', 'x,y')

@cc.export('mult_tup', types.int64(types.NamedUniTuple(types.int64, 2, nt)))
def mult_tup(tup):
    return tup.x * tup.y

if __name__ == "__main__":
    cc.compile()
```
Note that functions that take jitclass instances cannot be cached.
Thanks for these tips! We are going to explore using named tuples so that we have the option to use caching or AOT if needed.
Got it, thank you for this tip!
A namedtuple can be cached, but caching means pickling, and that can sometimes take a bit of care to make work correctly. You’ll get plenty of hits googling ‘namedtuple pickle’.
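The usual gotcha is that pickle locates a namedtuple class by its importable name, so the class must be defined at module level. A minimal stdlib-only sketch of the failure mode (`make_point` is just an illustration):

```python
import pickle
from collections import namedtuple

def make_point():
    # Defined inside a function, this class can't be looked up by
    # name at unpickling time, so dumping its instances fails.
    Point = namedtuple('Point', 'x y')
    return Point(1, 2)

try:
    pickle.dumps(make_point())
    print('pickled OK')
except (pickle.PicklingError, AttributeError):
    print('pickling failed: define the namedtuple at module level')
```

Defining the namedtuple at module level (so that `pickle` can import it by name) avoids this.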