I’m embarking on an effort to speed up a long-running scipy.optimize code, and hoping to use numba to help. The optimizers allow passing additional arguments to the model function. The model depends on quite a few externally configured scalars and arrays, and I am hoping to package those up into a structured object, like a dict or namedtuple (I see dataclass is not yet supported), to pass into my njit’d model function.
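For context, the overall call pattern I have in mind is roughly the following (the model, data, and parameter names here are just placeholders, not my actual code):

```python
import numpy as np
from numba import njit
from scipy.optimize import minimize

@njit
def model(x, coeffs, grid):
    # placeholder objective: x holds the fitted parameters,
    # coeffs and grid stand in for the externally configured arrays
    return np.sum((x[0] * grid + x[1] - coeffs) ** 2)

coeffs = np.random.rand(100)
grid = np.linspace(0.0, 1.0, 100)
x0 = np.array([1.0, 0.0])

# scipy forwards everything in `args` to the njit'd objective on each call
result = minimize(model, x0, args=(coeffs, grid))
```

Passing each scalar and array as its own positional argument like this works, but with dozens of them it gets unwieldy, which is why I want a single structured argument.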
As a crude analogy, an argument to a numba-compiled function might look like:
param_map = {"arr1": some_array1,
             "arr2": {"upper": some_array2, "lower": some_array3},
             "arr3": {"val": 1.0, "other": some_array4}}
i.e. a nested dict of scalars and arrays, where the some_array values can be 1D numpy arrays or arrays of numpy structured scalars.
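One packaging I have considered is a (nested) namedtuple of arrays and scalars, since numba seems to accept namedtuple arguments in nopython mode; something along these lines, with made-up field names and shapes:

```python
from collections import namedtuple
import numpy as np
from numba import njit

# stand-ins for the externally configured data
some_array1 = np.arange(5, dtype=np.float64)
some_array2 = np.ones(5)
some_array3 = np.zeros(5)
some_array4 = np.full(5, 2.0)

Bounds = namedtuple("Bounds", ["upper", "lower"])
Params = namedtuple("Params", ["arr1", "arr2", "arr3_val", "arr3_other"])

params = Params(arr1=some_array1,
                arr2=Bounds(upper=some_array2, lower=some_array3),
                arr3_val=1.0,
                arr3_other=some_array4)

@njit
def model(x, p):
    # attribute access on the (nested) namedtuple inside nopython mode
    return np.sum(x * p.arr1) + np.sum(p.arr2.upper - p.arr2.lower) + p.arr3_val

print(model(np.ones(5), params))
```

But I’m not sure whether this is the intended approach, or how well it scales to dozens of fields.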
Note that param_map is not constant: it is initialized outside of numba before being passed into the njit’d model function via scipy, and some of its individual values can be updated during the run.
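Concretely, the kind of update I mean looks roughly like this between optimizer runs (values purely illustrative):

```python
import numpy as np

some_array1 = np.zeros(10)
some_array4 = np.ones(4)

param_map = {"arr1": some_array1,
             "arr3": {"val": 1.0, "other": some_array4}}

# between runs, some entries are refreshed from the external configuration
param_map["arr1"][:] = np.random.rand(10)   # array contents overwritten in place
param_map["arr3"]["val"] = 2.5              # scalar entry rebound
```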
Is there a recommended best practice for this situation, where you have dozens of individual scalars and numpy (structured) arrays you need to pass in as an argument to a numba-njit’d function?