Hey @nelson2005 ,
let’s start with two simple examples which show the potentially beautiful effects (if intended) or horrible effects (if unintended) and continue from there.
There could be functions with several default or omitted arguments. Just 3 default parameters, each of which can be typed as int64, Omitted(None), or none, already lead to 3³ = 27 function variations.
import numba as nb

# Three optional parameters, each typed as int64, Omitted(None), or none:
# 3**3 = 27 explicit signatures.
@nb.njit([
    'int64(int64, int64, int64)',
    'int64(int64, int64, Omitted(None))',
    'int64(int64, int64, none)',
    'int64(int64, Omitted(None), int64)',
    'int64(int64, Omitted(None), Omitted(None))',
    'int64(int64, Omitted(None), none)',
    'int64(int64, none, int64)',
    'int64(int64, none, Omitted(None))',
    'int64(int64, none, none)',
    'int64(Omitted(None), int64, int64)',
    'int64(Omitted(None), int64, Omitted(None))',
    'int64(Omitted(None), int64, none)',
    'int64(Omitted(None), Omitted(None), int64)',
    'int64(Omitted(None), Omitted(None), Omitted(None))',
    'int64(Omitted(None), Omitted(None), none)',
    'int64(Omitted(None), none, int64)',
    'int64(Omitted(None), none, Omitted(None))',
    'int64(Omitted(None), none, none)',
    'int64(none, int64, int64)',
    'int64(none, int64, Omitted(None))',
    'int64(none, int64, none)',
    'int64(none, Omitted(None), int64)',
    'int64(none, Omitted(None), Omitted(None))',
    'int64(none, Omitted(None), none)',
    'int64(none, none, int64)',
    'int64(none, none, Omitted(None))',
    'int64(none, none, none)'])
def foo(a: int = None, b: int = None, c: int = None) -> int:
    a = a or 0
    b = b or 0
    c = c or 0
    return a + b + c
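As a side note, such a signature list does not need to be written out by hand; it could be generated programmatically, for example with itertools.product (a small sketch using only the standard library and the same three per-argument type spellings as above):

from itertools import product

# The three possible spellings of each optional argument.
arg_types = ['int64', 'Omitted(None)', 'none']

# Cartesian product over the three arguments: 3**3 = 27 signatures.
signatures = [f'int64({a}, {b}, {c})' for a, b, c in product(arg_types, repeat=3)]
assert len(signatures) == 27

That keeps the decorator readable, but it does not reduce the number of compiled variants.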
Or there could be functions with more complex data types or type containers. Here the parent data type np.number covers 10 or even more concrete base types (unsigned and signed integers plus floats). Again, this leads to multiple function variations.
import numpy as np

# One 'T(T)' signature per concrete dtype covered by np.number.
@nb.njit([
    'uint8(uint8)',
    'uint16(uint16)',
    'uint32(uint32)',
    'uint64(uint64)',
    'int8(int8)',
    'int16(int16)',
    'int32(int32)',
    'int64(int64)',
    'float32(float32)',
    'float64(float64)'])
def bar(a: np.number) -> np.number:
    return a + 1
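The same trick works here; the per-dtype signatures could be generated from a list of the concrete numeric types (again just a sketch, assuming these ten dtypes are the ones of interest):

numeric_types = ['uint8', 'uint16', 'uint32', 'uint64',
                 'int8', 'int16', 'int32', 'int64',
                 'float32', 'float64']

# One 'T(T)' signature per concrete dtype.
signatures = [f'{t}({t})' for t in numeric_types]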
These Python functions look like simple, innocent cases, but they generate a huge number of function variations.
Pros:
Defining explicit function signatures for all relevant input combinations leads to a significant number of signature declarations. Inferring types from Python annotations could reduce the need for explicit signatures and make the code easier to maintain.
The use case would probably be ahead-of-time compilation or cached code used in a package or library.
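For comparison, when ahead-of-time or cached distribution is not the goal, Numba’s lazy compilation already infers the concrete types from the arguments at the first call, so no signatures are needed at all (a minimal sketch):

import numba as nb
import numpy as np

@nb.njit(cache=True)  # no explicit signatures; types are inferred at the first call
def baz(a):
    return a + 1

baz(np.float64(1.0))  # compiles (and caches) a float64 specialization
baz(np.int32(2))      # compiles (and caches) an int32 specialization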
Cons:
However, as you pointed out, there are scenarios where inferring types might not be straightforward. For example, with type containers or multiple default parameters, the code generated by Numba could become complex and may lead to instability and long compilation times. It’s essential to consider how automatic type inference would handle such cases.
Another challenge to consider is how (or if) Numba would handle variable-length argument lists (*args) and keyword argument dictionaries (**kwargs) when inferring types.
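To see why this is hard, note that in standard Python typing an annotation on *args or **kwargs only describes the type of each element, not how many arguments are passed, so a single annotated definition still corresponds to an unbounded family of concrete signatures (pure-Python illustration, not Numba code):

def total(*args: float, **kwargs: float) -> float:
    # '*args: float' only states that every positional argument is a float;
    # total(), total(x), total(x, y), ... remain distinct concrete signatures.
    return sum(args) + sum(kwargs.values())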
It’s worth noting that Numba’s jitclass already supports the inference of fields from Python type annotations: "Fields of a jitclass can also be inferred from Python type annotations." The quote is from the documentation chapter Compiling Python classes with @jitclass.
This suggests that Numba has the capability to perform type inference from annotations, at least for basic types in certain contexts.
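For reference, the annotation-based jitclass form looks roughly like this (a minimal sketch along the lines of the documented example; plain int/float annotations are mapped to Numba’s default machine types):

from numba.experimental import jitclass

@jitclass
class Counter:
    value: int  # field type inferred from the annotation

    def __init__(self):
        self.value = 0

    def increment(self) -> int:
        self.value += 1
        return self.value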
The extension of this functionality to function arguments could be an option depending on the use case.
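As a thought experiment, something similar can already be emulated in user code by translating a function’s annotations into an eager signature. The helper below and its type mapping are purely hypothetical, not part of Numba, and only cover trivial scalar annotations:

import numba as nb
from typing import get_type_hints

# Hypothetical mapping from Python annotations to Numba types (an assumption;
# one could just as well choose int32/float32 or platform-dependent defaults).
PY_TO_NB = {int: nb.int64, float: nb.float64, bool: nb.boolean}

def njit_from_annotations(func):
    """Build an eager signature from simple scalar annotations and jit the function."""
    hints = get_type_hints(func)
    return_type = PY_TO_NB[hints.pop('return')]
    arg_types = [PY_TO_NB[t] for t in hints.values()]
    return nb.njit(return_type(*arg_types))(func)

@njit_from_annotations
def add(a: int, b: int) -> int:
    return a + b

This does not solve the default-argument or container cases above; it only shows where annotation-based inference for plain scalar arguments could start.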